
Interius: Agentic API Generator

An AI-powered platform that automates the entire software development lifecycle, from natural language prompt to a fully tested, production-ready backend.


Interius: Agentic AI Platform for Software Engineering

📌 Project Overview

Interius is a production-ready platform that enables non-developers to build highly capable REST APIs simply by describing their requirements in plain English. Moving beyond simple code generation, Interius implements a sophisticated Agentic Pipeline that automates the entire software development lifecycle: requirements gathering, architectural design, implementation, testing, security review, and live deployment.

Technologies Used: Python, FastAPI, React + Vite, SQLModel (PostgreSQL), LangChain, ChromaDB, Docker (Sandbox Execution), OpenRouter (Gemini, Qwen, DeepSeek).


🚀 Key Features & Architecture

1. The Multi-Agent Orchestration Pipeline

When a user submits a prompt, a FastAPI orchestrator coordinates a series of specialized AI agents through a real-time Server-Sent Events (SSE) stream:

  • Requirements Agent: Translates prompts into structured ProjectCharter artifacts.
  • Architecture Agent: Designs the data model and generates Mermaid ER diagrams dynamically.
  • Implementer Agent: Writes the executable Python codebase based on the architecture.
  • Reviewer Agent (with Loop): Performs security and logic audits, triggering “Perceive-Plan-Act” fix cycles (up to 5 passes) until a trust threshold (score ≥ 7/10) is met.
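The Reviewer Agent's fix cycle can be sketched roughly as follows. The agent objects, method names, and the `score` field are illustrative assumptions, not the actual Interius API; only the threshold (score ≥ 7/10) and the pass limit (5) come from the design above.

```python
# Illustrative sketch of the Reviewer Agent's "Perceive-Plan-Act" loop.
# Agent interfaces and field names are assumptions for illustration.
MAX_PASSES = 5
TRUST_THRESHOLD = 7  # out of 10

def review_loop(codebase: str, reviewer, implementer) -> str:
    """Run audit/fix cycles until the code clears the trust threshold."""
    for _ in range(MAX_PASSES):
        report = reviewer.audit(codebase)        # Perceive: security/logic audit
        if report["score"] >= TRUST_THRESHOLD:   # trusted enough to ship
            return codebase
        plan = reviewer.plan_fixes(report)       # Plan: targeted fix list
        codebase = implementer.apply(codebase, plan)  # Act: patch the code
    return codebase  # best effort after the final pass
```

Capping the loop at a fixed number of passes keeps a stubbornly failing review from consuming unbounded LLM calls.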

Sequence Diagram

2. Dockerized Sandbox & Automated Testing

Interius features a safe, zero-config Sandbox Execution Engine:

  • Auto-Deployment: Automatically builds and deploys the generated code into a dedicated Docker sandbox-runner sidecar.
  • LLM-Generated Tests: A TestGeneratorAgent creates a custom pytest suite for every project.
  • Self-Healing: If tests fail in the sandbox, the orchestrator parses the traceback and auto-patches the code (up to 3 retries).
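The self-healing retry logic can be sketched as a small loop over the sandbox test run. The `run_tests` callable and `patcher` object are hypothetical stand-ins for the sandbox runner and the LLM patching step; only the 3-retry limit comes from the description above.

```python
# Sketch of the self-healing loop (interfaces are illustrative assumptions).
def run_and_heal(run_tests, patcher, max_retries: int = 3) -> bool:
    """run_tests() -> (passed, traceback); patcher.patch(tb) attempts a fix.

    Runs the generated pytest suite in the sandbox; on failure, feeds the
    traceback back to the LLM patcher and retries, up to max_retries times.
    """
    for attempt in range(max_retries + 1):
        passed, traceback = run_tests()
        if passed:
            return True   # all tests green
        if attempt == max_retries:
            break         # retries exhausted
        patcher.patch(traceback)  # auto-patch based on the parsed failure
    return False
```

Parsing the traceback rather than the raw test output gives the patching LLM a focused error context instead of the full log.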

Sandbox Pipeline

3. Dynamic RAG-Enhanced Experience

  • Code Q&A: Generated files are indexed per thread into a persistent ChromaDB instance, allowing users to ask questions about the generated codebase.
  • Intent Routing: A dedicated InterfaceAgent classifies user input to determine whether to trigger a full build, a simple chat response, or a code-specific retrieval.
  • Dynamic API Tester: The frontend dynamically renders interactive endpoint cards based on the generated ProjectCharter — no hardcoded routes or components.
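The intent-routing step can be illustrated with a simple stand-in. In Interius the InterfaceAgent is an LLM classifier; the keyword heuristic below is only a sketch of the routing contract, and the hint lists and label names are assumptions.

```python
# Keyword-based stand-in for the InterfaceAgent's LLM intent classifier.
# Hint phrases and route labels are illustrative assumptions.
BUILD_HINTS = ("build", "create", "generate", "api for")
CODE_HINTS = ("why does", "explain", "what does", "where is")

def route_intent(message: str) -> str:
    """Classify user input into one of three pipeline routes."""
    text = message.lower()
    if any(hint in text for hint in BUILD_HINTS):
        return "build"    # trigger the full multi-agent build pipeline
    if any(hint in text for hint in CODE_HINTS):
        return "code_qa"  # retrieval over the per-thread ChromaDB index
    return "chat"         # plain conversational reply
```

Routing before generation matters: a full build is expensive (30-60 seconds of LLM calls), so misclassifying a casual question as a build request would be a costly mistake.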

RAG Pipeline


🏗️ System Architecture

Interius leverages a modern, distributed architecture to handle heavy LLM generation tasks asynchronously:

  • Frontend: React + Vite with Framer Motion for high-fidelity pipeline status visualizations.
  • Backend: FastAPI with SSE for streaming multi-agent state updates.
  • Persistence: PostgreSQL for artifacts and Supabase for real-time chat persistence.
  • Execution: Docker containers for isolated runtime environments.

System Architecture

Database Schema

The platform manages complex relationships between users, projects, generation runs, and versioned artifacts.

ER Diagram
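The relationships described above might look roughly like this as relational tables. All table and column names here are assumptions for illustration (Interius actually defines its models with SQLModel over PostgreSQL); SQLite is used only to keep the sketch self-contained.

```python
import sqlite3

# Illustrative schema only -- table/column names are assumptions,
# not the actual Interius models (SQLModel on PostgreSQL in production).
SCHEMA = """
CREATE TABLE users     (id INTEGER PRIMARY KEY, email TEXT UNIQUE);
CREATE TABLE projects  (id INTEGER PRIMARY KEY, name TEXT,
                        owner_id INTEGER REFERENCES users(id));
CREATE TABLE runs      (id INTEGER PRIMARY KEY, status TEXT,
                        project_id INTEGER REFERENCES projects(id));
CREATE TABLE artifacts (id INTEGER PRIMARY KEY, kind TEXT, version INTEGER,
                        body TEXT, run_id INTEGER REFERENCES runs(id));
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```

Versioning artifacts per generation run (rather than overwriting a single copy) is what allows the platform to keep every pass of the review and self-healing loops auditable.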


💡 What I Learned

  • Orchestrating Agentic Workflows: Gained deep experience in managing long-running, multi-step LLM processes and maintaining state across asynchronous agent transitions.
  • Safe Code Execution: Designed a secure sandbox architecture using Docker to run untrusted AI-generated code without compromising the host system.
  • Real-time UX for AI: Learned to use SSE to bridge the “latency gap” of LLM generation, providing users with immediate feedback and transparency during the 30-60 second build process.
  • Systematic Self-Healing: Implemented automated retry-and-patch logic, significantly improving the success rate of generated backend applications.