# Features
## Authentication

DocGen.AI provides a secure and seamless authentication system for registered users.
## How It Works
- DocGen.AI connects to your codebase or documentation context.
- Users can chat with an LLM to generate unit tests, inline documentation, or ask questions about code behavior.
- You can either connect a GitHub repo or chat with a model directly, with no codebase attached.
- The system streams results in real time and provides copyable output.
- If no codebase is loaded, DocGen.AI defaults to chat-only mode for exploration and experimentation.
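The real-time streaming described above can be sketched as a small producer/consumer pair. This is an illustrative model only, not DocGen.AI's actual implementation: a generator stands in for the token stream coming from the LLM backend, and the consumer accumulates it into the final copyable output.

```python
from typing import Iterator

def stream_response(tokens: list[str]) -> Iterator[str]:
    """Yield tokens one at a time, mimicking a real-time LLM stream."""
    for token in tokens:
        yield token

def collect_stream(token_stream: Iterator[str]) -> str:
    """Accumulate streamed tokens into the final copyable output."""
    parts = []
    for token in token_stream:
        # In a real UI, each chunk would be rendered to the screen here.
        parts.append(token)
    return "".join(parts)
```

For example, `collect_stream(stream_response(["def ", "add", "():"]))` reassembles the streamed chunks into `"def add():"`.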
## Guest User Functionality
DocGen.AI offers guest sessions for quick testing or onboarding:
- Guests can explore model features without signing up
- All guest data (code snippets, chats, preferences) is temporary
- Guests cannot modify their profiles, and guest sessions expire after one hour
- Session data is automatically wiped after logout or timeout
## Main Interface
Once logged in, users are welcomed into a responsive, modern workspace built for seamless interaction and model experimentation.
- Toggle between sleek dark and light themes for a personalized experience
- Intuitive dashboards to browse installed models and codebases
- Real-time charts and usage metrics for insight into model/codebase activity
- Full chat management: rename, edit, and delete conversations effortlessly
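The chat-management operations above (rename, edit, delete) amount to simple CRUD over a conversation store. A minimal illustrative store, with hypothetical names and no claim to match DocGen.AI's internals:

```python
class ChatManager:
    """Minimal in-memory conversation store: create, rename, delete, list."""

    def __init__(self):
        self._chats: dict[int, dict] = {}
        self._next_id = 1

    def create(self, title: str) -> int:
        chat_id = self._next_id
        self._next_id += 1
        self._chats[chat_id] = {"title": title, "messages": []}
        return chat_id

    def rename(self, chat_id: int, title: str) -> None:
        self._chats[chat_id]["title"] = title

    def delete(self, chat_id: int) -> None:
        del self._chats[chat_id]

    def titles(self) -> list[str]:
        return [c["title"] for c in self._chats.values()]
```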
## AI-Generated Code + Docs
The code generation panel allows you to request:
- Unit tests for specific functions or classes
- Inline comments for undocumented logic
- Refactored or simplified code
- JSDoc or docstring templates
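As an illustration of the docstring-template feature, the skeleton an LLM would fill in can be derived mechanically from a function's signature. This sketch uses Python's standard `inspect` module and a Google-style layout; it is an assumption about the output shape, not DocGen.AI's actual generator.

```python
import inspect

def docstring_template(func) -> str:
    """Build a Google-style docstring skeleton from a function signature."""
    sig = inspect.signature(func)
    lines = [f'"""TODO: describe {func.__name__}.', ""]
    if sig.parameters:
        lines.append("Args:")
        for name in sig.parameters:
            lines.append(f"    {name}: TODO")
    lines += ["", "Returns:", "    TODO", '"""']
    return "\n".join(lines)
```

An LLM prompt can then ask the model to replace each `TODO` with prose grounded in the function body.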
## Codebase + Chunking Interface
Users can connect external repositories or upload local projects for embedding. DocGen.AI then:
- Parses files and chunks content intelligently (e.g. by function)
- Embeds code for retrieval-augmented generation (RAG)
- Displays loading state and import logs
- Supports reprocessing individual files for updates
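"Chunking by function" can be sketched with Python's standard `ast` module: parse the file, then cut out one chunk of source text per top-level definition. This is one plausible strategy, not necessarily the one DocGen.AI uses.

```python
import ast

def chunk_by_function(source: str) -> list[str]:
    """Split Python source into one chunk per top-level function or class."""
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # get_source_segment recovers the exact original text of the node.
            chunks.append(ast.get_source_segment(source, node))
    return chunks
```

Each chunk is then a natural unit to embed for retrieval-augmented generation, since it carries a complete, self-contained piece of logic.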
## Privacy + Local-First Deployment
DocGen.AI is designed for privacy-first use cases:
- All processing can be done locally or in a containerized environment
- Supports Ollama and other LLM backends for offline LLM inference
- Codebases never leave your machine in local mode
- Ideal for internal company projects and enterprise security needs

Together, these features ensure that trust is not assumed; it's provable.
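Backend selection between local and remote modes can be sketched as a small resolver. `OLLAMA_HOST` and port 11434 follow Ollama's conventions; `DOCGEN_REMOTE_URL` is a hypothetical setting invented for this example.

```python
def resolve_backend_url(env: dict[str, str]) -> str:
    """Pick an LLM backend endpoint, preferring a local Ollama instance."""
    if "OLLAMA_HOST" in env:
        # Local mode: inference stays on this machine; code never leaves it.
        return env["OLLAMA_HOST"]
    # Fall back to a remote endpoint, or Ollama's default local port.
    return env.get("DOCGEN_REMOTE_URL", "http://localhost:11434")
```

Taking the environment as an explicit dict (rather than reading `os.environ` directly) keeps the resolver deterministic and testable.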
## Models
DocGen.AI supports multiple generative models via a modular architecture. Models can be used for chat, test generation, documentation, and search, with flexible backend integration via Ollama, OpenAI, or other compatible APIs.
Key features include:
- Users can view a searchable list of all available and installed models
- Usage statistics and last-seen data help track model engagement
- Tags indicate whether a model is installed locally or just available to pull
- Status indicators show loading, pulling, or ready state in real time
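The model list above (installed vs. available tags, live status, searchability) can be modeled with a small registry. The names and status values here are illustrative assumptions, not DocGen.AI's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    """One row in the model list: installed flag plus a live status field."""
    name: str
    installed: bool = False
    status: str = "available"  # "available", "pulling", "loading", or "ready"
    usage_count: int = 0

class ModelRegistry:
    def __init__(self):
        self._models: dict[str, ModelEntry] = {}

    def add(self, name: str, installed: bool = False) -> None:
        status = "ready" if installed else "available"
        self._models[name] = ModelEntry(name, installed, status)

    def search(self, query: str) -> list[str]:
        """Case-insensitive search over all available and installed models."""
        return sorted(n for n in self._models if query.lower() in n.lower())
```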
### Admin-Only Installation
To maintain a secure and controlled environment:
- Only admin users can pull or install models to the local environment
- A model pull dialog allows admins to select specific versions or tags
- Users without admin privileges will see a disabled or hidden pull button
- Install requests are tracked and reflected in the UI for transparency
This ensures that the system maintains model consistency and avoids unintentional overload of the host machine.
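The admin gate and install tracking can be sketched as a single server-side check. This is a minimal illustration with hypothetical names; the real system would also hide or disable the pull button in the UI for non-admins, as described above.

```python
class ModelPuller:
    """Gate model pulls behind an admin check, recording requests for the UI."""

    def __init__(self):
        self.install_log: list[str] = []  # surfaced in the UI for transparency

    def pull(self, model_tag: str, is_admin: bool) -> bool:
        """Request a model pull (e.g. a specific version or tag).

        Returns True if the pull was accepted, False if permission was denied.
        """
        if not is_admin:
            # Non-admins see a disabled button; the server refuses regardless.
            return False
        self.install_log.append(model_tag)
        return True
```

Enforcing the check server-side (not only in the UI) is what actually prevents non-admin installs and keeps the host machine from unintentional overload.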