6 min read

Introducing Voice AI: Talk to Your Codebase

Plan features, ask architecture questions, and generate implementation tasks by voice, with AI that actually understands your code.


Code Summary Team


Voice AI · AI · Developer Tools · Product Update

You know how it goes. You have a feature idea. You open your editor, stare at the codebase, and try to remember where everything lives. Which service handles this? What's the pattern for that? Where did we put the middleware?

Then you open Slack to ask a teammate. Or dig through old PRs. Or just start writing code and hope you're following the right patterns.

What if you could just... ask your codebase?

Voice AI That Knows Your Code

Today we're launching Voice AI for Code Summary. It's exactly what it sounds like: you talk to an AI that has actually read your codebase. Not a generic AI that knows how code works in general, but one that knows how your code works specifically.

Here's what that looks like:

You: "I need to add rate limiting to our API. Where should that go?"

AI: "Your API middleware is in src/api/middleware/. You're using Express with a chain pattern. Auth runs first, then validation. I'd put rate limiting before auth so you're not wasting cycles authenticating requests you're going to reject anyway. Want me to generate a task for this?"

The AI isn't guessing. It read your documentation. It knows your file structure, your patterns, your architecture. When it references src/api/middleware/, it's because that's where your middleware actually lives.

The Planning Problem

Every developer knows the gap between "I want to build X" and "I know exactly how to build X in this codebase."

Bridging that gap usually means:

  1. Searching the codebase for similar patterns
  2. Reading through existing implementations
  3. Checking if there are shared utilities you should use
  4. Making sure you understand the architecture constraints
  5. Hoping you don't miss something obvious

This takes time. Sometimes it takes a lot of time, especially in larger codebases or when you're newer to the project.

Voice AI compresses this. You describe what you want to build. The AI (which has already done the reading) tells you what it found, what patterns exist, and what you need to consider.

How It Works

When you connect repositories to Code Summary, we generate comprehensive documentation: architecture overviews, API references, component docs, pattern guides. This happens automatically on every push.

Voice AI uses this documentation as context. When you ask a question, the AI has your entire codebase's structure and patterns available to reference. It's not searching your code in real-time. It's drawing on documentation specifically designed to capture how your codebase works.

This is why the answers are good. Generic AI assistants know how codebases work in general. Voice AI knows how your codebase works specifically.
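To make the mechanism concrete, here's a purely illustrative sketch of the "docs as context" pattern, not Code Summary's actual internals: the generated documentation for your selected repos is assembled into the prompt up front, so the model answers from your architecture rather than from generic training data. The `RepoDocs` shape and `buildPrompt` function here are assumptions for illustration.

```ts
// Illustrative only -- not Code Summary's internals. Shows the general
// "docs as context" pattern: generated documentation is injected into the
// prompt ahead of time instead of searching the code at query time.

interface RepoDocs {
  repo: string;         // e.g. "my-org/api"
  architecture: string; // generated architecture overview
  patterns: string;     // generated pattern guide
}

function buildPrompt(question: string, docs: RepoDocs[]): string {
  // Concatenate each repo's generated docs into one context section.
  const context = docs
    .map((d) => `## ${d.repo}\n${d.architecture}\n${d.patterns}`)
    .join("\n\n");

  return [
    "You are answering questions about these specific codebases.",
    "Reference real files and patterns from the documentation below.",
    context,
    `Question: ${question}`,
  ].join("\n\n");
}
```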

Multi-Repo Conversations

Real projects span multiple repositories. Your frontend talks to your backend. Your backend uses shared packages. Your infrastructure affects how services communicate.

Voice AI handles this. Select multiple repos for a conversation, and the AI has context from all of them. Ask about a feature that spans your React app and your Node API, and the AI understands both sides.

You: "How would I add real-time notifications? I want them to show in the dashboard when a deployment finishes."

AI: "Your dashboard uses the WebSocket hook in packages/ui/src/hooks/useSocket.ts. On the backend, you have an event emitter in services/api/src/events/. You could add a deployment.completed event there and subscribe to it from the dashboard. The webhook handler at api/webhooks/route.ts is where deployments currently get processed. That's where you'd emit the event."

One question, context from three different parts of your system.
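Sketched out, the wiring the AI describes might look roughly like this. The paths come from the illustrative conversation above; the event name, payload shape, and port are assumptions, not generated output.

```ts
// A sketch of the suggested wiring. Paths are from the example conversation;
// the event name, payload, and port are illustrative assumptions.
import { EventEmitter } from "node:events";
import { WebSocketServer, WebSocket } from "ws";

// services/api/src/events/ -- the shared emitter the AI referenced
export const events = new EventEmitter();

// api/webhooks/route.ts -- emit the new event where deployments get processed
export function handleDeploymentWebhook(payload: { id: string; status: string }) {
  if (payload.status === "succeeded") {
    events.emit("deployment.completed", { deploymentId: payload.id });
  }
}

// Bridge the event onto the socket the dashboard's useSocket hook listens to.
const wss = new WebSocketServer({ port: 8081 });
events.on("deployment.completed", (data) => {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(JSON.stringify({ type: "deployment.completed", ...data }));
    }
  }
});
```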

From Conversation to Code

Talking about features is useful. But at some point, you need to actually build them.

Voice AI generates implementation tasks: structured prompts you can hand off to coding agents like Cursor or Claude. The tasks include:

  • Clear objective based on your conversation
  • Context from your codebase (relevant files, existing patterns)
  • Specific files that need to be created or modified
  • Implementation notes based on your architecture

Instead of copying context manually or re-explaining your codebase to your coding assistant, you get a task that's ready to execute.

## Task: Add Rate Limiting to API Middleware

### Context
The Express middleware chain in `src/api/middleware/` processes
requests through auth, then validation, then handlers.

### Objective
Add request rate limiting (100 req/min per IP) using
express-rate-limit before the auth middleware. Use Redis
store for the production distributed environment.

### Files to Modify
- M src/api/middleware/index.ts (add rate limiter to chain)
- A src/api/middleware/rateLimit.ts (new middleware)
- M src/config/redis.ts (add rate limit store config)
- M .env.example (add RATE_LIMIT_* variables)

### Implementation Notes
- Follow existing middleware pattern (see authMiddleware.ts)
- Use the Redis client from src/lib/redis.ts
- Add bypass for health check endpoints

This is what your coding agent needs to do the work correctly. No hallucinating folder structures. No inventing patterns that don't exist in your codebase. Just accurate, context-aware implementation guidance.
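For instance, the new rateLimit.ts from that task might come out something like the sketch below. The packages (express-rate-limit, rate-limit-redis, node-redis) and config values are assumptions consistent with the task, not part of its output.

```ts
// src/api/middleware/rateLimit.ts -- a sketch assuming express-rate-limit
// with a Redis-backed store; the exact packages are an implementation choice.
import rateLimit from "express-rate-limit";
import { RedisStore } from "rate-limit-redis";
import { createClient } from "redis";

// In a real setup, reuse the shared client (src/lib/redis.ts per the task).
const client = createClient({ url: process.env.REDIS_URL });
await client.connect(); // ESM top-level await

export const apiRateLimiter = rateLimit({
  windowMs: 60 * 1000,   // 1-minute window
  limit: 100,            // 100 requests per IP per window
  standardHeaders: true, // send RateLimit-* response headers
  legacyHeaders: false,
  skip: (req) => req.path.startsWith("/health"), // bypass health checks
  store: new RedisStore({
    // Adapter for the node-redis v4 client, per the rate-limit-redis docs.
    sendCommand: (...args: string[]) => client.sendCommand(args),
  }),
});
```

In src/api/middleware/index.ts, the limiter would then be registered ahead of auth, matching the ordering the AI suggested earlier.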

Credit-Based Pricing

Voice AI uses credits from your Code Summary plan:

  • Free tier: 10 credits/month with a 2-minute voice trial
  • Pro ($29/mo): 250 credits/month, 50 minutes of Voice AI
  • Teams ($199/mo): 1,500 credits/month, 300 minutes of Voice AI

Voice conversations use 5 credits per minute. Documentation generation uses 1 credit per run.

Most developers use Voice AI for short planning sessions: 5-10 minutes to think through a feature before implementing. At 5 credits per minute, a Pro plan's 250 credits work out to roughly ten 5-minute planning sessions per month, and documentation runs cost just 1 credit each.

Getting Started

If you're already using Code Summary, Voice AI is available now. Go to the Voice AI section in your dashboard and start a conversation.

If you're new to Code Summary:

  1. Sign up free
  2. Connect your GitHub repositories
  3. Wait for documentation to generate (usually under 5 minutes)
  4. Start talking to your codebase

The free tier includes a 2-minute voice trial so you can see how it works with your actual code.

The Bigger Picture

Voice AI is part of our vision for how developers should work with AI. We think the workflow looks like this:

  1. Document: Your codebase is automatically documented, always up to date
  2. Discuss: You plan features with AI that understands your specific code
  3. Deploy: You generate tasks for coding agents to implement what you planned

Each step feeds into the next. Documentation makes conversations useful. Conversations make generated tasks accurate. Tasks make implementation faster.

This is Code Summary: the complete AI-powered developer workflow.

Get started free and see what it's like to actually talk to your code.