---
title: What a Code Audit Looks Like
url: https://varstatt.com/jurij/p/what-a-code-audit-looks-like
author: Jurij Tokarski
date: 2026-04-17
description: AI code audit and vibe code review structured as a 1-week engagement. Architecture, security, technical debt — prioritized by risk, not page count.
section: Blog (https://varstatt.com/jurij/archive)
tags: retainer-shapes (https://varstatt.com/jurij/c/retainer-shapes)
---

Vibe coding cleanup is becoming its own job category. Collins Dictionary named "vibe coding" word of the year in 2025. TechCrunch ran a piece on the rise of "AI babysitters" — developers hired not to write code, but to audit what the tools already wrote. Cursor, Lovable, v0, Bolt.new, Replit Agent ship features fast. They also ship silent problems no linter catches and no test suite covers, because no test suite was written.

One developer put it bluntly: "10x the productivity measured by lines of code written, but 1/100th the quality measured by pain in the ass to clean up." An AI code audit — call it vibe code review, call it cleanup — is the senior pair of eyes that should have been there during generation, applied after the fact.

## Yes, I Use the Same Tools You Do

The obvious objection: *"You're charging $997 to audit code I generated with Cursor, but you're using Cursor too — what makes your output different from mine?"*

The tools are the same. Cursor, Claude Code, Copilot, v0, Bolt — I use them daily. They make me 2-3x faster than I was without them. They don't make me a different kind of engineer. The thing they amplify isn't writing speed; it's whatever judgment you bring to the conversation.

I've been shipping commercial software since 2011. Fifteen years of debugging things at 2am that worked fine in dev, of inheriting codebases from previous developers who "got it working" and disappeared. None of that experience is replaced by AI tooling. AI accelerates the part of the job I was already good at — typing — and leaves the hard part untouched. The hard part is knowing where the bombs are buried.

## The First 80% Is Easy. The Last 20% Is Where Production Lives.

Software is shaped like an iceberg with the proportions flipped. The visible 80% — features that work on the happy path — is what AI tools generate fluently. Describe what you want, the model produces working code, you click through the flows, everything looks fine. That part has gotten cheap.

The remaining 20% is what keeps the product alive in production. It's a list of things that look small until they aren't:

- Race conditions that fire once a week when two users do something at the same instant
- Error handling that quietly swallows failures the AI didn't think to surface
- A query that's instant on a 200-row dev database and times out at 50,000 rows
- Auth that validates token format but doesn't check expiry, scope, or revocation
- A deploy pipeline that "works" until it doesn't, with no visibility into what changed
- Cache invalidation that's correct on every page except the one that matters
- A third-party webhook that arrives in the wrong order one time in five hundred
- A data migration that lost three rows six months ago and nobody noticed

None of these break the demo. All of them break the product. Most don't surface as crashes — they surface as customers churning quietly, support tickets that don't resolve, metrics drifting without an obvious cause. AI tools generate the first 80% beautifully and have no concept of the second 20% existing.
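Take the out-of-order webhook from the list above. The fix is unglamorous: track a per-resource sequence number and drop anything stale. A minimal sketch, assuming the provider includes a sequence number in each event (`resourceId`, `seq`, and `status` are illustrative names, not any specific provider's payload):

```typescript
type WebhookEvent = { resourceId: string; seq: number; status: string };

const lastSeen = new Map<string, number>(); // highest seq applied per resource
const state = new Map<string, string>();    // current status per resource

// Apply an event only if it's newer than everything we've already seen.
// Returns false when the event is stale or a duplicate and gets dropped.
function applyEvent(ev: WebhookEvent): boolean {
  const prev = lastSeen.get(ev.resourceId) ?? -1;
  if (ev.seq <= prev) return false; // the one-in-five-hundred case
  lastSeen.set(ev.resourceId, ev.seq);
  state.set(ev.resourceId, ev.status);
  return true;
}

// "paid" (seq 2) arrives before "pending" (seq 1): the stale event is dropped
applyEvent({ resourceId: "inv_1", seq: 2, status: "paid" });
applyEvent({ resourceId: "inv_1", seq: 1, status: "pending" });
// state for inv_1 stays "paid"
```

The naive handler — apply whatever arrives, in arrival order — passes every test you'd think to write, because tests deliver events in order.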

The audit is the second 20%. I'm not grading the code AI generated against AI's standard. I'm grading it against fifteen years of watching production fail in specific, repeatable ways.

## What the Audit Covers

A code audit service touches four areas, roughly in priority order:

**Security.** Auth flows, input validation, exposed secrets, API key handling. AI generates auth code that looks correct — and often is. The failures hide in the gaps: a missing `httpOnly` flag, a JWT verified without pinning the signing algorithm, an env variable committed because the `.gitignore` template didn't catch it. [Polish over security is a real cost](/jurij/p/when-polish-over-security-costs-real).
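The expiry gap is the one I see most. Decoding a JWT is not validating it, and generated code frequently stops at decoding. A sketch of just the `exp` check — this deliberately does *not* verify the signature, which is a separate, mandatory step your JWT library should do with a pinned algorithm:

```typescript
// Check only the exp claim of a JWT. Signature verification is NOT done here.
function isExpired(
  token: string,
  nowSeconds: number = Math.floor(Date.now() / 1000),
): boolean {
  const payloadB64 = token.split(".")[1];
  if (!payloadB64) return true; // malformed token: treat as invalid
  try {
    const payload = JSON.parse(
      Buffer.from(payloadB64, "base64url").toString("utf8"),
    );
    // Missing exp, or exp at/before now, means reject
    return typeof payload.exp !== "number" || payload.exp <= nowSeconds;
  } catch {
    return true; // unparseable payload: treat as invalid
  }
}
```

A token with valid format and a valid signature can still be hours past its expiry, and code that only checks format will wave it through.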

**Architecture.** Component structure, data flow, dependency management. AI-generated code produces coherent local decisions and incoherent global structure. State lives in three places. The same fetch call appears in four files. None of it breaks anything until someone needs to change it.
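The fix for the fetch-in-four-files problem is equally boring: one function owns the URL, the error handling, and the parsing, and every caller imports it. A sketch with hypothetical names (`fetchUser`, `/api/users/:id` — your resource will differ); the transport is injectable so the function can be exercised without a network:

```typescript
type User = { id: string; name: string };

// The single place the endpoint, error handling, and parsing live.
// fetchImpl defaults to the global fetch (Node 18+ / browsers) and can be
// swapped for a fake in tests.
async function fetchUser(
  id: string,
  fetchImpl: typeof fetch = fetch,
): Promise<User> {
  const res = await fetchImpl(`/api/users/${encodeURIComponent(id)}`);
  if (!res.ok) throw new Error(`fetchUser(${id}): HTTP ${res.status}`);
  return (await res.json()) as User;
}
```

When the endpoint changes or needs an auth header, it changes in one file instead of four — which is exactly the "until someone needs to change it" moment the paragraph above describes.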

**Performance.** Re-renders, slow queries, bundle size. A component re-rendering on every keystroke is invisible on a MacBook and noticeable on a phone on a slow connection. A query without an index works fine in development. These are the [bugs that were actually the prompts](/jurij/p/three-bugs-that-were-actually-my-prompts).
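For per-keystroke work, the usual first fix is a debounce: the expensive call runs once, shortly after the user stops typing, instead of on every key. A framework-free sketch (in React you'd pair this with memoization, but the principle is the same):

```typescript
// Wrap fn so it runs once, ms after the most recent call, not per call.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer); // each new call cancels the pending one
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Three rapid "keystrokes" collapse into one search call
let searches = 0;
const search = debounce((_query: string) => { searches++; }, 20);
search("a");
search("ab");
search("abc");
```

On a fast dev machine the un-debounced version feels fine, which is precisely why it ships.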

**Technical debt.** Dead code, inconsistent patterns, missing error handling. Every `catch (e) { console.log(e) }` is a failure that will look like a success. These accumulate quietly. [Silent failures that look like success](/jurij/p/production-bugs-that-never-threw-an-error) are the hardest to catch.
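Here's that pattern side by side with the version that surfaces the failure. `write` stands in for any side effect — a DB write, an API call:

```typescript
// The anti-pattern: the catch swallows the failure, the caller sees a
// resolved promise, and the data is silently gone.
async function saveQuietly(write: () => Promise<void>): Promise<void> {
  try {
    await write();
  } catch (e) {
    console.log(e); // logged somewhere nobody is looking; nothing else happens
  }
}

// The fix: log with context, then re-throw so the caller can retry,
// alert, or fail the request.
async function saveLoudly(write: () => Promise<void>): Promise<void> {
  try {
    await write();
  } catch (e) {
    console.error("save failed:", e);
    throw e;
  }
}
```

The first version passes code review on vibes — it "handles" the error. It just handles it by pretending it didn't happen.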

The standard throughout: does this code handle failure gracefully? Can a new developer understand it in a week? Will it break when traffic doubles?

## The Deliverable

Not a 50-page document nobody reads. One report, structured by severity. Each issue: the file and line, what's wrong, how to fix it. Organized by risk — security first, then anything that breaks under real conditions, then debt that slows the team down. You know exactly where to start.

This is the [quality gate](/principles/delivery/quality-gates) applied to inherited code, and the [scout rule](/principles/delivery/scout-rule) for what to do next. The audit is current state. What you do with it is the cleanup.

## Who This Is For

Founders who built with Cursor, Lovable, v0, Bolt.new, or Replit Agent and need a senior code review for hire. Startups preparing for fundraising who need an independent technical assessment. Teams inheriting a contractor's codebase. Non-technical founders who need code health translated into business risk.

One week, $997, fixed deliverable. If you need this, [submit a project brief](/brief).
