Who is Reviewing the Code AI is Writing?

We don’t talk enough about the "hidden" phase of the Software Development Life Cycle (SDLC). We talk about writing code (the creative part) and shipping code (the dopamine hit). But the reality is that developers spend significantly more time reading, debugging, and reviewing code than they do writing it.

Generative AI like GitHub Copilot has solved the "Blank Canvas" problem—it helps you write code fast. But speed often comes at the cost of precision. Even the best LLMs hallucinate, introduce subtle logic errors, or ignore edge cases.

This creates a new bottleneck: Who validates the AI?

If you are relying on your human peers to catch every AI-generated race condition in a Pull Request review, you are slowing down the team. The solution is an Agentic Workflow: pairing a "Creative Coder" agent (Copilot) with an "Analytical Reviewer" agent (CodeRabbit CLI).

Here is how to set up a closed-loop AI development cycle that catches bugs locally before you ever git commit.


The Architecture: Builder vs. Reviewer

To understand why you need two tools, you have to understand their roles:

  1. The Builder (GitHub Copilot): Integrated into the IDE. It is optimized for speed, context prediction, and syntax generation. It is the "Creative Partner."

  2. The Reviewer (CodeRabbit CLI): Runs in the terminal. It is optimized for analysis, security scanning, and logic verification. It looks for what’s missing (validation, type safety, error handling).

When combined, you stop shipping "Draft 1" code to production. (GitHub Copilot also offers a code review agent, but this article focuses on CodeRabbit.)

The Workflow: A Live Example

Let’s look at a real-world scenario involving a Python-based Flappy Bird application. The goal is to add a feature that tracks player performance (blocks passed and accuracy).

Step 1: The Build (GitHub Copilot)

Inside VS Code, we prompt Copilot Chat to generate a new class.

Prompt:

"Create a new file named player_insights.py where write a logic so that we can provide insights to the player about their performance on how many blocks he/she passed with how much accuracy."

Copilot Output:
It generates a functional PlayerInsights class. It has methods for blocks_progress and accuracy. To the naked eye, it looks fine. It runs.

```python
class PlayerInsights:
    def __init__(self, blocks_passed, total_blocks, correct_actions, total_actions):
        self.blocks_passed = blocks_passed
        self.total_blocks = total_blocks
        self.correct_actions = correct_actions
        self.total_actions = total_actions
```
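
The article only shows the constructor, so here is a hedged sketch of what the two mentioned methods plausibly look like (hypothetical, not Copilot's verbatim output). Note that neither guards against a zero denominator:

```python
class PlayerInsights:
    # ... __init__ as generated above ...

    def blocks_progress(self) -> float:
        # Naive: assumes total_blocks is never zero
        return self.blocks_passed / self.total_blocks * 100

    def accuracy(self) -> float:
        # Naive: raises ZeroDivisionError when total_actions == 0
        return self.correct_actions / self.total_actions * 100
```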

Step 2: The Analysis (CodeRabbit CLI)

Before committing this, we run the CodeRabbit CLI locally. This tool analyzes uncommitted changes against high-level coding standards and logic patterns.

Command:

```bash
coderabbit --prompt-only
```

Note: The --prompt-only flag is key here. It asks CodeRabbit to generate a critique without automatically applying fixes, leaving you in control.

The Catch:
CodeRabbit instantly flags issues that Copilot missed. It’s not just looking for syntax errors (a linter could do that); it’s looking for logic and robustness.

CodeRabbit Feedback:

  1. Missing Type Hints: The class lacks type annotations, making it harder to maintain.

  2. Input Validation (Critical): The __init__ method accepts any values. CodeRabbit points out that total_blocks or total_actions could be negative or zero, which would cause invalid statistics or a ZeroDivisionError later in the math (see the repro after this list).

  3. Documentation: No docstrings.
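
To make the critical finding concrete, here is a minimal repro of that failure mode, assuming the hypothetical accuracy method sketched earlier:

```python
# A brand-new player who has taken no actions yet
insights = PlayerInsights(blocks_passed=0, total_blocks=10,
                          correct_actions=0, total_actions=0)

insights.accuracy()  # ZeroDivisionError: division by zero
```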

Step 3: The Closed Loop (AI-to-AI Prompting)

This is the "Agentic" part of the workflow. Instead of manually fixing these issues, we feed the Analytical AI's feedback directly back into the Creative AI.

We copy CodeRabbit’s critique and paste it into Copilot Chat.

Prompt to Copilot:

[Paste CodeRabbit feedback]: "The __init__ method does not validate inputs... update the constructor to validate that blocks_passed is a non-negative integer... ensure the get_insights return type is annotated..."

Step 4: The Result

Copilot rewrites the code based on the strict constraints provided by CodeRabbit.

The Optimized Code:

```python
from typing import Dict, Union

class PlayerInsights:
    """
    Provides insights into a player's performance.
    """
    def __init__(self, blocks_passed: int, total_blocks: int,
                 correct_actions: int, total_actions: int) -> None:
        # Validation logic added by the AI loop
        if total_blocks < 0 or total_actions < 0:
            raise ValueError("Totals must be non-negative")
        if blocks_passed < 0 or correct_actions < 0:
            raise ValueError("Counts must be non-negative")

        self.blocks_passed = blocks_passed
        self.total_blocks = total_blocks
        self.correct_actions = correct_actions
        self.total_actions = total_actions

    def get_insights(self) -> Dict[str, Union[int, float]]:
        # Guards against the division-by-zero case flagged in review
        accuracy = (self.correct_actions / self.total_actions * 100
                    if self.total_actions else 0.0)
        return {"blocks_passed": self.blocks_passed, "accuracy": accuracy}
```

We run coderabbit --prompt-only one last time.
Result: Review completed. No issues found.
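
As a quick sanity check, here is a hypothetical usage sketch of the hardened class, assuming the constructor above:

```python
# Valid input constructs normally
PlayerInsights(blocks_passed=8, total_blocks=10,
               correct_actions=40, total_actions=50)

# Invalid input now fails fast instead of corrupting stats later
try:
    PlayerInsights(blocks_passed=5, total_blocks=-1,
                   correct_actions=3, total_actions=10)
except ValueError as err:
    print(err)  # Totals must be non-negative
```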

Why This Matters

You might ask, "Why not just ask Copilot to write secure code in the first place?"

Because prompting is hard, and humans forget edge cases. If you ask Copilot for a feature, it prioritizes the feature. If you use CodeRabbit, it acts as a specialized adversarial agent dedicated solely to finding faults.

By using this pairing, you achieve three things:

  1. Pre-PR Hygiene: You aren't wasting your senior engineer's time on code reviews pointing out missing error handling. The AI caught it locally.

  2. Contextual Awareness: Unlike a static analyzer, these tools understand the intent of the code. CodeRabbit didn't just say "variable unused"; it explained why the logic was unsafe (potential division by zero).

  3. Platform Independence: CodeRabbit runs in the CLI. It works with VS Code, JetBrains, or Vim, and it integrates into CI/CD pipelines to block bad merges automatically (a local pre-commit variant is sketched after this list).
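
If you want to enforce the loop locally, one option is a small pre-commit hook. This is a minimal sketch, assuming only the coderabbit --prompt-only command shown earlier and the "No issues found" success message from Step 4; the CLI's real output format may differ, so adjust the check accordingly:

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: block commits until CodeRabbit is clean.

Assumptions: the coderabbit CLI is on PATH, and a clean run prints
"No issues found" as shown earlier in this article.
"""
import subprocess
import sys

result = subprocess.run(
    ["coderabbit", "--prompt-only"],  # the same command used above
    capture_output=True,
    text=True,
)
print(result.stdout)

# Heuristic: anything other than a clean report blocks the commit
if "No issues found" not in result.stdout:
    print("CodeRabbit flagged issues; fix them before committing.")
    sys.exit(1)
```

Save it as .git/hooks/pre-commit and make it executable, and every commit gets the same review pass you ran by hand above.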

The future of development isn't just "AI writes the code." It is "AI builds, AI validates, human architects."

| Feature | GitHub Copilot | CodeRabbit CLI | The Combo |
| --- | --- | --- | --- |
| Primary Role | Generation (Creative) | Review (Analytical) | End-to-End Dev |
| Context | In-IDE, File-level | Local changes & Repository | Full Context |
| Security | Basic patterns | Vulnerability & Logic scanning | Secure by Design |
| Workflow | Write -> Debug | Review -> Fix | Loop -> Ship |

Stop treating AI tools as isolated chat bots. Chain them together. Let the Builder build, let the Reviewer review, and ship production-ready code faster.
