
Local AI Review and Remote Human Review

In the era of AI-assisted development, code review is one of the most important yet tiring tasks. In this post, I introduce an approach that runs a local AI code review before remote human review.

Abstract

In the era of AI-assisted development, code review remains one of the most crucial yet tedious tasks: human review is tiring and time-consuming. Several AI tools can review code for us, such as GitHub Copilot, Claude Code Action, and CodeRabbit, and ideally we would also like them to fix the code automatically.

However, these tools are not perfect, so we usually need several iterations of fixing and re-reviewing. In this post, I introduce an approach that uses a local AI for code review before sending the code for remote human review.

Project

I prepared a sample project. The important files are:

  • .claude/commands/ai-review.md
  • .claude/commands/ai-review-fix.md
  • REVIEW_GUIDE.md
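The layout, sketched below (only the three files above come from the sample project; the tree is my own rendering), follows Claude Code's convention that markdown files under .claude/commands/ become slash commands:

```
.
├── .claude/
│   └── commands/
│       ├── ai-review.md       # defines the /ai-review slash command
│       └── ai-review-fix.md   # defines the /ai-review-fix slash command
└── REVIEW_GUIDE.md            # the team's review guidelines
```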

Introduction

Several AI tools review code as part of CI/CD, such as GitHub Copilot, Claude Code Action, and CodeRabbit. These tools review our code and comment on the pull request automatically.

We select the comments we want to fix. Then what do we do?

If you use Cursor, do you click "Fix in Cursor"? If you use Claude Code Action, do you comment "@claude fix it"?

I wanted to do this more efficiently, so I developed the following approach:

  1. Use local AI (Claude Code) to review the code automatically.
  2. Review the results and select which local AI comments to address.
  3. Use local AI to fix the code automatically (see the sketch after this list).
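Inside the Claude Code terminal, the whole loop looks roughly like this (a sketch; the two commands are defined in the Methods section):

```
/ai-review       # step 1: generate review comments as a markdown file
                 # step 2: open the generated file; delete or annotate suggestions
/ai-review-fix   # step 3: apply the remaining suggestions automatically
```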

Methods

We use two custom Claude Code commands:

  • .claude/commands/ai-review.md
  • .claude/commands/ai-review-fix.md

In `ai-review.md`, the prompt is written as follows:

```
Extract the diff between the current branch and `origin/develop`, then review the changes under the rules below.
Spawn a **sub-agent** for each review item so they run in parallel; the **orchestrator** must gather the results and write the final file.

Code-review block format:

 Suggestion
 $FILE_NAME
 <short summary of the issue>
 <current code> => <proposed code>

Add one block per suggestion, stacking them.
```
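For illustration, one generated suggestion block might look like this (a hypothetical example; the file name and code are mine, only the block format comes from the prompt):

```
Suggestion
src/api/user.ts
The fetch response is not checked before parsing, so a failed request dies with a confusing JSON error.
const user = await res.json(); => if (!res.ok) throw new Error(`HTTP ${res.status}`); const user = await res.json();
```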

In `ai-review-fix.md`, the prompt is written as follows:

```
Using the review results in `/tmp/ai-reviews/$BRANCH_$TIMESTAMP.md`, apply the required code changes.

- If there are multiple review files for the same branch, use only the one with the most recent timestamp.
- When the review file contains User comments, respect those comments while making the fixes.
```
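The "most recent timestamp" rule is resolved by the model from the prompt text, but its logic amounts to something like this shell one-liner (a sketch; I assume `$BRANCH` holds the branch name used in the file names):

```
ls -t /tmp/ai-reviews/${BRANCH}_*.md | head -n 1   # newest review file first
```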

These commands can be run from the Claude Code terminal:

```
/ai-review
```

This command produces the review results in `/tmp/ai-reviews/$BRANCH_$TIMESTAMP.md`.

I can then review the results and select which comments to address. This is the most important part: production code carries its own rules, guidelines, and history.

For example, I can choose to leave alone:

  • technical-debt code that I want to keep as-is for now
  • code that doesn't follow REVIEW_GUIDE.md but will be fixed in a future batch update
  • code that must remain as-is due to external library constraints

Such context is often not documented in the code, so an AI cannot take it into account.
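Concretely, I annotate the review file before running the fix command. The annotation format below is my own hypothetical convention, layered on the block format shown earlier; the ai-review-fix prompt only requires that User comments be respected:

```
Suggestion
src/legacy/payment.ts
Replace the deprecated callback API with the promise-based one.
doPay(cb) => await doPayAsync()

User comment: skip this one; the promise API is blocked by an external library constraint.
```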

After reviewing the results, I can run the following command to fix the code automatically.

```
/ai-review-fix
```

The command applies the selected fixes automatically. This is particularly useful for routine or less critical reviews.

Conclusion

In this post, I introduced an approach that uses local AI for code review before remote human review. I wanted to iterate quickly on the AI review-and-fix cycle, and running the whole loop locally worked well for me.

However, I wonder: if a coding agent could write code perfectly according to the code review guide, would a reviewing agent still be needed?