Abstract
In AI-assisted development, giving the AI clear feedback is crucial. The most familiar feedback tool among developers is the linter, which helps teams maintain coding standards systematically. Linting works just as well for AI: the AI reads the lint errors and adjusts its code until the rules are satisfied.
Meanwhile, developers build various forms of user-facing feedback into their services, such as form validation and status codes. When this feedback is handled elegantly, the user experience improves significantly.
I've noticed a fascinating symmetry between feedback for users and feedback among developers. Just as we've leveraged developer-to-developer feedback tools for AI development, could we also utilize user-facing feedback mechanisms for AI development?
The Parallel Worlds of Feedback
Developer Feedback: Linting
Linting represents a systematic approach to maintaining code quality. It's a developer's way of saying "this is how we write code here." When AI encounters lint errors, it understands exactly what needs to be fixed - the feedback is clear, actionable, and systematic.
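To make the shape of that feedback concrete, here is a small TypeScript snippet together with the kind of message ESLint's standard no-unused-vars rule would produce for it (shown as a comment):

```typescript
// With ESLint's no-unused-vars rule enabled, this declaration:
const subtotal = 42;

// ...produces a message like:
//   'subtotal' is assigned a value but never used. (no-unused-vars)
// The feedback names the symbol, the problem, and (implicitly) the fix -
// precise enough for an AI agent to act on without guessing.
```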
User Feedback: Validation
Form validation, error messages, and status codes serve a similar purpose for users. They guide users toward successful interactions with the system. A well-designed validation system tells users exactly what went wrong and how to fix it.
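As a minimal sketch (the result shape and messages are my own illustration, not from any particular library), validation feedback can be made just as structured and actionable as a lint message:

```typescript
// Validation as "linting for users": the result names the field,
// the problem, and the fix, just like a lint message does.
type ValidationResult =
  | { ok: true }
  | { ok: false; field: string; message: string };

function validateEmail(value: string): ValidationResult {
  if (value.trim() === '') {
    return { ok: false, field: 'email', message: 'Email is required.' };
  }
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value)) {
    return {
      ok: false,
      field: 'email',
      message: 'Enter an email like name@example.com.',
    };
  }
  return { ok: true };
}
```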
The Opportunity in Symmetry
Here's where it gets interesting. Both linting and validation share core characteristics:
- Clear expectations
- Systematic feedback
- Actionable error messages
- Predictable behavior
If AI can learn from linting to write better code, why can't it learn from user validation to test applications more effectively?
Method: Playwright MCP in Action
I use Playwright MCP, which is easy to install; the official Playwright MCP repository provides setup instructions.
Here, I'm using Claude Code. Simply tell Claude Code to test your locally running web application:
1. test my application using playwright mcp
2. my application is running on localhost:8080
With these simple prompts, Claude Code will use Playwright MCP to test your application.
What I expect from this is simple: finding bugs that I couldn't discover through my own debugging efforts.
The Lint-Like Nature of User Testing
How does AI determine what constitutes a bug? This is where I see the parallel with linting.
Expected Errors (Like Lint Warnings)
- Entering a phone number in an email field
- Sending a string to an API that only accepts numbers
These are errors developers anticipate and handle gracefully. They're like lint warnings - expected deviations from the norm.
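A minimal sketch of that graceful handling, assuming a hypothetical Express endpoint that accepts only numeric quantities (the route and response shape are illustrative, not from the original post):

```typescript
import express from 'express';

// Handling an expected error gracefully: a string sent to a
// numbers-only endpoint gets a structured 400, not a 500 crash.
const app = express();
app.use(express.json());

app.post('/api/cart/quantity', (req, res) => {
  const quantity = Number(req.body.quantity);
  if (!Number.isInteger(quantity) || quantity < 1) {
    // Expected deviation: answer with an actionable message,
    // the runtime equivalent of a lint warning.
    res.status(400).json({
      error: 'quantity must be a positive integer',
      received: req.body.quantity,
    });
    return;
  }
  res.json({ ok: true, quantity });
});

app.listen(8080);
```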
Unexpected Errors (Like Lint Failures)
- HTTP status codes in the 5xx range
- Errors caught by React Error Boundaries during page transitions
These are often errors developers didn't anticipate. They represent genuine bugs.
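For reference, here is a minimal Error Boundary sketch in TypeScript. The fallback markup and the data-testid are my own assumptions, chosen so the failure state is easy for an automated tester to detect (it is used again in the Playwright sketch below):

```tsx
import { Component, type ReactNode } from 'react';

type Props = { children: ReactNode };
type State = { hasError: boolean };

// Renders a recognizable fallback when a child component throws,
// turning an unexpected crash into a detectable signal.
class AppErrorBoundary extends Component<Props, State> {
  state: State = { hasError: false };

  static getDerivedStateFromError(): State {
    return { hasError: true };
  }

  render() {
    if (this.state.hasError) {
      // The data-testid is an assumed convention for automated checks.
      return <div data-testid="error-fallback">Something went wrong.</div>;
    }
    return this.props.children;
  }
}
```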
The crucial insight is that Playwright testing can systematically distinguish between expected and unexpected errors. Doesn't this mirror the linting culture in development?
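Here is a sketch of how that distinction might look as a plain Playwright test. The URL, labels, messages, and data-testid are all assumptions about a hypothetical signup form, not part of Playwright MCP itself:

```typescript
import { test, expect } from '@playwright/test';

test('signup form rejects bad input without breaking', async ({ page }) => {
  const serverErrors: string[] = [];

  // Unexpected: any 5xx response is treated like a lint failure.
  page.on('response', (response) => {
    if (response.status() >= 500) {
      serverErrors.push(`${response.status()} ${response.url()}`);
    }
  });

  await page.goto('http://localhost:8080/signup');

  // Expected: a phone number in the email field should produce a
  // validation message (a "lint warning"), not a crash.
  await page.getByLabel('Email').fill('555-0123');
  await page.getByRole('button', { name: 'Sign up' }).click();
  await expect(
    page.getByText('Enter an email like name@example.com.')
  ).toBeVisible();

  // Unexpected: the Error Boundary fallback should never appear.
  await expect(page.getByTestId('error-fallback')).toHaveCount(0);
  expect(serverErrors).toEqual([]);
});
```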
The Bidirectional Benefit
This symmetry creates a virtuous cycle:
- Better UX → Better AI Testing: When user experience is improved with clear validation and error handling, AI can test more effectively
- AI-Friendly Design → Better UX: When we design systems to be AI-testable, we often create better user experiences as a byproduct
This parallel suggests several best practices:
- Treat validation as seriously as linting: Just as we enforce coding standards, we should enforce validation standards
- Design for testability: Systems that are easy for AI to test are often easier for users to understand
Summary
The symmetry between developer linting and user validation offers a powerful lens for improving AI development. By recognizing that user-facing feedback mechanisms can serve as "lints" for AI testing, we can create systems that are both more user-friendly and more AI-testable.
Just as linting has become an essential part of modern development workflows, treating user validation as a first-class citizen in AI testing could revolutionize how we build and test applications. The future of development might not just be about making AI understand our code - it's about making our entire systems speak a language that both humans and AI can understand fluently.