AI Code Review Prompt for Professional Standards

Instructions

Please review the following Playwright TypeScript test automation code and evaluate it against professional standards. Focus on identifying whether the code appears AI-generated and provide specific recommendations for improvement.

Review Criteria

1. Code Quality & Professionalism

  • Variable Naming: Are variables named meaningfully and domain-specifically? Flag generic names like element, data, result, item, temp, value1, and response (contrasted in the sketch after this list).
  • Method Structure: Are methods appropriately sized and focused? Identify overly long methods that try to do too much
  • TypeScript Usage: Is proper typing used instead of any? Are interfaces and types defined appropriately?
  • Comments & Documentation: Is the code self-documenting with meaningful JSDoc comments explaining business logic?
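For instance, a reviewer might contrast generic naming and loose typing with the domain-specific, typed style these criteria describe. The sketch below is illustrative only: the OrderSummary interface, the CheckoutPage page object, and the totals rule are hypothetical, not taken from any particular framework.

```typescript
// Weak pattern to flag: generic names, `any`, and comments that restate the code.
//   async function check(data: any) { const result = await data.get(); ... }

// Hypothetical page object contract; a real framework would supply this.
interface CheckoutPage {
  readDisplayedTotal(): Promise<number>;
}

interface OrderSummary {
  orderId: string;
  totalAmount: number;
  currency: string;
}

/**
 * Verifies that the confirmation page shows the total the customer
 * accepted at checkout (business rule: totals must match to the cent).
 */
async function verifyOrderConfirmationTotal(
  checkoutPage: CheckoutPage,
  expectedSummary: OrderSummary,
): Promise<void> {
  const displayedTotal = await checkoutPage.readDisplayedTotal();
  if (displayedTotal !== expectedSummary.totalAmount) {
    throw new Error(
      `Order ${expectedSummary.orderId}: displayed total ${displayedTotal} ` +
        `does not match expected ${expectedSummary.totalAmount} ${expectedSummary.currency}`,
    );
  }
}
```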

2. Framework Integration

  • Code Reuse: Does this code duplicate functionality that already exists in the framework? (See the reuse sketch after this list.)
  • Consistency: Does the code follow the same patterns, naming conventions, and architectural style as the existing codebase?
  • Utilities Usage: Are existing helper methods and utilities being leveraged appropriately?
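A quick way to apply these checks is to ask whether the submission re-implements flows the framework already provides. In the sketch below, the LoginPage page object and testUsers data module are assumed to already exist in the codebase; the import paths and names are hypothetical and only illustrate the reuse question.

```typescript
import { test } from '@playwright/test';
// Hypothetical existing framework pieces; paths and names are illustrative.
import { LoginPage } from '../pages/login-page';
import { testUsers } from '../data/test-users';

// Duplicated flow to flag in review: the login steps are retyped inline
// with hardcoded credentials instead of reusing the shared page object.
//
//   await page.goto('/login');
//   await page.fill('#username', 'qa_admin');
//   await page.fill('#password', 'Passw0rd!');
//   await page.click('#submit');

// Framework-consistent version: reuse the existing helper and test data.
test('admin can open the reports dashboard', async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.loginAs(testUsers.admin);
  // ...dashboard assertions continue here...
});
```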

3. Playwright Best Practices

  • Locator Strategy: Are locators robust and maintainable? Avoid fragile selectors such as positional CSS chains or auto-generated XPath (contrasted in the sketch after this list).
  • Wait Strategies: Are proper wait conditions used instead of hard waits?
  • Page Object Model: Is the code following POM patterns if applicable?
  • Test Data Management: How is test data handled? Are there hardcoded values that should be externalized?
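In practice, reviewers can contrast fragile selectors and hard waits with Playwright's role and test-id locators plus web-first assertions, as sketched below. The getByRole, getByTestId, expect, and waitForTimeout APIs are standard Playwright; the button label and test id are hypothetical.

```typescript
import { test, expect } from '@playwright/test';

test('submitted invoice shows a confirmed status', async ({ page }) => {
  await page.goto('/invoices'); // resolves against baseURL from the config

  // Fragile pattern to flag: positional CSS selector plus a hard wait.
  //   await page.click('div.container > div:nth-child(3) button');
  //   await page.waitForTimeout(5000);

  // Robust pattern: user-facing role locator, and a web-first assertion
  // that retries automatically until it passes or times out.
  await page.getByRole('button', { name: 'Submit invoice' }).click();
  await expect(page.getByTestId('invoice-status')).toHaveText('Submitted');
});
```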

4. Error Handling & Resilience

  • Error Messages: Are error messages meaningful, and do they provide context about what failed? (Illustrated in the sketch after this list.)
  • Exception Handling: Is error handling comprehensive and appropriate for the test context?
  • Logging: Is there adequate logging for debugging and monitoring?
  • Recovery Mechanisms: Are there appropriate fallback strategies?
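A lightweight way to satisfy these points is to attach business context to assertions and rethrown failures, so a CI log says which step of which flow broke rather than only which selector timed out. Playwright's custom expect message and timeout options are real APIs; the payment flow, test ids, and order id below are hypothetical.

```typescript
import { test, expect, type Page } from '@playwright/test';

/** Asserts the payment confirmation appears; failures name the order. */
async function expectPaymentConfirmed(page: Page, orderId: string): Promise<void> {
  await expect(
    page.getByTestId('payment-confirmation'),
    `Payment confirmation never appeared for order ${orderId}`,
  ).toBeVisible({ timeout: 15_000 });
}

test('card payment is confirmed', async ({ page }) => {
  const orderId = 'ORD-1042'; // hypothetical test order
  try {
    await page.getByRole('button', { name: 'Pay now' }).click();
    await expectPaymentConfirmed(page, orderId);
  } catch (error) {
    // Log context for debugging before failing the test.
    console.error(`Payment flow failed for ${orderId} at ${page.url()}`);
    throw error;
  }
});
```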

5. AI-Generated Code Indicators

Look for these common signs of AI-generated code:

  • Generic variable names and method names
  • Overly verbose comments explaining obvious code
  • Repetitive patterns that could be abstracted
  • Missing business context in method names
  • Basic error handling without domain-specific context
  • Hardcoded values without configuration (see the config sketch after this list)
  • Missing integration with existing framework utilities
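The hardcoded-values indicator is the easiest to demonstrate: environment URLs, credentials, and magic timeouts usually belong in playwright.config.ts or an external data source rather than in individual tests. A minimal sketch, assuming a standard Playwright config; the APP_BASE_URL environment variable is an illustrative name.

```typescript
// playwright.config.ts (excerpt): externalize environment-specific values
// instead of repeating them in every test file.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    baseURL: process.env.APP_BASE_URL ?? 'http://localhost:3000', // hypothetical env var
    actionTimeout: 10_000,
  },
});

// Tests can then navigate relative to baseURL:
//   await page.goto('/login');
// instead of hardcoding:
//   await page.goto('https://staging.example.com/login');
```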

Output Format

Overall Assessment

  • Professionalism Score: (1-10, where 10 is production-ready professional code)
  • AI-Generated Likelihood: (Low/Medium/High)

Specific Findings

✅ Strengths

  • List what the code does well
  • Highlight good practices followed

⚠️ Areas for Improvement

  • Specific issues found with examples
  • Suggestions for refactoring

🔧 Recommended Changes

  • Concrete code improvements with examples
  • Integration opportunities with existing framework
  • Better naming suggestions

Refactoring Suggestions

Provide 2-3 specific code snippets showing how to improve the most critical issues found.

Additional Questions to Consider

  1. Does this code solve a problem that's already solved in the existing framework?
  2. Could this functionality be made more generic to benefit other team members?
  3. Are the method and class names clear enough that a new team member would understand their purpose?
  4. Does the code handle edge cases and error scenarios appropriately for a production test suite?
  5. Is the code maintainable and would it be easy to debug when tests fail?

Please provide a comprehensive review that would help improve the code's professionalism and integration with the existing framework.
