GitHub Copilot now includes an integrated security review that blends LLM detections with deterministic tools (CodeQL, ESLint). When the Copilot coding agent writes code, it automatically runs CodeQL analysis, dependency checks, and secret scanning, then attempts to fix any issues it finds before opening the PR.
Why It Matters for AI-Assisted Development
This is the first major "AI that secures its own output" feature in a mainstream coding tool:
- Self-Review Loop (Oct 2025): The Copilot coding agent reviews its own changes before opening PRs — running CodeQL analysis, dependency checks, and secret scanning. If issues are found, it attempts to fix them automatically.
- Agentic Code Review: Combines LLM-based detections with tool calls out to CodeQL and ESLint, gathering full project context before flagging issues.
- No GHAS License Required: Security validation of agent-generated code works without a GitHub Advanced Security license.
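The scan-fix-rescan cycle described above can be sketched as a simple loop. This is an illustrative sketch only, not Copilot's actual implementation: all names (`Finding`, `Changeset`, `run_scanners`, `attempt_fix`, `self_review`) are hypothetical stand-ins for the agent's real internals.

```python
# Illustrative self-review loop (hypothetical, not Copilot's real code):
# run deterministic scanners over a changeset, attempt automatic fixes,
# and re-scan before "opening" the PR. Unfixed findings are surfaced.
from dataclasses import dataclass, field

@dataclass
class Finding:
    tool: str        # stand-in for "codeql", "dep-check", "secret-scan"
    message: str
    fixable: bool    # whether an automatic patch is assumed to succeed

@dataclass
class Changeset:
    findings: list = field(default_factory=list)

def run_scanners(change: Changeset) -> list:
    # Stand-in for invoking CodeQL, dependency checks, and secret scanning.
    return list(change.findings)

def attempt_fix(change: Changeset, finding: Finding) -> bool:
    # Stand-in for an LLM-generated patch; succeeds only for fixable issues.
    if finding.fixable:
        change.findings.remove(finding)
        return True
    return False

def self_review(change: Changeset, max_rounds: int = 3) -> list:
    """Scan, attempt fixes, and re-scan; return findings left unfixed."""
    for _ in range(max_rounds):
        findings = run_scanners(change)
        if not findings:
            break
        for f in findings:
            attempt_fix(change, f)
    return run_scanners(change)

change = Changeset(findings=[
    Finding("secret-scan", "hard-coded API key", fixable=True),
    Finding("codeql", "SQL injection via string concat", fixable=False),
])
remaining = self_review(change)
# Only the unfixable finding survives; a real agent would flag it in the PR.
```

The key design point the loop illustrates: deterministic scanners gate the output, while the LLM only proposes fixes, so anything the fixer cannot resolve still reaches a human reviewer rather than being silently dropped.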
Strengths
- Zero-config security scanning built into the coding agent workflow
- Combines LLM review with deterministic tools
- Seamless GitHub ecosystem integration
- Organization-wide coverage — works on PRs from users without Copilot licenses
Limitations
- LLM-only review (without CodeQL) has significant blind spots — research from NYU and Microsoft found LLM-based review frequently missed SQLi, XSS, and insecure deserialization
- Not a replacement for dedicated SAST tools
- Only works within GitHub ecosystem
- Still maturing — capabilities added October 2025
Why Assess
The self-review concept is compelling, but the security capabilities are still immature. Veracode's 2025 State of Software Security report found that AI-generated code frequently introduces vulnerabilities that traditional scanners miss, and early versions of Copilot's security review primarily caught low-severity style issues. Watch this space — the integration with CodeQL strengthens it significantly.