AI Is Writing Your Laravel Code. Who Is Checking Its Security?
Laravel's AI SDK, Boost, and tools like Cursor and Claude Code are changing how we build applications. But over 40% of AI-generated code contains security flaws. Here is how to ship faster without opening the door to attackers.
The way we write Laravel applications is changing. Laravel Boost gives AI agents deep access to your application structure. The Laravel AI SDK lets you build AI-powered features with first-party support. Tools like Claude Code, Cursor, and GitHub Copilot are writing more of our code than ever before.
This is genuinely useful. AI agents that understand your routes, schema, and conventions can scaffold features in minutes that would take hours by hand. But there is a security cost that most teams are not accounting for.
Over 40% of AI-generated code contains security vulnerabilities, according to research from Endor Labs. That number holds even with the latest generation of large language models. The code works. It passes tests. It just happens to be insecure in ways that are easy to miss during code review.
What Laravel's AI tooling actually does
Before talking about risks, it is worth understanding what is now available. Laravel's AI ecosystem has matured rapidly in early 2026.
Laravel Boost
Boost is an MCP server that bridges AI coding agents and your Laravel application. Install it with composer require laravel/boost --dev and it gives your AI agent access to 15+ tools:
- Database schema inspection — the agent can query your table structures and relationships
- Route listing — every registered route with its middleware, controller, and parameters
- Configuration and environment — application settings without exposing secrets
- Log analysis — read and analyze application logs and browser console errors
- Tinker integration — execute PHP in your application context to test hypotheses
- Documentation search — 17,000+ pieces of Laravel-specific knowledge, version-aware
Boost also provides composable AI guidelines that are automatically assembled based on your installed packages. If you use Livewire 4, Filament 4, and Pest, the agent gets guidelines specific to those exact versions.
Laravel AI SDK
The AI SDK (released February 2026) is a first-party package for building AI-powered features into your application. It uses Prism under the hood and adds:
- The Agent pattern — dedicated PHP classes that encapsulate instructions, tools, and output schemas for interacting with LLMs
- Conversation persistence — maintain context across requests
- Streaming responses — real-time output for chat interfaces
- Testing utilities — mock AI responses in your test suite
Agent Skills
Laravel Skills is an open directory of reusable AI agent skills. These are lightweight knowledge modules that agents activate on demand, loading detailed patterns and best practices only when relevant to the current task.
Together, these tools mean your AI agent is not just guessing at Laravel conventions — it genuinely understands your application. That is a significant improvement. But it also means AI-generated code ships faster, with less friction, and with fewer human checkpoints between generation and production.
The security gap AI creates
The problem is not that AI writes bad code. The problem is that AI writes plausible code that passes the smell test but misses security fundamentals.
Missing input validation
This is the most common vulnerability in AI-generated code (CWE-20). When you ask an AI to build a controller action, it will generate something that handles the request and returns a response. What it often skips is validating and sanitizing the input.
// What AI often generates
public function update(Request $request, Project $project)
{
    $project->update($request->all());

    return redirect()->route('projects.show', $project);
}

// What it should generate
public function update(Request $request, Project $project)
{
    $validated = $request->validate([
        'name' => ['required', 'string', 'max:255'],
        'description' => ['nullable', 'string', 'max:1000'],
        'status' => ['required', 'in:active,archived'],
    ]);

    $project->update($validated);

    return redirect()->route('projects.show', $project);
}
The first version works. It will pass a basic test. But passing $request->all() straight to update() invites mass assignment: every fillable attribute on the model can be overwritten with attacker-supplied values. If the model uses $guarded = [] (a pattern AI tools often generate) or leaves sensitive fields like is_admin or owner_id fillable, a single crafted request can overwrite them.
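Validation in the controller is the first line of defense; the model itself is a second, independent one. A minimal sketch, assuming a hypothetical Project model with a sensitive owner_id column:

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Project extends Model
{
    // Whitelisting $fillable means that even if a controller passes
    // $request->all(), columns like owner_id cannot be mass assigned.
    protected $fillable = [
        'name',
        'description',
        'status',
    ];
}
```

An explicit $fillable whitelist is safer than $guarded, because a newly added sensitive column is protected by default instead of exposed by default.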
Broken authentication and authorization
AI models frequently generate endpoints without authorization checks (CWE-306, CWE-284). Unless your prompt explicitly mentions permissions, the generated code often assumes any authenticated user can access any resource.
// AI-generated: works, but no authorization
public function destroy(Project $project)
{
    $project->delete();

    return redirect()->route('projects.index');
}

// What you need
public function destroy(Project $project)
{
    $this->authorize('delete', $project);

    $project->delete();

    return redirect()->route('projects.index');
}
A single missing $this->authorize() call means any authenticated user can delete any project. This kind of flaw is easy to miss in review because the code looks complete.
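The authorize() call only works if a corresponding policy exists. A minimal sketch, assuming projects belong to users through a user_id column:

```php
<?php

namespace App\Policies;

use App\Models\Project;
use App\Models\User;

class ProjectPolicy
{
    // Centralizing the ownership check here means a forgotten
    // authorize() call is the only remaining gap, not the rule itself.
    public function delete(User $user, Project $project): bool
    {
        return $user->id === $project->user_id;
    }
}
```

Laravel auto-discovers policies that follow the Model/ModelPolicy naming convention, so this class is picked up without manual registration.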
Hardcoded credentials and secrets
AI models sometimes embed API keys, database credentials, or encryption keys directly in code (CWE-798), especially when generating configuration files, seeders, or integration code. Even when they use .env references, they may generate example .env files with real-looking credentials that get committed.
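The Laravel-idiomatic pattern is to reference secrets through config files that read from the environment, never through env() calls scattered across application code (env() returns null once the configuration is cached). A sketch, assuming a hypothetical third-party payment key:

```php
// config/services.php — the only place that touches the environment
'payments' => [
    'key' => env('PAYMENT_API_KEY'),
],

// Anywhere in application code: read from config, never from env()
$key = config('services.payments.key');
```

Reviewers can then treat any env() call outside the config directory, or any literal key in a diff, as an automatic red flag.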
Stale and hallucinated dependencies
AI models suggest packages based on their training data, which has a cutoff date. This creates two risks:
- Stale libraries — the model suggests a package version with a known CVE that was patched after its training cutoff
- Hallucinated packages — the model suggests a package that does not exist. Attackers monitor for these and register the names with malicious code (a technique called "slopsquatting")
Both risks expand your attack surface through your dependency tree without introducing any obvious red flags in code review.
Why speed makes this worse
The bottlenecks that used to catch security issues — writing code manually, waiting for code review, debugging — are being removed. With AI assistance, a feature that took a week now takes a day. That is genuinely valuable. But it also means:
- More code reaches production per unit of time
- Less time is spent manually reading each line
- Security reviews become the bottleneck, so they get shortened or skipped
- Configuration changes happen faster, increasing the chance of drift
A developer using AI tools might scaffold 10 endpoints in an afternoon. Each one might have a subtle authorization gap or missing validation rule. Multiply that across a team, and you have a significant accumulation of security debt that no one explicitly chose to take on.
How to stay secure while using AI tools
The answer is not to stop using AI. These tools are too valuable for that. The answer is to build security checks into the workflow so they happen automatically, regardless of how the code was written.
1. Use Laravel Boost's conventions
Boost gives AI agents Laravel-specific guidelines, which significantly reduces the chance of non-idiomatic code. Make sure you have it installed and configured:
composer require laravel/boost --dev
php artisan boost:install
The auto-generated guidelines steer agents toward Laravel's built-in security features (Form Requests, Policies, Gates) rather than hand-rolled alternatives.
2. Run static analysis on every PR
Tools like Larastan and PHPStan catch type errors and potential bugs that AI introduces. Add them to your CI pipeline so they run on every pull request:
# .github/workflows/analysis.yml
- name: Run Larastan
run: ./vendor/bin/phpstan analyse --memory-limit=2G
Static analysis will not catch every security issue, but it will catch mass assignment through $request->all(), undefined method calls, and type mismatches that indicate missing validation.
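A starting phpstan.neon for Larastan might look like this (level 5 is a reasonable baseline; raise it incrementally as the codebase is cleaned up):

```neon
includes:
    - vendor/larastan/larastan/extension.neon

parameters:
    paths:
        - app/
    level: 5
```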
3. Require Form Requests for all input handling
Establish a team convention that every controller action accepting user input must use a Form Request rather than inline validation. This makes missing validation visible in code review because the type hint is either there or it is not:
// Easy to spot: this action accepts raw input
public function store(Request $request) { ... }
// Clear: validation rules are defined in the Form Request class
public function store(StoreProjectRequest $request) { ... }
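A minimal StoreProjectRequest, mirroring the rules from the update example earlier (the class name is hypothetical, for illustration):

```php
<?php

namespace App\Http\Requests;

use App\Models\Project;
use Illuminate\Foundation\Http\FormRequest;

class StoreProjectRequest extends FormRequest
{
    // Authorization lives next to the validation rules, so a missing
    // check is visible in the same file reviewers already read.
    public function authorize(): bool
    {
        return $this->user()->can('create', Project::class);
    }

    public function rules(): array
    {
        return [
            'name' => ['required', 'string', 'max:255'],
            'description' => ['nullable', 'string', 'max:1000'],
            'status' => ['required', 'in:active,archived'],
        ];
    }
}
```

A nice side effect: Form Requests return 403 automatically when authorize() fails, so the permission check cannot be silently skipped.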
4. Audit your dependencies
Run composer audit regularly and add it to CI. This catches known CVEs in your dependency tree, including stale packages that AI suggested:
composer audit
For npm dependencies (which AI also generates):
npm audit
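Both audits fit naturally into the same CI workflow as static analysis. A sketch of the relevant steps (job and trigger configuration omitted):

```yaml
# .github/workflows/analysis.yml
- name: Audit Composer dependencies
  run: composer audit

- name: Audit npm dependencies
  run: npm audit --audit-level=high
```

Both commands exit non-zero when they find advisories, so the pull request fails without any extra scripting.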
5. Review AI-generated code with security in mind
When reviewing AI-generated PRs, check specifically for:
- Authorization: Does every action check permissions? ($this->authorize(), Gates, Policies)
- Validation: Is every input validated with specific rules? (not $request->all())
- Mass assignment: Are $fillable or $guarded properly set on models?
- Authentication: Are routes protected by auth middleware?
- Secrets: Are any credentials hardcoded instead of using .env?
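For the authentication check, route groups make coverage auditable at a glance: any route defined outside the group stands out in review. A sketch assuming the standard auth middleware and a hypothetical ProjectController:

```php
// routes/web.php
use App\Http\Controllers\ProjectController;
use Illuminate\Support\Facades\Route;

Route::middleware(['auth'])->group(function () {
    Route::resource('projects', ProjectController::class);
});
```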
6. Monitor your production application externally
This is where internal tools reach their limit. Static analysis, code review, and CI/CD checks all happen before deployment. But security issues can emerge after deployment from:
- Configuration drift — a deploy changes environment variables or exposes debug mode
- New endpoints — AI-generated routes that were not caught in review
- Missing security headers — AI-generated middleware that does not set CSP, HSTS, or X-Frame-Options
- Exposed development tools — Telescope, Horizon, or Ignition left accessible
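Missing security headers are cheap to fix with a small global middleware. A minimal sketch (tune the Content-Security-Policy to your asset pipeline before shipping, since 'self' alone will block CDN assets and inline scripts):

```php
<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Symfony\Component\HttpFoundation\Response;

class SecurityHeaders
{
    public function handle(Request $request, Closure $next): Response
    {
        $response = $next($request);

        // Baseline headers; external scanners flag their absence.
        $response->headers->set('X-Frame-Options', 'DENY');
        $response->headers->set('X-Content-Type-Options', 'nosniff');
        $response->headers->set('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
        $response->headers->set('Content-Security-Policy', "default-src 'self'");

        return $response;
    }
}
```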
External monitoring scans your running application from the outside, the same way an attacker would, and catches these issues regardless of how the code was written or reviewed.
The real risk is not AI itself
AI coding tools are not inherently dangerous. They produce the same categories of vulnerabilities that human developers produce — missing validation, broken authorization, hardcoded secrets. The difference is volume and speed.
When you write code by hand, the pace naturally creates checkpoints. When AI writes code, those checkpoints disappear unless you deliberately build them into your process. The teams that ship securely with AI tools are the ones that:
- Use framework-aware tools (Boost, Larastan) that enforce conventions
- Automate security checks in CI (static analysis, dependency audits)
- Review AI output specifically for security, not just functionality
- Monitor production continuously for issues that slip through
The velocity AI gives you is only valuable if the code reaching production is secure. Otherwise, you are just shipping vulnerabilities faster.
Key takeaways
- Over 40% of AI-generated code contains security flaws, primarily missing input validation, broken authentication, and hardcoded credentials
- Laravel Boost and the AI SDK improve code quality by giving agents framework-specific context, but they do not eliminate the need for security review
- Speed is the risk — AI removes the natural checkpoints that used to catch issues before production
- Automate your checks — static analysis, dependency audits, and external monitoring should run automatically, not depend on human diligence
- Monitor externally — the gap between what your CI catches and what is actually exposed in production is where attackers operate
Frequently Asked Questions
Is AI-generated Laravel code secure?
Not by default. Studies show that over 40% of AI-generated code contains security flaws, including missing input validation, SQL injection, broken authentication, and hardcoded credentials. AI models generate code that works logically but often lacks the security context of your specific application. You need to validate, review, and monitor AI-generated code the same way you would code from a junior developer.
What is Laravel Boost?
Laravel Boost is an MCP (Model Context Protocol) server that gives AI coding agents deep insight into your Laravel application. It provides 15+ tools for inspecting your database schema, routes, configuration, logs, and documentation. Boost transforms general-purpose AI agents into Laravel-specific experts that understand your exact application structure and conventions.
What is the Laravel AI SDK?
The Laravel AI SDK is a first-party package (released February 2026) for building AI-powered features directly in Laravel applications. It provides the Agent pattern, conversation persistence, streaming responses, and testing utilities. It uses Prism under the hood and supports providers like OpenAI, Anthropic, and Ollama.
What are the most common security vulnerabilities in AI-generated code?
The most common vulnerabilities are missing input validation (CWE-20), SQL injection (CWE-89), broken authentication (CWE-306), broken access control (CWE-284), hardcoded credentials (CWE-798), and OS command injection (CWE-78). AI models also introduce risks through stale dependencies with known CVEs and hallucinated package names that attackers can register with malicious code.
How do I secure my Laravel application when using AI coding tools?
Use AI tools like Boost that understand Laravel conventions. Always review generated code for security issues, especially input validation, authentication, and authorization. Run static analysis (Larastan, PHPStan) on AI-generated code. Use external security monitoring to catch misconfigurations that reach production. Never trust AI-generated code with sensitive operations without manual review.