"Vibe coding" (letting an AI agent implement features from natural-language prompts with minimal review) is fast—but it's also a reliable way to accidentally ship vulnerabilities.
A recent benchmark study on agent-generated code ("SUSVIBES") found that even when the best system produced solutions that were functionally correct, a large majority of those "working" solutions were still insecure (the paper reports ~80%+ insecure among functionally-correct outputs in their setup).
Below is a practical checklist you can bake into your build process (and your AI coding rules) so your app doesn't become part of that statistic.
The Security Checklist
1) Don't do sensitive "math" on the client
If business-critical logic lives on a user device (browser/mobile), assume it can be changed.
Move server-side:
- Pricing, discounts, taxes, credits
- Scores, ranks, eligibility, permissions
- Anything that affects money, access, or trust
The client should collect input and display results; it should never be the source of truth for sensitive calculations.
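As a minimal sketch of the server-side boundary: the client submits only SKUs and quantities, and the server looks up prices itself (the catalog and SKU names here are hypothetical stand-ins for a database lookup).

```typescript
// Hypothetical catalog; in a real app this would be a DB lookup.
const PRICE_CATALOG: Record<string, number> = {
  "sku-basic": 999,  // prices in cents
  "sku-pro": 2999,
};

// Server-side total: the client never sends a price, only SKUs
// and quantities. Unknown SKUs and bad quantities are rejected.
function computeTotalCents(items: { sku: string; qty: number }[]): number {
  return items.reduce((total, { sku, qty }) => {
    const unit = PRICE_CATALOG[sku];
    if (unit === undefined) throw new Error(`unknown sku: ${sku}`);
    if (!Number.isInteger(qty) || qty < 1 || qty > 100) {
      throw new Error(`invalid quantity for ${sku}`);
    }
    return total + unit * qty;
  }, 0);
}
```

Even if an attacker edits the page to show a $0 price, the server recomputes the total from its own catalog, so the tampered value never matters.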
2) Sanitize and validate all inputs
Treat every input as hostile until proven otherwise—forms, query params, headers, webhook payloads, file uploads.
Minimum bar:
- Validate types + bounds (e.g., number ranges, enum values)
- Normalize/escape user-provided text before storage/output
- Use parameterized queries/ORM safely (avoid string-built queries)
- Apply output encoding in templates to prevent XSS
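The validation rules above can be sketched as a dependency-free parser that turns an untyped payload into a known shape (many teams use a schema library for this; the field names here are illustrative assumptions).

```typescript
type Plan = "free" | "pro" | "team";
const PLANS: readonly string[] = ["free", "pro", "team"];

// Validate an untrusted payload (e.g. a parsed JSON body) by
// enforcing types, numeric bounds, and an allow-list enum.
function parseSignup(body: unknown): { email: string; seats: number; plan: Plan } {
  const b = (body ?? {}) as Record<string, unknown>;
  const { email, seats, plan } = b;
  if (typeof email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    throw new Error("invalid email");
  }
  if (typeof seats !== "number" || !Number.isInteger(seats) || seats < 1 || seats > 500) {
    throw new Error("invalid seats");
  }
  if (typeof plan !== "string" || !PLANS.includes(plan)) {
    throw new Error("invalid plan");
  }
  return { email, seats, plan: plan as Plan };
}
```

The key pattern is the allow-list: fields must prove they are valid, rather than the code trying to enumerate every bad input.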
3) Put a speed limit on expensive actions (rate limiting)
Without rate limits, any "cost button" becomes an attack surface:
- Sending email/SMS
- Generating images or AI completions
- Creating invoices/exports
- Login/password reset
Do:
- Rate limit per IP + per user
- Add quotas for costly endpoints
- Add backoff/cooldowns for repeated failures
- Consider CAPTCHA/attestation only where it's warranted
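A per-IP + per-user limit can be as simple as a fixed-window counter keyed by something like `${ip}:${userId}`. This in-memory sketch illustrates the idea; production systems typically back it with Redis or platform middleware so limits survive restarts and apply across instances.

```typescript
// Minimal in-memory fixed-window rate limiter.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if over the limit.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New key or expired window: start counting fresh.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// Example: at most 3 password-reset emails per minute per key.
const resetLimiter = new RateLimiter(3, 60_000);
```

Fixed windows are the simplest scheme; sliding windows or token buckets smooth out the burst at window boundaries if that matters for your endpoint.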
4) Don't log sensitive data
AI-generated fixes often add "just log everything" debugging. That can leak secrets in browser console, server logs, observability dashboards, and error reporting tools.
Never log: Passwords, reset tokens, API keys, auth headers, session cookies, full payment details, raw personal data.
Prefer: Redaction helpers, structured logs with allow-lists, correlation IDs to trace errors without exposing payloads.
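The allow-list approach can be sketched in a few lines: only explicitly named fields survive into the log, so a newly added sensitive field is dropped by default instead of leaking (the field names here are assumptions for illustration).

```typescript
// Allow-list of fields that are safe to log. Anything not listed
// is dropped entirely, which fails closed for new fields.
const SAFE_FIELDS = new Set(["requestId", "userId", "route", "status", "durationMs"]);

function safeLogFields(fields: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(fields)) {
    if (SAFE_FIELDS.has(key)) out[key] = value;
  }
  return out;
}
```

This is the inverse of a redaction block-list: instead of trying to name every secret (and missing one), you name the handful of fields you know are safe.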
5) Audit with a "rival" (second pass, different tool/model)
A second opinion catches patterns the first agent misses.
- Ask a different AI model to audit for vulnerabilities
- Run SAST/linters (Semgrep, ESLint security rules)
- Run dependency scanning (npm audit, Snyk, Dependabot)
- Add a lightweight threat-model checklist per feature
6) Keep dependencies up to date (this is not optional)
Outdated packages are one of the easiest ways to get owned—especially in popular frameworks.
Process improvements:
- Weekly dependency update cadence
- Auto PRs + CI for upgrades
- Patch fast when a critical CVE drops
7) Handle errors without revealing internals
Verbose errors help attackers map your system.
Public-facing errors: generic ("Something went wrong"), with no stack traces, SQL details, file paths, or internal hostnames.
Private logs: full stack traces, request IDs, and the minimum necessary context (redacted).
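One way to enforce that split is a helper that converts any caught error into two artifacts: a generic public body and a detailed private log entry, linked by a request ID so support can trace the failure without exposing internals (this shape is a sketch, not a prescribed API).

```typescript
// Split an error into what the user sees vs. what gets logged.
// The shared requestId lets you correlate the two later.
function toPublicError(err: Error, requestId: string): {
  publicBody: { error: string; requestId: string };
  privateLog: { requestId: string; message: string; stack?: string };
} {
  return {
    // Safe to return from the API: no internals, just a handle.
    publicBody: { error: "Something went wrong", requestId },
    // Goes to your private log sink only.
    privateLog: { requestId, message: err.message, stack: err.stack },
  };
}
```

The user-facing message stays constant no matter what failed, so a probing attacker learns nothing from varying their input.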
Add this to your .cursorrules
# Security rules for AI-generated code (Cursor / agent rules)
- Never trust client-side values for money, permissions, scoring, or eligibility. All sensitive calculations and authorization checks must run server-side.
- Validate and sanitize ALL inputs (body, query, headers, webhooks). Enforce strict schemas, bounds, and allow-lists. Use parameterized queries only.
- Add rate limiting + quotas to any endpoint that triggers cost or side effects: auth, email/sms, AI generation, uploads, exports, webhooks. Include per-IP and per-user limits, plus backoff/cooldowns.
- Do not log secrets or sensitive user data. Redact tokens, passwords, headers, cookies, payment details, and PII by default.
- Implement secure error handling: generic messages to users, detailed logs privately with correlation/request IDs.
- Keep dependencies updated and respond urgently to critical CVEs. Add dependency scanning (CI) and pin/upgrade vulnerable packages.
- Before merging, run a security review: (1) static scan (lint/SAST), (2) dependency scan, (3) second-model audit prompt: "Audit this diff for auth, injection, XSS, SSRF, IDOR, secrets leakage, and rate-limit gaps."
A note on front-end CRUD with Supabase
Front-end-to-database patterns can be secure if you have tight Row Level Security (RLS), correct policies, and careful key management. But misconfiguration is common, and sensitive logic still tends to belong behind a server boundary, especially for billing, permissions, and anything you might need to change quickly without waiting on mobile app-store releases.