LR-02

One Exposed Key Gives Strangers Full Access to Your Database

Your Supabase project has two keys. The anon key is public — it's designed to be in the browser. The service_role key is the master key. It bypasses all Row Level Security. It can read, write, and delete every row in every table. It can access the auth.users table. It can create and remove users.

If the service_role key is in your frontend bundle, every visitor to your site has it.

The same applies to your Stripe secret key. If sk_live_ is in your client-side JavaScript, anyone with a browser can refund payments, access customer data, create charges, and read your full transaction history.

These keys are not hard to find. They're in the JavaScript source, in network requests, in your public GitHub repository. Security researchers found 500+ exposed Stripe keys in client-side bundles in a single month. Supabase secret leaks on GitHub grew 992% year over year according to GitGuardian's 2026 report.

This is one of the most dangerous single points of failure in an AI-generated app. One key, total exposure.


Who This Is For

  • Founders who used AI tools to generate their Supabase or Stripe integration and aren't sure which keys ended up where
  • Developers working with Next.js who use environment variables but aren't confident about the client/server boundary
  • Anyone who has a .env file and hasn't verified which variables are exposed to the browser
  • Teams who committed .env files to Git at any point during development — even if they were later removed

If you built with Lovable, Bolt, Cursor, or a similar AI tool, key exposure is one of the most frequently detected critical findings. In audits of AI-generated apps, exposed credentials consistently rank as the number one issue.


What Founders Experience

  • The app works perfectly. The service_role key in client code doesn't cause errors — it gives the frontend more access than it should have. Everything works better than expected, because nothing is restricted.
  • You don't know it's exposed. The key is embedded in the JavaScript bundle. You don't see it in the UI. You don't see it in logs. It's only visible to someone who opens DevTools or reads the page source.
  • Someone finds it. A security researcher, a competitor, or an automated scanner finds the key. With service_role, they bypass all your RLS policies. Every table is fully readable and writable. With a Stripe secret key, they have full API access to your payment infrastructure.
  • The blast radius is total. Unlike a single missing RLS policy that exposes one table, a leaked service_role key exposes everything — every table, every user, every file in storage. A leaked Stripe key exposes every customer, every payment, every refund capability.
  • Rotation is urgent and disruptive. Fixing this isn't just removing the key from the code. It's rotating the key in Supabase or Stripe, redeploying, and auditing whether the key was used maliciously during the exposure window.

What's Actually Happening

There are four common ways keys end up in the browser:

1. The NEXT_PUBLIC_ Prefix Mistake

In Next.js, any environment variable prefixed with NEXT_PUBLIC_ is included in the client-side JavaScript bundle. If someone names their variable NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY, that master key ships to every browser.

This is the most common pattern in AI-generated code. The AI doesn't distinguish between public and secret keys when generating environment variable names. It optimizes for "code that runs," and NEXT_PUBLIC_ makes the variable accessible everywhere.
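The difference is easiest to see in the env file itself (variable names here follow Supabase's common convention, but any name works the same way):

```ini
# .env.local — safe: the anon key is designed to be public
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJ...

# Safe: no NEXT_PUBLIC_ prefix, so Next.js keeps it server-side
SUPABASE_SERVICE_ROLE_KEY=eyJ...

# DANGEROUS: the prefix ships this master key to every browser
# NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY=eyJ...
```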

2. AI Generates Client-Side Code That Uses the Secret Key

AI tools sometimes import the service_role key directly in a React component or client-side utility file. The code works — it has more permissions than necessary, so it never hits a "permission denied" error. The developer never notices because the app functions correctly.
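One cheap mitigation, sketched here in plain TypeScript (assertServerOnly is a hypothetical helper, not a Supabase or Next.js API), is to make secret-bearing modules fail loudly the moment they are evaluated in a browser:

```typescript
// Hypothetical guard for a server-only module. Browsers expose `window` on
// globalThis; Node server runtimes do not, so a misplaced client-side import
// throws immediately instead of silently shipping the secret.
function assertServerOnly(moduleName: string): void {
  if ((globalThis as { window?: unknown }).window !== undefined) {
    throw new Error(`${moduleName} must never be imported from client code`);
  }
}

assertServerOnly("supabase-admin"); // runs at module load time
```

In real Next.js projects the server-only npm package achieves the same effect at build time: import "server-only" makes the build fail if the module is pulled into client code.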

In the Lovable CVE (CVE-2025-48757), affected apps exposed secrets through VITE_-prefixed variables, which Vite inlines into the client bundle. Bolt.new had cases where the service_role key appeared in seed scripts committed to public GitHub repositories.

3. The .env File Is Committed to Git

The developer commits .env to the repository. Even if it's later added to .gitignore, the key remains in git history. Anyone who clones the repo — or looks at the commit history — can find it.

GitGuardian's 2026 report found 28.65 million secrets on public GitHub repositories, a 34% year-over-year increase. Repositories using AI coding assistants like Copilot leaked secrets at a 40% higher rate than baseline.

4. The 'use server' False Confidence

In the Next.js App Router, 'use server' marks exported functions as Server Actions. It is not a security boundary and not a substitute for explicit client/server separation: it does nothing for other modules, and if a secret-bearing module is imported into client-accessible code, the secret can still end up in the bundle. Treating the directive as protection creates false confidence.
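A safer shape for a Server Action, sketched with hypothetical names and the actual Supabase call omitted so the snippet stays self-contained: read the secret inside the function, use it on the server, and return only non-secret data.

```typescript
"use server"; // in a real Next.js file, marks the exports below as Server Actions

// Hypothetical Server Action: the secret never sits in module scope shared
// with client code, and never appears in the value returned to the caller.
export async function deleteUser(userId: string): Promise<{ ok: boolean }> {
  const key = process.env.SUPABASE_SERVICE_ROLE_KEY; // note: no NEXT_PUBLIC_ prefix
  if (!key) throw new Error("SUPABASE_SERVICE_ROLE_KEY is not set");
  // ...call the Supabase Admin API with `key` and `userId` here (omitted)...
  return { ok: true }; // only non-secret data crosses back to the client
}
```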


What This Puts at Risk

Complete database access. The service_role key bypasses all RLS. Even if you have perfect Row Level Security policies, they don't apply when the request uses service_role. An attacker can read every user's data, modify any row, and delete records.

Payment infrastructure compromise. A Stripe secret key (sk_live_) gives full API access: create charges, issue refunds, read customer data, modify subscriptions. One documented incident saw 175 customers charged $500 each — $87,500 in fraudulent charges — before the founder rotated the key.

Credential chain reaction. A leaked key in a public repo often leads to additional exposure. The key can be used to access other secrets stored in the database, API tokens for third-party services, or customer payment data — creating a cascade of compromised credentials.

Long-lived exposure. GitGuardian found that 64% of secrets confirmed valid in 2022 were still active and exploitable in 2026. Keys that leak are rarely rotated promptly, even after the leak is discovered.


How Trust Score Detects It

Trust Score runs five checks that catch key exposure across the most common leak vectors:

AUTH-01: service_role key not in client code. Scans for service_role usage, secret-bearing Supabase client initialization, and patterns that place privileged database access in client-reachable code.

AUTH-05: No secrets with NEXT_PUBLIC_ prefix. Checks your environment configuration for secret keys that have been incorrectly prefixed with NEXT_PUBLIC_, which would include them in the client bundle.

BIL-01: Stripe secret key not in client code. Scans for sk_live_ and sk_test_ patterns in frontend code. A Stripe secret key in the browser gives anyone full access to your Stripe account.

ENV-01: .env.example exists. A hygiene check: projects with a clean .env.example are less likely to scatter secrets ad hoc across code and configuration. Not a direct exploit check, but a strong signal of intentional secret management.

ENV-02: No secrets in committed .env files. Checks whether .env files containing real secrets have been committed to the repository.


Real Incidents

$87,500 in fraudulent charges (February 2026). A founder built his startup with Claude Code. The AI placed the Stripe secret key in a client-accessible location. An attacker found it, charged 175 customers $500 each, and the founder lost $2,500 in Stripe fees before rotating the key. As one account of the incident put it: "He trusted the AI. The AI never questioned the security. Neither did he."

Moltbook — exposed Supabase key in client-side JavaScript (January 2026). The AI-built social network had a Supabase key in client-side JavaScript. Combined with missing access controls and no RLS, Wiz researchers were able to access 1.5 million authentication tokens, 35,000 emails, and 4,000 private DMs.

Bolt.new seed script on public GitHub. A campus events app built with Bolt.new had the Supabase service_role key hardcoded in scripts/seed.cjs — committed to a public GitHub repository. Anyone who found the repo had full database access.

500+ Stripe keys found in one month (2026). A security researcher reported finding over 500 exposed Stripe live secret keys in client-side JavaScript bundles across production apps, many built with AI tools.

Supabase secret leaks +992% YoY (GitGuardian 2026). The annual secrets report showed that Supabase-related secret leaks grew nearly 10x in one year, driven largely by AI-assisted development. AI-assisted commits leak secrets at 2x the baseline rate.

72% of AI Android apps have hardcoded secrets (Cybernews). An analysis of 38,630 Android apps found that 72% contained at least one hardcoded secret, with an average of 5.1 secrets per app. Among them: Stripe live secret keys.


Detection: How to Check Your Own App

Check 1: Search your frontend bundle

After building your Next.js app, search the output for exposed keys:

# Build and search for Supabase service_role
npx next build
grep -r "service_role" .next/static/ .next/server/

# Search for Stripe secret key
grep -r "sk_live_" .next/static/ .next/server/
grep -r "sk_test_" .next/static/ .next/server/

Interpretation: Any match in .next/static/ means the key is in the client bundle, visible to every visitor. Matches only in .next/server/ are expected for server-side code and are not exposed to the browser.

Check 2: Check your environment variables

# List all NEXT_PUBLIC_ variables across your env files
grep -H "NEXT_PUBLIC_" .env*

# Check for secrets with the public prefix
grep -iE "NEXT_PUBLIC_.*(SERVICE_ROLE|SECRET|SK_LIVE)" .env*

Interpretation: Any secret key with NEXT_PUBLIC_ prefix ships to the browser.

Check 3: Search your git history

# Check if .env was ever committed
git log --all --diff-filter=A -- .env
git log --all --diff-filter=A -- .env.local

# Search commit history for key patterns
git log --all -p | grep -i "service_role\|sk_live_\|sk_test_" | head -20

Interpretation: Even if .env is now in .gitignore, if it was ever committed, the secrets are in your git history.


FAQ

How do I know if my key is in the frontend bundle?

Build your app with npx next build, then search the .next/static/ directory for your key patterns. If a key appears there, it's in the client-side JavaScript that every visitor downloads. You can also open your deployed app, view page source, and search for service_role or sk_live_.

I removed the key from the code. Is that enough?

No. If the key was ever deployed in production, it may have been captured by browsers, proxies, or caches. If it was committed to Git, it's in the repository history. You need to rotate the key in Supabase or Stripe, not just remove it from the codebase.

What's the difference between the anon key and service_role key?

The anon key is designed to be public. It respects Row Level Security — users can only access data that RLS policies allow. The service_role key bypasses all RLS. It has full read/write access to every table, including auth.users. It should never leave the server.
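If you are not sure which key a variable actually holds, one trick, sketched below with no dependencies, exploits the fact that Supabase's legacy API keys are JWTs whose payload carries a role claim:

```typescript
// Decode the (unverified) middle segment of a Supabase-style JWT and return
// its "role" claim: "anon" for the public key, "service_role" for the master key.
function keyRole(jwt: string): string | undefined {
  const payload = jwt.split(".")[1];
  if (!payload) return undefined;
  try {
    return JSON.parse(Buffer.from(payload, "base64").toString("utf8")).role;
  } catch {
    return undefined;
  }
}
```

Supabase's newer key format (sb_publishable_ / sb_secret_ prefixes) is not JWT-based; for those keys, the prefix itself tells you which is which.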

Does 'use server' in Next.js protect my keys?

No. 'use server' helps define server actions, but it does not replace careful separation between secret-bearing server code and client-accessible code. Use separate Supabase client instances for server and client — and verify your build output to confirm that secrets don't appear in the client bundle.
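The separation can be sketched like this, with Supabase's createClient replaced by a hypothetical makeClient so the example is self-contained (the env var names follow Supabase's common convention):

```typescript
type Client = { url: string; key: string };
const makeClient = (url: string, key: string): Client => ({ url, key });

type Env = Record<string, string | undefined>;

// lib/supabase/client.ts — safe to import anywhere: both values are public
export function browserClient(env: Env): Client {
  return makeClient(env.NEXT_PUBLIC_SUPABASE_URL!, env.NEXT_PUBLIC_SUPABASE_ANON_KEY!);
}

// lib/supabase/admin.ts — server only: the key has no NEXT_PUBLIC_ prefix,
// so Next.js never inlines it into the client bundle
export function adminClient(env: Env): Client {
  return makeClient(env.NEXT_PUBLIC_SUPABASE_URL!, env.SUPABASE_SERVICE_ROLE_KEY!);
}
```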

AI generated my Stripe integration. Should I worry?

Yes. AI tools frequently place secret keys in locations accessible to the client. They optimize for "code that runs" and don't distinguish between publishable keys (pk_live_) and secret keys (sk_live_). Check specifically whether sk_live_ appears anywhere in your frontend code or client-side environment variables.
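Stripe's key prefixes are documented and stable, so triaging any key string you find is straightforward; a minimal sketch:

```typescript
// Classify a Stripe key by prefix: publishable keys (pk_) are designed for the
// browser; secret (sk_) and restricted (rk_) keys must stay server-side.
function stripeKeyKind(key: string): "publishable" | "secret" | "unknown" {
  if (/^pk_(live|test)_/.test(key)) return "publishable";
  if (/^(sk|rk)_(live|test)_/.test(key)) return "secret";
  return "unknown";
}
```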


Is This Happening in Your App?

Run a free Trust Score scan — 24 safety checks across auth, billing, and admin. Results in seconds.