Missing CI/CD in AI-Generated Code: Detection and Remediation
Missing CI/CD is a failure pattern in AI-generated codebases where the path from code change to production has no automated enforcement layer. Every merge is unguarded. Every deployment is a manual verification exercise. Every structural violation — circular dependencies, layer boundary violations, regeneration losses, test failures — reaches production with no automated mechanism to detect or prevent it.
The structural mechanism: prompt-driven development is optimized for the speed of the first ship. Establishing CI/CD requires a deliberate investment that competes with feature development. Each sprint, the decision is deferred. By the time the codebase has grown to a scale where CI/CD is critical, the absence of it has become a structural liability — every change carries unquantified risk, and establishing CI/CD now requires first fixing the structural violations that a functioning pipeline would immediately surface.
This page explains how to assess the actual enforcement state of the deployment path, how to distinguish between a functioning CI/CD pipeline and a false pipeline that provides no protection, and what the remediation path looks like.
What We Observe
Missing CI/CD in AI-generated codebases presents with specific structural signatures:
- Manual deployment as the primary release process — the team runs git push and manually verifies the deployment in a staging environment before promoting to production; there is no automated gate
- "It worked on my machine" regressions — a change that passes manual testing in the developer's environment fails in production because environment differences are not caught by automated checks
- Structural violations accumulate unchecked — circular dependencies, cross-layer imports, and naming inconsistencies are introduced without any automated signal; they are discovered weeks later during a manual audit
- Regeneration losses reach production — a prompt-driven session overwrites custom logic; without a preservation marker check in CI/CD, the loss is deployed and discovered in production
- Coverage degradation without signal — test coverage drops from 35% to 8% over three months because there is no automated threshold enforcement; the team does not notice until a production incident
The critical distinction: missing CI/CD is not just the absence of a pipeline configuration file. It includes pipelines that exist but enforce nothing — build-only pipelines that report green on every commit while providing no protection against structural violations or regressions.
What a Functioning CI/CD Pipeline Enforces
A functioning CI/CD pipeline for an AI-generated codebase enforces four categories of checks:
Layer 1: Structural integrity (fastest — fail early)
✓ Boundary linter — no cross-layer imports, no circular dependencies
✓ Naming linter — no convention violations
✓ Type checker — no type errors (TypeScript: tsc --noEmit)
Layer 2: Test enforcement
✓ Test suite passes — all tests green
✓ Coverage threshold — coverage does not drop below baseline (e.g., 30%)
Layer 3: Preservation integrity
✓ Preservation marker check — protected regions not modified by regeneration
Layer 4: Build verification (last — most expensive)
✓ Build succeeds — application compiles and bundles correctly
A pipeline that enforces only Layer 4 (build verification) is a false pipeline — it reports green on every commit while providing no protection against the failure patterns that matter.
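For illustration, a build-only workflow of the kind described above might look like this (a hypothetical sketch, not taken from any real repository):

```yaml
# .github/workflows/ci.yml — a false pipeline: green on every commit,
# but nothing here exercises tests, lint, coverage, or boundaries.
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run build   # the only gate — compilation success
```

Every check in this workflow passes as long as the code compiles; a circular dependency, a deleted test suite, or an overwritten protected region all merge green.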
Detection
Step 1: CI/CD Presence Check
echo "=== CI/CD configuration presence ==="
# GitHub Actions
if [ -d ".github/workflows" ]; then
  COUNT=$(ls .github/workflows/*.yml .github/workflows/*.yaml 2>/dev/null | wc -l)
  echo "✓ GitHub Actions: $COUNT workflow(s)"
  for f in .github/workflows/*.yml .github/workflows/*.yaml; do
    [ -e "$f" ] && echo "  - $(basename "$f")"
  done
else
  echo "✗ GitHub Actions: ABSENT"
fi
# Other CI providers
[ -f ".gitlab-ci.yml" ] && echo "✓ GitLab CI: present" || echo "  GitLab CI: absent"
[ -f ".circleci/config.yml" ] && echo "✓ CircleCI: present" || echo "  CircleCI: absent"
[ -f "azure-pipelines.yml" ] && echo "✓ Azure Pipelines: present" || \
  echo "  Azure Pipelines: absent"
[ -f "Jenkinsfile" ] && echo "✓ Jenkins: present" || echo "  Jenkins: absent"
[ -f ".travis.yml" ] && echo "✓ Travis CI: present" || echo "  Travis CI: absent"
Step 2: Enforcement Depth Assessment (false pipeline detection)
echo ""
echo "=== Enforcement depth assessment ==="
# Note: grep -q uses grep's own exit status; piping through head -1 would
# always exit 0 and report every check as PRESENT.
# Check for test step
grep -rq "pytest\|jest\|vitest\|npm test\|yarn test\|python -m pytest" \
  .github/workflows/ .gitlab-ci.yml .circleci/ 2>/dev/null && \
  echo "✓ Test step: PRESENT" || echo "✗ Test step: ABSENT"
# Check for linting step
grep -rq "flake8\|eslint\|mypy\|ruff\|pylint\|tsc --noEmit" \
  .github/workflows/ .gitlab-ci.yml .circleci/ 2>/dev/null && \
  echo "✓ Lint step: PRESENT" || echo "✗ Lint step: ABSENT"
# Check for coverage threshold
grep -rq "cov-fail-under\|coverageThreshold\|--coverage\|coverage-minimum" \
  .github/workflows/ .gitlab-ci.yml .circleci/ 2>/dev/null && \
  echo "✓ Coverage threshold: PRESENT" || echo "✗ Coverage threshold: ABSENT"
# Check for dependency/boundary check
grep -rq "madge\|depcruise\|dependency-cruiser\|asa lint\|import-lint" \
  .github/workflows/ .gitlab-ci.yml .circleci/ 2>/dev/null && \
  echo "✓ Boundary check: PRESENT" || echo "✗ Boundary check: ABSENT"
# Check for preservation marker verification
grep -rq "preservation\|BEGIN USER CODE\|END USER CODE\|check.*marker" \
  .github/workflows/ .gitlab-ci.yml .circleci/ 2>/dev/null && \
  echo "✓ Preservation check: PRESENT" || echo "✗ Preservation check: ABSENT"
echo ""
echo "=== False pipeline detection ==="
# A pipeline with a build step but no test step is a false pipeline
HAS_BUILD=$(grep -rl "npm run build\|python -m build\|cargo build" \
  .github/workflows/ 2>/dev/null | head -1)
HAS_TEST=$(grep -rl "pytest\|jest\|npm test" .github/workflows/ 2>/dev/null | head -1)
if [ -n "$HAS_BUILD" ] && [ -z "$HAS_TEST" ]; then
  echo "⚠ FALSE PIPELINE DETECTED: build step present, no test step"
  echo "  This pipeline reports green on every commit but provides no protection."
fi
Step 3: RC05 Quick Severity Assessment
echo ""
echo "=== RC05 severity assessment ==="
HAS_CI=0
HAS_TESTS=0
HAS_LINT=0
HAS_COVERAGE=0
HAS_BOUNDARY=0
{ [ -d ".github/workflows" ] || [ -f ".gitlab-ci.yml" ]; } && HAS_CI=1
grep -rq "pytest\|jest\|npm test" .github/workflows/ .gitlab-ci.yml \
  2>/dev/null && HAS_TESTS=1
grep -rq "flake8\|eslint\|mypy\|tsc" .github/workflows/ .gitlab-ci.yml \
  2>/dev/null && HAS_LINT=1
grep -rq "cov-fail-under\|coverageThreshold" .github/workflows/ .gitlab-ci.yml \
  2>/dev/null && HAS_COVERAGE=1
grep -rq "madge\|depcruise" .github/workflows/ .gitlab-ci.yml \
  2>/dev/null && HAS_BOUNDARY=1
SCORE=$((HAS_CI + HAS_TESTS + HAS_LINT + HAS_COVERAGE + HAS_BOUNDARY))
echo "Enforcement components present: $SCORE / 5"
case $SCORE in
  5) echo "Severity: LOW — full enforcement layer present" ;;
  3|4) echo "Severity: MEDIUM — partial enforcement, gaps present" ;;
  1|2) echo "Severity: HIGH — minimal enforcement, most violations undetected" ;;
  0) echo "Severity: CRITICAL — no CI/CD, all violations reach production" ;;
esac
Interpretation Table
| CI/CD state | Severity | What reaches production undetected |
|---|---|---|
| Full enforcement (tests + lint + boundary + coverage) | Low | Nothing structural — violations blocked at merge |
| Tests only, no boundary check | Medium | Circular deps, layer violations, naming inconsistencies |
| Build only (false pipeline) | High | All structural violations + regressions + regeneration losses |
| No CI/CD | Critical | Everything — no automated gate exists |
Remediation Path
Step 1: Establish the Minimum Viable Pipeline First
The minimum viable CI/CD pipeline runs in under 3 minutes and catches the most critical violations:
# .github/workflows/ci.yml — minimum viable pipeline
name: CI
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
jobs:
  enforce:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up environment
        uses: actions/setup-node@v4  # or setup-python@v5
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci  # or: pip install -r requirements.txt
      # Step 1: Type check (fast — catches obvious errors)
      - name: Type check
        run: npx tsc --noEmit  # or: mypy .
      # Step 2: Lint (fast — catches naming and style violations)
      - name: Lint
        run: npx eslint . --ext .ts,.tsx  # or: flake8 .
      # Step 3: Tests with coverage threshold
      - name: Test
        run: npx jest --coverage --coverageThreshold='{"global":{"lines":30}}'
        # or: pytest --cov=. --cov-fail-under=30
Step 2: Add Structural Enforcement (boundary linter)
      # Step 4: Boundary check (catches circular dependencies)
      - name: Dependency boundary check
        run: npx madge --circular --extensions ts,tsx src/
        # madge exits with code 1 when a cycle is found — fails the build
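madge covers cycles but not layer direction. Until a rule-based tool such as dependency-cruiser is configured, even a plain grep can gate the most common violation. A minimal sketch — the src/ui and src/db layer paths are hypothetical, and the demo builds its own throwaway fixture rather than scanning a real tree:

```shell
# Create a throwaway fixture with one deliberate violation.
TMP=$(mktemp -d)
mkdir -p "$TMP/src/ui" "$TMP/src/db"
printf "import { query } from '../db/client';\n" > "$TMP/src/ui/page.ts"

# The check: any import path mentioning db/ inside the UI layer is a violation.
VIOLATIONS=$(grep -rn "from ['\"].*db/" "$TMP/src/ui" --include='*.ts' || true)
if [ -n "$VIOLATIONS" ]; then
  echo "cross-layer import detected"
  # In CI this branch would exit 1 to fail the build.
fi
rm -rf "$TMP"
```

The grep approach is deliberately crude — it cannot distinguish allowed from forbidden directions across many layers — but it turns a silent violation into a red build today, and can be replaced by a dependency-cruiser ruleset later.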
Step 3: Add Preservation Marker Check
# scripts/check_preservation_markers.py
"""
Verifies that content between === BEGIN USER CODE === and === END USER CODE ===
markers has not been modified in the current commit.
"""
import subprocess
import sys

def get_changed_files():
    result = subprocess.run(
        ['git', 'diff', '--cached', '--name-only'],
        capture_output=True, text=True
    )
    return [f for f in result.stdout.strip().split('\n') if f]

def check_markers(filepath):
    try:
        with open(filepath) as f:
            content = f.read()
        # Files without preservation markers are not protected
        if '=== BEGIN USER CODE ===' not in content:
            return True
        # Get the staged diff for this file
        diff = subprocess.run(
            ['git', 'diff', '--cached', filepath],
            capture_output=True, text=True
        ).stdout
        # Flag any added/removed lines inside protected regions
        in_protected = False
        violations = []
        for line in diff.split('\n'):
            if '=== BEGIN USER CODE ===' in line:
                in_protected = True
            if '=== END USER CODE ===' in line:
                in_protected = False
            if in_protected and (line.startswith('+') or line.startswith('-')):
                if not line.startswith('+++') and not line.startswith('---'):
                    violations.append(line)
        if violations:
            print(f"❌ PRESERVATION VIOLATION: {filepath}")
            for v in violations[:3]:
                print(f"  {v}")
            return False
        return True
    except OSError:
        return True  # deleted or unreadable file — skip

changed = get_changed_files()
violations = [f for f in changed
              if f.endswith(('.py', '.ts', '.tsx')) and not check_markers(f)]
if violations:
    print(f"\nPreservation check failed: "
          f"{len(violations)} file(s) modified protected regions")
    sys.exit(1)
print("✓ Preservation check passed")
      # Step 5: Preservation marker check
      - name: Check preservation markers
        run: python scripts/check_preservation_markers.py
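The core rule the script applies — any added or removed line between the markers counts as a violation — can be seen in isolation. A standalone sketch (protected_changes is a hypothetical helper for illustration, not part of the script above):

```python
# Scan unified-diff text for +/- lines inside protected regions.
def protected_changes(diff: str) -> list:
    in_protected = False
    hits = []
    for line in diff.splitlines():
        if '=== BEGIN USER CODE ===' in line:
            in_protected = True
        if '=== END USER CODE ===' in line:
            in_protected = False
        # Diff file headers (+++/---) are not content changes.
        if in_protected and line.startswith(('+', '-')):
            if not line.startswith(('+++', '---')):
                hits.append(line)
    return hits

diff = """\
 # === BEGIN USER CODE ===
-    rate = 0.07
+    rate = 0.05
 # === END USER CODE ==="""
print(protected_changes(diff))  # → ['-    rate = 0.07', '+    rate = 0.05']
```

Context lines (prefixed with a space) pass through untouched; only additions and removals inside the marked region trip the check.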
Step 4: Enforce on Every Merge (branch protection)
GitHub branch protection settings for 'main':
✓ Require status checks to pass before merging
✓ Require branches to be up to date before merging
✓ Status checks required: CI / enforce
✓ Do not allow bypassing the above settings
This configuration makes it structurally impossible to merge a PR that fails any CI/CD step — regardless of who authored the change, whether it was generated by AI, or whether the reviewer approved it.
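The same settings can be applied through the GitHub REST API rather than the web UI. A hedged sketch — OWNER/REPO are placeholders, and the payload follows the update-branch-protection endpoint, which expects all four top-level keys (null disables a category):

```json
{
  "required_status_checks": { "strict": true, "contexts": ["CI / enforce"] },
  "enforce_admins": true,
  "required_pull_request_reviews": null,
  "restrictions": null
}
```

Saved as protection.json, this can be applied with gh api -X PUT repos/OWNER/REPO/branches/main/protection --input protection.json (requires admin rights on the repository).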
How FP017 Interacts with Other Failure Patterns
Missing CI/CD is the failure pattern that makes all other failure patterns permanent:
- FP001 (Oversized Files) — without a file size check in CI/CD, files grow without bound; no automated signal prevents accumulation
- FP002 (Business Logic in Wrong Layer) — without a boundary linter in CI/CD, layer violations merge silently; every prompt session can introduce new violations
- FP006 (Circular Dependencies) — without a circular dependency check in CI/CD, new cycles form with every session; the graph corruption accelerates
- FP014 (Low Test Coverage) — without a coverage threshold in CI/CD, coverage degrades silently; the feedback loop disappears without any signal
- FP018 (Missing Preservation Markers) — without a preservation check in CI/CD, regeneration losses reach production undetected
Establishing CI/CD enforcement is the structural intervention that makes every other remediation durable. Without it, structural improvements made in a stabilization sprint are eroded by subsequent prompt sessions — because there is no automated mechanism to prevent new violations from merging.
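As a concrete example of closing one of these loops, a file-size guard for FP001 can be bolted onto the same pipeline. A sketch — the 500-line threshold is an assumed value, and the demo generates its own fixture rather than scanning a real src/ tree:

```shell
# Demo fixture: one file over the threshold, one under.
TMP=$(mktemp -d)
seq 1 600 | sed 's/^/x = /' > "$TMP/big.py"
seq 1 50  | sed 's/^/y = /' > "$TMP/small.py"

# The check: list any file whose line count exceeds the threshold.
THRESHOLD=500
OVERSIZED=$(find "$TMP" -name '*.py' | xargs wc -l \
  | awk -v t="$THRESHOLD" '$1 > t && $NF != "total" {print $NF, $1}')

if [ -n "$OVERSIZED" ]; then
  echo "oversized: $OVERSIZED"
  # In CI this branch would exit 1 to fail the build.
fi
rm -rf "$TMP"
```

Dropped into the workflow as one more step, this turns FP001's slow accumulation into an immediate red build the first time a file crosses the line.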