AI-powered SEO audits
for developers

Most SEO tools audit your live site. serpIQ reads your codebase first, then pulls your real Google Search Console data. No third-party keyword estimates. No subscriptions.

$ npx serpiq audit

Install

One command.

Install once, run anywhere on your machine.

$ npm install -g serpiq

Build your command

Answer 5 questions, copy the command.

Tell us about your setup and we'll generate the exact command to run.

1 How do you want to install it?
2 Which LLM provider?
3 Which model? (optional)
3a Base URL
Endpoint of your OpenAI-compatible provider.
4 Google Search Console property
This is your verified GSC property identifier (not a URL to crawl). Use sc-domain:yoursite.com for Domain properties or the full URL for URL-prefix properties. Leave blank to skip GSC.
5 GSC lookback period
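The two property formats in question 4 can be told apart mechanically. A minimal illustrative check, not serpIQ's actual validation code:

```python
def classify_gsc_property(value: str) -> str:
    """Classify a Search Console property identifier.

    Illustrative only -- mirrors the sc-domain vs URL-prefix
    distinction described above, not serpIQ's real validator.
    """
    value = value.strip()
    if not value:
        return "skip"            # blank: skip GSC entirely
    if value.startswith("sc-domain:"):
        return "domain"          # e.g. sc-domain:yoursite.com
    if value.startswith(("http://", "https://")):
        return "url-prefix"      # e.g. https://yoursite.com/
    raise ValueError("Expected sc-domain:<host> or a full URL-prefix property")
```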
Your command


How it works

Five steps. One command.

serpIQ runs the whole pipeline locally. No dashboards, no SaaS. Outputs land in .serpiq/ in your project.

01
Analyse codebase
The LLM reads your README, package.json, landing page HTML, sitemap, robots.txt, and directory tree. Returns a product summary plus initial keyword seeds and content gaps the codebase reveals.
02
Fetch GSC + diagnose stage
Pulls 90 days of real Search Console data, then computes deterministic signals: site stage (no_data → scaling), striking-distance keywords, low-CTR pages, URL pattern clusters, declining pages.
03
Keyword research
Scrapes Google Autocomplete (real Google data, not LLM hallucinations) for every seed keyword. Combined with GSC striking-distance keywords, the LLM expands into quick wins, blog opportunities, pSEO templates, and competitor gaps.
04
AI audit (multi-call)
One strategic call produces the executive summary, score, quick fixes, and seeds. Then per-brief and per-pSEO expansion calls run in parallel, each with a focused prompt that demands production-ready specs.
05
Output files
Renders the main audit, individual blog briefs (each with meta tags, JSON-LD, FAQ, internal links, image suggestions), and the pSEO plan with launch checklists and universal SEO best-practice appendices.
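Step 03's autocomplete lookup uses Google's public suggest endpoint. A minimal sketch of building the request URL and parsing the JSON response shape, illustrative rather than serpIQ's actual code:

```python
import json
from urllib.parse import urlencode

def autocomplete_url(seed: str) -> str:
    # The free suggest endpoint; client=firefox returns JSON.
    return ("https://suggestqueries.google.com/complete/search?"
            + urlencode({"client": "firefox", "q": seed}))

def parse_suggestions(payload: str) -> list[str]:
    # Response shape: ["<query>", ["suggestion 1", "suggestion 2", ...]]
    _query, suggestions = json.loads(payload)[:2]
    return suggestions

# Sample payload in the endpoint's shape (not a live response):
sample = '["dead subreddits", ["dead subreddits list", "dead subreddits finder"]]'
```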

How keyword + gap detection works

Six layers of signal. Mostly deterministic.

The LLM only synthesises on top of real data. Every keyword and gap traces back to a specific source you can verify yourself.

Source
What it surfaces
How
codebase
What your product should talk about; missing topics and pages
LLM-inferred from README, code, HTML
GSC
Striking-distance keywords (you're almost ranking, pos 5-30)
Threshold-based, scales with site size
GSC
High-impression-low-CTR (title/meta rewrite candidates)
Threshold-based, scales with site size
GSC
Title-vs-actual-query mismatches per page
Joins page data with query data, top 5 per page
GSC
Working pSEO clusters to expand (not start new)
Groups top pages by parent URL path
Google
Real-world long-tail variants for each seed
Scrapes suggestqueries.google.com
GSC
Content rotting (declining pages, last 30 vs prior 30 days)
Deterministic, requires --days >= 60
LLM
Competitor gaps, keyword clusters, internal-link gaps
Synthesises on top of all the layers above
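The deterministic layers above boil down to simple filters and groupings over GSC rows. A sketch of three of them with fixed thresholds for clarity; serpIQ scales its actual cutoffs with site size:

```python
from collections import defaultdict
from urllib.parse import urlparse

def striking_distance(query_rows, lo=5.0, hi=30.0):
    # Keywords you're almost ranking for: average position in [lo, hi].
    return [r for r in query_rows if lo <= r["position"] <= hi]

def low_ctr_pages(page_rows, min_impressions=1000, max_ctr=0.01):
    # High-impression pages whose CTR flags a title/meta rewrite.
    return [r for r in page_rows
            if r["impressions"] >= min_impressions
            and r["clicks"] / r["impressions"] <= max_ctr]

def cluster_by_parent_path(urls):
    # Group top pages by parent URL path to find working pSEO clusters.
    clusters = defaultdict(list)
    for url in urls:
        parent = urlparse(url).path.rsplit("/", 1)[0] or "/"
        clusters[parent].append(url)
    return dict(clusters)
```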

Two things serpIQ deliberately does not do: no third-party keyword volume APIs (Ahrefs / SEMrush) and no live SERP scraping of competitors. The philosophy is "use your real GSC data plus free Google signals."
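The content-rotting layer in the table above is likewise fully deterministic: compare each page's clicks in the last 30 days against the prior 30. A minimal sketch; the 25% drop threshold is an assumption, not serpIQ's documented cutoff:

```python
def declining_pages(window_clicks, min_drop=0.25):
    # window_clicks: {url: (clicks_last_30_days, clicks_prior_30_days)}
    # Requires >= 60 days of GSC data (hence --days >= 60).
    out = []
    for url, (recent, prior) in window_clicks.items():
        if prior > 0 and (prior - recent) / prior >= min_drop:
            out.append(url)
    return out
```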

Why serpIQ

Other tools audit your website. serpIQ audits your codebase.

Two things make serpIQ different from every other SEO tool.

serpIQ vs everything else
  Other SEO tools             serpIQ
  ──────────────────────────────────────────────────
  Audit your live HTML        Read your source code
  Estimated keyword data      Your real GSC data
  Generic recommendations     Product-aware strategy
  Monthly SaaS fee            Free. Your own API key.

What you get

Files you can read, diff, and ship.

Everything is markdown plus JSON. Commit it, share it, paste it into Cursor, or pipe it into your own scripts.
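Because every brief is plain markdown with a Priority: field, piping the output into your own scripts is a few lines of code. A minimal sketch that assumes the field format shown in the sample briefs:

```python
import re
from pathlib import Path

PRIORITY = re.compile(r"^Priority:\s*(\w+)", re.MULTILINE)

def brief_priorities(root: str) -> dict[str, str]:
    # Map each generated brief to its Priority field (high/medium/low).
    out = {}
    for path in sorted(Path(root).glob("*.md")):
        match = PRIORITY.search(path.read_text())
        if match:
            out[path.name] = match.group(1)
    return out
```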

.serpiq/audit-{date}.md

Audit Report

The full strategy doc. Health score, prioritised fixes, and CTR opportunities.

  • Health score 0 to 100
  • Quick fixes table
  • Striking-distance keywords
  • Low-CTR queries
  • Declining pages
.serpiq/blog-briefs/

Blog Briefs

One markdown brief per recommended post. Hand it to a writer or to an AI.

  • Target keyword + intent
  • Suggested title
  • Section-by-section outline
  • Word count target
  • Priority
.serpiq/pseo/pseo-plan.md

pSEO Plan

Programmatic page templates with URL patterns and where the data comes from.

  • URL pattern
  • Estimated page count
  • Data source
  • Example pages
  • Implementation notes

LLM providers

Bring your own API key.

serpIQ is a thin wrapper around an LLM. Pick the provider you trust, or run a model locally with Ollama. Nothing is stored on our servers because there are no servers.

Anthropic OpenAI OpenRouter Groq Together Mistral Ollama (local)

Default model is Claude Sonnet 4.5. Switch with --provider and --model.

Live demo

Watch it run. No install needed.

Click play. This is exactly what you'll see when you run the real command on your own site. Total time on a real site: about 60 seconds.

~10 seconds

Outputs

The files it writes.

Click between the tabs below to see real samples of what serpIQ generates. Every file is plain markdown or JSON. Commit them, share them, or paste them into Cursor.

.serpiq/audit-2026-04-28.md markdown

SEO Audit: deadsubs

Generated: 2026-04-28 by serpIQ

Health Score

62 / 100

Product: A free tool for finding inactive subreddits that haven't posted in 6+ months.

GSC Property: sc-domain:deadsubs.com (2026-01-28 to 2026-04-28)

Performance: 1,247 clicks · 84,302 impressions · CTR 1.48% · avg pos 18.4

Executive Summary

deadsubs has solid topical authority around niche Reddit discovery queries but is losing easy wins on title tag optimisation. 12 striking-distance keywords sit in positions 8-12 with high intent. Three pSEO templates could 10x organic surface area within a week of implementation.

Quick Fixes

Priority  Page              Issue                     Fix
high      /                 Generic title "Find Dead Subreddits | deadsubs"
high      /about            Missing meta description  Add 155-char description with primary keyword
medium    /category/gaming  Thin content (180 words)  Expand to 600+ words with examples
low       /                 No structured data        Add WebApplication schema

Striking-Distance Keywords (positions 8-20)

Keyword                    Position  Impressions
dead subreddits            8.2       4,210
find inactive subreddit    11.4      1,890
abandoned subreddits list  14.7      1,103
request subreddit reddit   9.8       867

Blog Content Plan

8 blog posts identified. See ./blog-briefs/ for full briefs.

.serpiq/blog-briefs/brief-best-reddit-finder.md markdown

How to Find Dead Subreddits in 2026: A Complete Guide

Target Keyword: how to find dead subreddits
Search Intent: Informational, with transactional tail
Estimated Length: 1,800 words
Priority: high

Why this brief

Currently ranks position 11.4 for "find inactive subreddit" with 1,890 impressions. A focused long-form post will move this into the top 5 and pull in 20+ related long-tails.

Outline

  1. What counts as a "dead" subreddit (definition + criteria)
  2. Why people look for them (4 use cases: requesting, archiving, reviving, research)
  3. Method 1: Manual checking with Reddit's UI (with screenshots)
  4. Method 2: Using deadsubs.com (60 seconds, link to tool)
  5. Method 3: Reddit API + Python script (for the technical reader)
  6. What to do once you find one (request flow walkthrough)
  7. Common pitfalls (auto-banned subs, NSFW handling)
  8. FAQ (5 questions targeting People Also Ask)

Internal links

  • /category/gaming as an example category
  • /about for tool credibility
.serpiq/pseo/pseo-plan.md markdown

pSEO Implementation Plan: deadsubs

Template 1: Dead subreddits by category

URL Pattern: /dead-subreddits-in-{category}
Target Keyword Template: "dead subreddits in {category}"
Estimated Pages: 120
Data Source: existing categories table joined with subreddits filtered by inactivity

Example pages:

  • /dead-subreddits-in-gaming
  • /dead-subreddits-in-finance
  • /dead-subreddits-in-cooking

Implementation notes:

Each page needs: an H1 with the category name, a counter ("248 dead subreddits in gaming"), a sortable table, and 200-300 words of unique intro text auto-generated from the category metadata. Unique intro text is critical to avoid thin-content filtering.

Template 2: Subreddit alternatives

URL Pattern: /{subreddit}-alternatives
Target Keyword Template: "r/{subreddit} alternatives"
Estimated Pages: 280
Data Source: top 300 dead subreddits, with similarity matched against active subs by topic embedding

Template 3: Inactivity vs activity comparisons

URL Pattern: /{subreddit-a}-vs-{subreddit-b}
Target Keyword Template: "r/{a} vs r/{b}"
Estimated Pages: 50
Data Source: hand-curated list of high-impression "vs" queries from GSC