AI-powered SEO audits
for developers
Most SEO tools audit your live site. serpIQ reads your codebase first, then pulls your real Google Search Console data. No third-party keyword estimates. No subscriptions.
$ npx serpiq audit
Install
One command.
Install once, run anywhere on your machine.
Build your command
Answer 5 questions, copy the command.
Tell us about your setup and we'll generate the exact command to run.
Use sc-domain:yoursite.com for Domain properties, or
the full URL for URL-prefix properties. Leave blank to skip GSC.
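The two property formats can be told apart mechanically. This is an illustrative sketch, not serpIQ's actual validation code; the function name and return values are made up here.

```python
# Hypothetical helper showing the two GSC property formats the prompt
# accepts. Not part of serpIQ's CLI; shown only to clarify the formats.
def classify_gsc_property(value: str) -> str:
    """Return the GSC property type for a user-supplied value."""
    value = value.strip()
    if not value:
        return "skip"                      # blank input skips GSC entirely
    if value.startswith("sc-domain:"):
        return "domain"                    # e.g. sc-domain:yoursite.com
    if value.startswith(("http://", "https://")):
        return "url-prefix"                # e.g. https://yoursite.com/
    raise ValueError(f"unrecognised GSC property: {value!r}")
```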
How it works
Five steps. One command.
serpiq runs the whole pipeline locally. No dashboards, no SaaS. Outputs land in
.serpiq/ in your project.
How keyword + gap detection works
Six layers of signal. Mostly deterministic.
The LLM only synthesises on top of real data. Every keyword and gap traces back to a specific source you can verify yourself.
suggestqueries.google.com · --days >= 60
Two things serpIQ deliberately does not do: no third-party keyword volume APIs (Ahrefs / SEMrush) and no live SERP scraping of competitors. The philosophy is "use your real GSC data plus free Google signals."
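One of those free Google signals is the public autocomplete endpoint. The endpoint and its `["query", [suggestions]]` JSON shape are real; everything else in this sketch is illustrative and not serpIQ's code.

```python
import json
from urllib.parse import urlencode

def suggest_url(query: str) -> str:
    """Build a request URL for Google's public autocomplete endpoint."""
    params = urlencode({"client": "firefox", "q": query})
    return f"https://suggestqueries.google.com/complete/search?{params}"

def parse_suggestions(raw: str) -> list[str]:
    """The response is a JSON array: the echoed query, then suggestions."""
    _query, suggestions = json.loads(raw)[:2]
    return suggestions

# A response for "dead subreddits" would look roughly like this:
sample = '["dead subreddits", ["dead subreddits list", "dead subreddits 2026"]]'
```

Suggestions like these give keyword candidates grounded in what Google actually autocompletes, with no paid API involved.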
Why serpIQ
Other tools audit your website. serpIQ audits your codebase.
Two things make serpIQ different from every other SEO tool.
| Other SEO tools | serpIQ |
|---|---|
| Audit your live HTML | Read your source code |
| Estimated keyword data | Your real GSC data |
| Generic recommendations | Product-aware strategy |
| Monthly SaaS fee | Free. Your own API key. |
What you get
Files you can read, diff, and ship.
Everything is markdown plus JSON. Commit it, share it, paste it into Cursor, or pipe it into your own scripts.
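Because the outputs are plain JSON, piping them into your own scripts is a few lines of stdlib. The exact schema isn't documented here, so this sketch assumes a minimal shape with a `quick_fixes` list; adapt the keys to whatever your `.serpiq/` files actually contain.

```python
import json

# Assumed (not documented) shape of an audit JSON file for illustration.
sample_audit = """
{
  "health_score": 72,
  "quick_fixes": [
    {"priority": "high", "page": "/", "issue": "Generic title"},
    {"priority": "low", "page": "/", "issue": "No structured data"}
  ]
}
"""

def high_priority_fixes(raw: str) -> list[dict]:
    """Filter the audit down to the fixes worth doing first."""
    audit = json.loads(raw)
    return [f for f in audit["quick_fixes"] if f["priority"] == "high"]
```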
Audit Report
The full strategy doc. Health score, prioritised fixes, and CTR opportunities.
- Health score 0 to 100
- Quick fixes table
- Striking-distance keywords
- Low-CTR queries
- Declining pages
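One way to flag low-CTR queries like those in the report is to compare each query's actual CTR with a rough expected CTR for its average position. The baseline numbers below are illustrative placeholders, not serpIQ's actual curve.

```python
# Illustrative expected-CTR-by-position baseline (placeholder values).
EXPECTED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 5: 0.06, 10: 0.025}

def expected_ctr(position: float) -> float:
    """Use the nearest baseline bucket at or below the rounded position."""
    best = min(EXPECTED_CTR)
    for bucket in sorted(EXPECTED_CTR):
        if bucket <= round(position):
            best = bucket
    return EXPECTED_CTR[best]

def is_low_ctr(clicks: int, impressions: int, position: float) -> bool:
    """Flag queries whose CTR is well below what their position suggests."""
    actual = clicks / impressions if impressions else 0.0
    return actual < 0.5 * expected_ctr(position)
```

A query at position 2 with a 1% CTR would be flagged, since the baseline suggests something closer to 15% is achievable there.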
Blog Briefs
One markdown brief per recommended post. Hand it to a writer or to an AI.
- Target keyword + intent
- Suggested title
- Section-by-section outline
- Word count target
- Priority
pSEO Plan
Programmatic page templates with URL patterns and where the data comes from.
- URL pattern
- Estimated page count
- Data source
- Example pages
- Implementation notes
LLM providers
Bring your own API key.
serpIQ is a thin wrapper around an LLM. Pick the provider you trust, or run a model locally with Ollama. Nothing is stored on our servers because there are no servers.
The default model is Claude Sonnet 4.5. Switch with --provider and --model.
Live demo
Watch it run. No install needed.
Click play. This is exactly what you'll see when you run the command on your own site. A full run takes about 60 seconds.
Outputs
The files it writes.
Click between the tabs below to see real samples of what serpIQ generates. Every file is plain markdown or JSON. Commit them, share them, or paste them into Cursor.
SEO Audit: deadsubs
Health Score
Product: A free tool for finding inactive subreddits that haven't posted in 6+ months.
GSC Property: sc-domain:deadsubs.com (2026-01-28 to 2026-04-28)
Performance: 1,247 clicks · 84,302 impressions · CTR 1.48% · avg pos 18.4
Executive Summary
deadsubs has solid topical authority around niche Reddit discovery queries but is losing easy wins on title tag optimisation. 12 striking-distance keywords sit in positions 8-12 with high intent. Three pSEO templates could 10x organic surface area within a week of implementation.
Quick Fixes
| Priority | Page | Issue | Fix |
|---|---|---|---|
| high | / | Generic title | "Find Dead Subreddits | deadsubs" |
| high | /about | Missing meta description | Add 155-char description with primary keyword |
| medium | /category/gaming | Thin content (180 words) | Expand to 600+ words with examples |
| low | / | No structured data | Add WebApplication schema |
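The "No structured data" fix above refers to schema.org's WebApplication type. A minimal JSON-LD payload for a tool like deadsubs might look like this; the specific fields are chosen for illustration, not prescribed by serpIQ.

```python
import json

# Minimal schema.org WebApplication markup (illustrative field choices).
schema = {
    "@context": "https://schema.org",
    "@type": "WebApplication",
    "name": "deadsubs",
    "url": "https://deadsubs.com",
    "applicationCategory": "UtilityApplication",
    "offers": {"@type": "Offer", "price": "0"},
}

# Embed it in the page head as a JSON-LD script tag.
json_ld = f'<script type="application/ld+json">{json.dumps(schema)}</script>'
```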
Striking-Distance Keywords (positions 8-20)
| Keyword | Position | Impressions |
|---|---|---|
| dead subreddits | 8.2 | 4,210 |
| find inactive subreddit | 11.4 | 1,890 |
| abandoned subreddits list | 14.7 | 1,103 |
| request subreddit reddit | 9.8 | 867 |
Blog Content Plan
8 blog posts identified. See ./blog-briefs/ for full briefs.
How to Find Dead Subreddits in 2026: A Complete Guide
Target Keyword: how to find dead subreddits
Search Intent: Informational, with transactional tail
Estimated Length: 1,800 words
Priority: high
Why this brief
Currently ranks position 11.4 for "find inactive subreddit" with 1,890 impressions. A focused long-form post will move this into the top 5 and pull in 20+ related long-tails.
Outline
- What counts as a "dead" subreddit (definition + criteria)
- Why people look for them (4 use cases: requesting, archiving, reviving, research)
- Method 1: Manual checking with Reddit's UI (with screenshots)
- Method 2: Using deadsubs.com (60 seconds, link to tool)
- Method 3: Reddit API + Python script (for the technical reader)
- What to do once you find one (request flow walkthrough)
- Common pitfalls (auto-banned subs, NSFW handling)
- FAQ (5 questions targeting People Also Ask)
Internal links
- /category/gaming as an example category
- /about for tool credibility
pSEO Implementation Plan: deadsubs
Template 1: Dead subreddits by category
URL Pattern: /dead-subreddits-in-{category}
Target Keyword Template: "dead subreddits in {category}"
Estimated Pages: 120
Data Source: existing categories table joined with
subreddits filtered by inactivity
Example pages:
- /dead-subreddits-in-gaming
- /dead-subreddits-in-finance
- /dead-subreddits-in-cooking
Implementation notes:
Each page needs: an H1 with the category name, a counter ("248 dead subreddits in gaming"), a sortable table, and 200-300 words of unique intro text auto-generated from the category metadata. Unique intro text is critical to avoid thin-content filtering.
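The auto-generated intro could be as simple as a template filled from category metadata. This is only a sketch of the shape; a real implementation would vary phrasing much more aggressively to keep each page genuinely unique.

```python
# Hypothetical intro generator from category metadata (illustrative only).
def category_intro(category: str, count: int, top_sub: str) -> str:
    return (
        f"There are currently {count} dead subreddits in {category}. "
        f"The largest, r/{top_sub}, has had no posts in over six months. "
        f"The table below lists every inactive {category} community we "
        f"track, sorted by subscriber count."
    )
```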
Template 2: Subreddit alternatives
URL Pattern: /{subreddit}-alternatives
Target Keyword Template: "r/{subreddit} alternatives"
Estimated Pages: 280
Data Source: top 300 dead subreddits, with similarity matched
against active subs by topic embedding
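"Similarity matched by topic embedding" can be as simple as cosine similarity over embedding vectors. How the embeddings are produced is not specified here; the functions below are a generic sketch with toy vectors.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_alternative(dead_vec: list[float],
                     active: dict[str, list[float]]) -> str:
    """Pick the active subreddit whose topic embedding is closest."""
    return max(active, key=lambda name: cosine(dead_vec, active[name]))
```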
Template 3: Inactivity vs activity comparisons
URL Pattern: /{subreddit-a}-vs-{subreddit-b}
Target Keyword Template: "r/{a} vs r/{b}"
Estimated Pages: 50
Data Source: hand-curated list of high-impression "vs" queries
from GSC