
GEO Case Study: Zero to 40+ AI Citations

A detailed case study of implementing GEO optimization over 30 days. Covers the strategy, execution, results, and lessons learned from going from zero deliberate GEO effort to 40+ monthly AI citations.

GEOClarity · Updated February 25, 2026 · 9 min read

This case study documents a 30-day GEO optimization sprint for a B2B SaaS company’s blog. The site had strong traditional SEO (85+ pages ranking in Google’s top 10) but zero deliberate GEO optimization. In 30 days, we went from sporadic, accidental AI citations to 40+ tracked citations per month across ChatGPT, Perplexity, and Google AI Overviews.

Key takeaway: The fastest path to AI citations is optimizing existing high-ranking content, not creating new content from scratch. Content that Google already trusts is content AI engines are predisposed to cite — it just needs to be structured for AI extraction. For more on this, see our guide to Content Formats That Get AI Citations.

What Was the Starting Position?

The site:

A B2B SaaS blog in the project management space. 120 published articles, most targeting informational and commercial keywords. Domain Rating (Ahrefs) of 52. Monthly organic traffic of approximately 85,000 sessions.

Pre-optimization baseline:

Before any GEO work, we ran a citation audit — manually checking 100 relevant queries across ChatGPT, Perplexity, and Google AI Overviews.

Metric                          Baseline
Queries tested                  100
Queries with any citation       8
Total citations                 12
Citation rate                   8%
Perplexity citations            7
ChatGPT citations               3
Google AIO citations            2
Competitor avg. citation rate   22%
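The audit tally itself is simple to reproduce in code. A minimal sketch in Python, assuming one record per tested query mapping engine name to observed citation count (the records below are illustrative, not the actual audit data):

```python
def citation_metrics(audit):
    """Summarize a manual citation audit.

    audit: one dict per tested query, mapping an AI engine name
    to the number of citations observed for that query.
    """
    queries_with_citation = sum(1 for q in audit if sum(q.values()) > 0)
    total_citations = sum(sum(q.values()) for q in audit)
    citation_rate = queries_with_citation / len(audit)
    return queries_with_citation, total_citations, citation_rate

# Illustrative records, not the real 100-query audit
audit = [
    {"perplexity": 1, "chatgpt": 0, "google_aio": 0},
    {"perplexity": 0, "chatgpt": 0, "google_aio": 0},
    {"perplexity": 1, "chatgpt": 1, "google_aio": 0},
    {"perplexity": 0, "chatgpt": 0, "google_aio": 1},
]
print(citation_metrics(audit))  # (3, 4, 0.75)
```

Running the same tally over the real 100-query audit produced the baseline numbers in the table above.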

The site was significantly underperforming compared to competitors with similar SEO authority. The gap wasn’t content quality — it was content structure and AI readiness.

Key issues identified:

  1. No FAQ schema on any pages
  2. Minimal structured data (basic Article schema without author details)
  3. Content used minimal heading structure (3-4 H2s per 2,000+ word article)
  4. No tables or comparison charts
  5. Long, rambling paragraphs instead of atomic, citable statements
  6. robots.txt didn’t explicitly allow AI crawlers (they were allowed by default, but no explicit signals)
  7. No “last updated” dates visible or in metadata

What Was the 30-Day Strategy?

We divided the sprint into three phases:

Phase 1 (Days 1-5): Technical foundation

Fix the infrastructure that enables AI crawling and citation. Our 10M AI Search Results: What Gets Cited & Why guide covers this in detail.

Phase 2 (Days 6-20): Content optimization

Update existing high-ranking content with GEO-optimized structures.

Phase 3 (Days 21-30): New content + monitoring

Publish targeted new content for high-opportunity queries and measure results.

Resource allocation:

Activity                        Hours   Cost
Technical audit and fixes       8       Staff time
Content updates (20 articles)   30      $800 freelance + staff
New content (3 articles)        10      Staff time
Monitoring and analysis         5       $400 tools
Total                           53      ~$3,200

What Did Phase 1 (Technical Foundation) Involve?

Day 1: Robots.txt and crawl access.

Added explicit allow rules for AI crawlers (as we discuss in GEO Case Study: From Zero to AI-Cited in 10 Days, explicit access signals are a critical factor):

User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: PerplexityBot
Allow: /

Also added a clear sitemap reference and removed unnecessary Disallow rules that blocked non-critical paths AI crawlers might follow.

Day 2: Schema markup template.

Created a reusable schema template for all blog posts:

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "{{title}}",
  "author": {
    "@type": "Person",
    "name": "{{author_name}}",
    "url": "{{author_url}}",
    "jobTitle": "{{author_title}}"
  },
  "datePublished": "{{publish_date}}",
  "dateModified": "{{update_date}}",
  "publisher": {
    "@type": "Organization",
    "name": "{{company_name}}"
  }
}

Applied this template to all 120 blog posts via a CMS template update.

Day 3: FAQ schema infrastructure.

Built a reusable FAQ component that automatically generates FAQPage schema from structured FAQ content, letting content editors add FAQs without touching code.
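A minimal sketch of such a generator in Python. The field names follow schema.org's FAQPage type; the CMS integration around it (how editors supply the question/answer pairs) is assumed:

```python
import json

def build_faq_schema(faqs):
    """Build FAQPage JSON-LD from a list of (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }

faqs = [("What is GEO?", "Generative Engine Optimization is the practice of "
         "structuring content so AI search engines can cite it.")]
print(json.dumps(build_faq_schema(faqs), indent=2))
```

The emitted JSON-LD goes into a `<script type="application/ld+json">` tag alongside the visible FAQ section, so the markup always stays in sync with the on-page content.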

Days 4-5: Content audit and prioritization.

Identified the top 20 articles for GEO optimization using this scoring:

GEO Priority = Google Position Score × Search Volume × Gap Score

Where:

  • Position Score: 10 for #1, 8 for #2-3, 5 for #4-5, 2 for #6-10
  • Gap Score: 3 if not cited by any AI engine, 2 if cited by one, 1 if cited by two+

The top 20 articles all ranked in Google’s top 5 for their target keywords but had zero or one AI citation.
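The scoring translates directly into a small ranking script. A sketch, using the score mappings defined above (the example article figures are illustrative):

```python
def position_score(pos):
    # Position Score: 10 for #1, 8 for #2-3, 5 for #4-5, 2 for #6-10
    if pos == 1:
        return 10
    if pos in (2, 3):
        return 8
    if pos in (4, 5):
        return 5
    if 6 <= pos <= 10:
        return 2
    return 0  # outside the top 10: excluded from prioritization

def gap_score(engines_citing):
    # Gap Score: 3 if not cited by any AI engine, 2 if one, 1 if two or more
    return {0: 3, 1: 2}.get(engines_citing, 1)

def geo_priority(pos, search_volume, engines_citing):
    return position_score(pos) * search_volume * gap_score(engines_citing)

# An article ranking #2 with 1,500 monthly searches and no AI citations:
print(geo_priority(2, 1500, 0))  # 8 * 1500 * 3 = 36000
```

Sorting all 120 articles by this score and taking the top 20 gave us the Phase 2 optimization queue.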

What Did Phase 2 (Content Optimization) Look Like?

Each of the 20 prioritized articles received these updates:

1. Heading structure expansion.

Before: 3-4 generic H2 headings
After: 8-12 question-format H2 headings

Example transformation:

  • Before: ## Features
  • After: ## What Features Should You Look For in Project Management Software?

Question-format headings match AI query patterns and provide clear section targets for AI extraction.

2. Atomic paragraph rewriting.

Before: 200-300 word blocks covering multiple ideas
After: 60-100 word paragraphs, each containing one citable idea

We specifically wrote “citation-ready” statements — clear, factual sentences that directly answer questions (we explore this further in On-Page SEO Checklist 2026: 25 Essential Optimizations):

  • Before: “There are a lot of things to consider when choosing project management software, including features, pricing, team size, integrations, and many other factors that we’ll discuss.”
  • After: “The five most important factors when choosing project management software are: task management flexibility, team collaboration features, integration ecosystem, pricing scalability, and reporting capabilities.”

3. Table addition.

Added comparison tables to every commercial-intent article. Tables comparing features, pricing, or specifications across products/approaches.

4. FAQ section addition.

Added 3-5 FAQs to each article with FAQPage schema. Questions were derived from Google’s “People Also Ask” for the article’s target keyword.

5. “Last updated” date addition.

Added visible “Last updated: [date]” to each article and updated the dateModified in schema markup. This signals freshness to AI crawlers. This relates closely to what we cover in Free GEO Audit Tools for AI Visibility.

6. Internal link enhancement.

Added 3-5 new internal links per article connecting to other relevant updated articles. This created a denser link network among optimized pages.

Timeline:

We updated 2 articles per day on Days 6-15, then 1 article per day on Days 16-20 (as we learned what worked and spent more time per article).

What Were the Results?

We measured citations at Day 15 (mid-sprint) and Day 30 (end of sprint), comparing to the Day 0 baseline.

Citation rate progression:

Metric                   Day 0   Day 15   Day 30
Queries tested           100     100      100
Queries with citations   8       19       37
Total citations          12      28       43
Citation rate            8%      19%      37%
Perplexity citations     7       16       24
ChatGPT citations        3       7        11
Google AIO citations     2       5        8

Citation rate by optimization type:

Content Type               Citation Rate Before   Citation Rate After   Lift
Updated articles (20)      6%                     42%                   +600%
New articles (3)           0%                     33%                   N/A
Unchanged articles (97)    9%                     11%                   +22%

The updated articles drove the vast majority of citation gains. Unchanged articles saw a small lift — likely from the improved internal linking and site-wide schema improvements.

Traffic impact:

AI-referred traffic increased from approximately 200 to 850 sessions per month. More significantly, organic search traffic to the updated articles increased by 12% on average — the GEO optimizations (better headings, clearer content, FAQ schema) also improved traditional SEO performance.

What Were the Key Lessons Learned?

Lesson 1: Update existing content before creating new content.

The 20 updated articles generated 38 citations; the 3 new articles generated 4. Per hour invested (30 hours on updates versus 10 on new content), updating existing ranked content was roughly 3x more efficient at generating citations.
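Taking the citation counts above together with the hour budget from the resource-allocation table, the efficiency comparison works out as follows (a worked check on the figures already given, not new data):

```python
updated = {"articles": 20, "hours": 30, "citations": 38}
new = {"articles": 3, "hours": 10, "citations": 4}

# Citations per article: 1.9 for updates vs ~1.33 for new content
per_article = (updated["citations"] / updated["articles"],
               new["citations"] / new["articles"])

# Citations per hour of effort: ~1.27 for updates vs 0.4 for new content
per_hour = (updated["citations"] / updated["hours"],
            new["citations"] / new["hours"])

print(round(per_hour[0] / per_hour[1], 1))  # 3.2
```

Measured per hour of effort, updates come out roughly 3x ahead, which is the efficiency figure this lesson refers to.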

Lesson 2: FAQ schema had the single biggest impact.

Articles with FAQ schema saw the largest citation gains. The FAQs provided direct, extractable answers that AI engines could cite verbatim. For more on this, see our guide to GEO for SaaS: How to Get Your Product Recommended by AI.

Lesson 3: Perplexity responds fastest to changes.

Perplexity reflected content changes within 3-7 days. ChatGPT took 7-14 days. Google AI Overviews took 14-21 days. If you’re measuring GEO impact, Perplexity is your fastest feedback loop.

Lesson 4: Tables are citation magnets.

Every article that gained citations had at least one comparison table. AI engines frequently extracted table data for comparison queries.

Lesson 5: Citation quality matters as much as quantity.

Some citations were direct links to our pages. Others were brand mentions without links. Direct-link citations drove traffic; brand mentions drove awareness. Both have value, but tracking them separately gives a more accurate picture of impact.

Lesson 6: The compound effect is real.

By Day 30, we noticed AI engines citing our articles in response to queries we weren’t specifically targeting. The improved content structure and topical authority made our content eligible for a broader range of queries than we’d anticipated.

What Would We Do Differently?

Start monitoring earlier. We should have established baseline citation tracking 2-4 weeks before starting optimization, not on Day 1. This would have given us cleaner before/after comparisons.

Prioritize FAQ schema even higher. Given the outsized impact, we would implement FAQ schema on all 120 articles in Phase 1, not just the 20 we updated.

Test different content structures. We applied the same optimization template to all 20 articles. A/B testing different structures (more tables vs. more lists, longer FAQs vs. shorter) would have identified the optimal format faster.

Invest more in internal linking. The cross-linking between updated articles could have been more systematic. Using BERT-based similarity scoring (as described in our internal linking guide) would have identified better linking opportunities.

This 30-day sprint demonstrated that GEO optimization produces measurable results quickly — but only when built on a foundation of existing SEO authority. The lesson isn’t “GEO is fast.” It’s “GEO leverages existing SEO investments to unlock a new channel.” Sites without existing rankings should focus on building traditional SEO first, then layer GEO optimization on top.


Frequently Asked Questions

How many AI citations can you get in 30 days?
Starting from a strong SEO foundation, 30-50 AI citations per month is achievable in 30 days through systematic GEO optimization. Starting from zero SEO presence, expect fewer — perhaps 5-15 citations as content gets indexed and ranked first.
What was the biggest factor in gaining AI citations quickly?
Updating existing high-ranking content with citation-optimized structures (FAQ schema, tables, clear headings, quotable statements) produced citations faster than publishing new content. The content already had Google authority — adding GEO optimization leveraged that authority for AI visibility.
How much did the 30-day GEO push cost?
The case study site spent approximately $3,200 total: $800 on content updates (freelance editor), $400 on GEO monitoring tools, and roughly $2,000 in staff time (53 hours). No tooling or technology investment beyond the monitoring subscription was needed.
Can these results be replicated?
The strategy is replicable, but results depend on starting conditions. Sites with existing Google rankings see faster GEO results because AI engines already know their content. Sites without SEO authority need to build rankings first, which extends the timeline to 3-6 months.

GEOClarity

Writing about Generative Engine Optimization, AI search, and the future of content visibility.
