Rendering & Indexation
How search engines process JavaScript, handle deferred content, and decide what reaches the index.
- Evergreen Googlebot / WRS
- SSR vs CSR indexation gaps
- DOM diffing and render parity (see the sketch below)
- URL Inspection API analysis
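To make the render-parity point concrete, here is a minimal sketch of the kind of check I mean, assuming Node 18+ (built-in fetch) and puppeteer; the URL and the character-count heuristic are illustrative, and a real audit also compares titles, canonicals, and structured data between the two versions.

```ts
// Minimal render-parity check: compare visible text in the raw server
// response against the DOM after JavaScript execution. Assumes Node 18+
// (built-in fetch) and puppeteer; the URL is a placeholder.
import puppeteer from "puppeteer";

async function renderParity(url: string): Promise<void> {
  // 1. Raw HTML, roughly what a non-rendering fetch of the page sees
  const rawHtml = await (await fetch(url)).text();
  const rawText = rawHtml
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();

  // 2. Rendered DOM via headless Chrome
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const renderedText = await page.evaluate(() => document.body.innerText);
  await browser.close();

  // 3. Crude parity signal: how much visible text only appears post-render
  console.log({
    url,
    rawChars: rawText.length,
    renderedChars: renderedText.length,
    renderOnlyChars: Math.max(0, renderedText.length - rawText.length),
  });
}

renderParity("https://example.com/some-product-page").catch(console.error);
```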
I help companies build web experiences that are genuinely discoverable through precise technical work at the intersection of search systems and modern web architecture.
Why I crossed from keyword research into rendering pipelines and crawl engineering, and what changed when I did.
I started where most SEOs do: keyword research, on-page fundamentals, link building. That foundation mattered. But I kept watching the real problems compound at a layer those tools couldn't reach.
JavaScript frameworks became the dominant way to build for the web. Googlebot's rendering pipeline became a genuine engineering concern. The gap between what a user sees and what a search engine indexes grew into a business-critical problem, and almost no one in SEO was equipped to diagnose it properly.
So I crossed that gap deliberately. I learned how the Web Rendering Service actually works, how crawl budgets are allocated, how Core Web Vitals are measured in field data versus lab conditions. Not to pass a certification, but because the problems required it.
Today I work at the intersection of search visibility and web engineering. I can diagnose why your Next.js app isn't being indexed, trace a CLS regression to a specific component, or redesign a URL taxonomy that's cannibalising itself, and explain it to both your CTO and your marketing director.
Each engagement starts with diagnosis, not templates. Tap a card to expand.
Not a list of certifications. The actual layers of search engineering I work at, mapped to the problems they solve.
How search engines process JavaScript, handle deferred content, and decide what reaches the index.
Crawl budget allocation, crawl path design, and ensuring the right pages are discovered at the right frequency.
JSON-LD implementation, rich result eligibility, and entity-based schema strategies for modern content architectures.
Field data versus lab data, render-blocking resources, loading strategies, and the performance-to-ranking relationship.
URL taxonomy design, internal linking graph engineering, and faceted navigation systems that crawlers can process efficiently.
Framework-specific SEO patterns, metadata management, hydration timing, and rendering strategy selection for modern stacks.
Combining crawler data, log analysis, Search Console signals, and rendering output into a single diagnostic picture (a minimal log-analysis sketch appears after these cards).
GBP optimisation, local schema, citation architecture, and service-area targeting for competitive local search results.
Matching page types to search intent at a structural level: not just keyword mapping, but information design informed by how queries behave.
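As an example of the technical auditing layer, here is a minimal log-analysis sketch, assuming an access log in combined format; the file path is a placeholder, and a real check verifies Googlebot by reverse DNS rather than trusting the user-agent string.

```ts
// Sketch: count Googlebot requests per URL path from a combined-format
// access log. The file path is a placeholder, and the user-agent match is
// deliberately crude; real verification also reverse-DNS-checks the IP.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

async function googlebotHitsPerPath(logFile: string) {
  const hits = new Map<string, number>();
  const lines = createInterface({ input: createReadStream(logFile) });

  for await (const line of lines) {
    if (!/Googlebot/i.test(line)) continue;               // crude UA filter
    const match = line.match(/"(?:GET|HEAD) (\S+) HTTP/);  // request path
    if (!match) continue;
    const path = match[1].split("?")[0];                   // ignore query strings
    hits.set(path, (hits.get(path) ?? 0) + 1);
  }

  // Most-crawled paths first: a quick read on where crawl budget is going
  return [...hits.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 20);
}

googlebotHitsPerPath("./access.log").then(console.table).catch(console.error);
```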
Search behaves very differently across compliance regimes, content models, and competitive intensity. Here's where I have direct, repeated experience.
iGaming platforms sit at the intersection of heavy JavaScript, strict regulatory environments, and some of the most competitive SERPs on the web. Player acquisition funnels are deeply search-dependent, but most operators run on platforms that render poorly, carry indexation debt, and have no coherent crawl strategy. I understand the compliance constraints, geo-targeting requirements, and responsible gambling content obligations that shape what can and cannot be done with SEO in this space.
Mental health is a YMYL category. Google holds content in this space to the highest standards of expertise, authoritativeness, and trustworthiness. Providers who treat SEO casually in this sector lose visibility to competitors who understand the content quality and structured data requirements in depth. Local search is also critical here: most people finding a therapist begin with a location-specific query, and the local pack is where the decision is often made.
SaaS companies almost universally build on modern JavaScript frameworks. That creates a specific class of technical SEO problems: metadata that depends on client-side execution, documentation that search engines cannot index cleanly, and product pages that render differently for bots than for authenticated users. On the content side, SaaS SEO requires precise intent mapping across the awareness-to-conversion funnel, from bottom-of-funnel comparison pages to top-of-funnel educational content that captures the right audience at the right stage.
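On the metadata point above: in a Next.js App Router stack the usual answer is to generate it on the server so it ships in the initial HTML. A minimal sketch, where getProduct, the route, and the domain are hypothetical placeholders:

```tsx
// Sketch: metadata generated on the server in a Next.js App Router route,
// so title, description, and canonical arrive in the initial HTML instead of
// depending on client-side execution. getProduct, the route, and the domain
// are hypothetical placeholders.
import type { Metadata } from "next";
import { getProduct } from "@/lib/products"; // hypothetical data helper

type Props = { params: { slug: string } };

export async function generateMetadata({ params }: Props): Promise<Metadata> {
  const product = await getProduct(params.slug);
  return {
    title: `${product.name} | Example SaaS`,
    description: product.summary,
    alternates: { canonical: `https://www.example.com/products/${params.slug}` },
  };
}

export default async function ProductPage({ params }: Props) {
  const product = await getProduct(params.slug);
  // The body is rendered on the server too, so crawlers and users see the same thing
  return <h1>{product.name}</h1>;
}
```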
Healthcare SEO operates under the same YMYL constraints as mental health, with the added complexity of multi-location operations, clinical service taxonomies, and the tension between patient-facing content and medically accurate language. Multi-location healthcare groups frequently have inconsistent GBP profiles, unvalidated structured data across service pages, and citation infrastructure that actively undermines their local visibility. I have direct experience turning that around at scale.
Real engagements. Real numbers from CrUX, GSC, and analytics, not model outputs.
A 4,000-page South African retail brand migrated to a React SPA and lost 74% of organic traffic in 8 weeks. Googlebot was receiving empty HTML shells: every product page rendered entirely client-side with no fallback in the server response. The site looked fine to users. To crawlers, it was blank.
Full rendering audit using mobile-first Googlebot simulation via the URL Inspection API and log file analysis. Confirmed 94% of product pages returned empty body content. Implemented a phased Next.js SSR migration with dynamic rendering as a transition layer for high-priority segments, and rebuilt the sitemap pipeline to drive re-indexation.
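The URL Inspection step is scriptable. A minimal sketch of a bulk check using the googleapis Node client, assuming a service account with access to the Search Console property; the siteUrl, credentials path, and URL list are placeholders, and the API's quota means large sites need batching and throttling:

```ts
// Sketch: bulk index-coverage checks via the Search Console URL Inspection
// API (googleapis Node client). siteUrl, the credentials path, and the URL
// list are placeholders; the API is quota-limited, so large sites need
// batching and throttling rather than this naive loop.
import { google } from "googleapis";

async function inspectUrls(siteUrl: string, urls: string[]) {
  const auth = new google.auth.GoogleAuth({
    keyFile: "./service-account.json", // placeholder credentials
    scopes: ["https://www.googleapis.com/auth/webmasters.readonly"],
  });
  const searchconsole = google.searchconsole({ version: "v1", auth });

  for (const inspectionUrl of urls) {
    const { data } = await searchconsole.urlInspection.index.inspect({
      requestBody: { inspectionUrl, siteUrl },
    });
    const status = data.inspectionResult?.indexStatusResult;
    console.log(inspectionUrl, status?.verdict, status?.coverageState);
  }
}

inspectUrls("https://www.example.com/", [
  "https://www.example.com/products/widget-a",
  "https://www.example.com/products/widget-b",
]).catch(console.error);
```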
A B2B SaaS was failing Core Web Vitals across 80% of URLs, with an average LCP of 6.8s in field data. The marketing team blamed hosting. The problem was code: an unoptimised hero image, render-blocking font loading, and a bundle sequence that deferred critical above-the-fold resources.
Traced the LCP regression to three causes: a 1.8MB hero image without priority hints, a Google Fonts stylesheet blocking render for 480ms, and a JS bundle deferring LCP paint by 2.1s. All three were remediated, and a 28% conversion-rate improvement was documented in the following quarter's A/B data.
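For reference, the shape of those fixes in a Next.js codebase, as a sketch; component and asset names are illustrative, and the same ideas (preload the LCP image, take the font stylesheet off the critical path) carry to any stack.

```tsx
// Sketch of the two front-end fixes in Next.js terms: a priority-hinted hero
// image and a self-hosted font instead of a render-blocking stylesheet.
// Component and asset names are illustrative.
import Image from "next/image";
import { Inter } from "next/font/google";
import heroImage from "@/assets/hero.jpg"; // hypothetical static import

// next/font inlines the @font-face rules and self-hosts the files, taking
// the external Google Fonts stylesheet off the critical rendering path.
const inter = Inter({ subsets: ["latin"], display: "swap" });

export function Hero() {
  return (
    <section className={inter.className}>
      {/* `priority` preloads the image and flags it as high fetch priority,
          so the LCP element isn't left waiting behind the JS bundle */}
      <Image src={heroImage} alt="Product dashboard" priority sizes="100vw" />
      <h1>Ship faster, measure everything</h1>
    </section>
  );
}
```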
A digital news publisher had 2,341 published articles drawing social traffic but zero organic indexation. The URL Inspection API and direct DOM comparison confirmed Googlebot was rendering an empty article body. The content existed in the DOM, but inside a Vue component gated by IntersectionObserver-triggered lazy loading, so it never entered Googlebot's render viewport.
Fixed by implementing server-side article body rendering for above-the-fold content and removing the IntersectionObserver gate from the primary container. Remaining lazy-load logic preserved for secondary components. Within 8 weeks all 2,341 articles were indexed and ranking.
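The underlying pattern, reduced to a sketch: a hypothetical Vue 3 composable (useDeferredVisibility and its eager option are names invented for illustration) in which primary content skips the IntersectionObserver gate entirely, because Googlebot doesn't scroll and the callback would never fire.

```ts
// Sketch of the gating logic, reduced to a Vue 3 composable. The name
// useDeferredVisibility and its `eager` option are invented for illustration;
// the point is that primary content skips the IntersectionObserver gate.
import { ref, onMounted, type Ref } from "vue";

export function useDeferredVisibility(
  target: Ref<HTMLElement | null>,
  options: { eager?: boolean } = {},
) {
  // Eager content renders immediately, matching the server-rendered HTML.
  const visible = ref(options.eager ?? false);

  onMounted(() => {
    if (visible.value) return;
    const el = target.value;
    if (!el || !("IntersectionObserver" in window)) {
      visible.value = true; // fail open rather than hiding content
      return;
    }
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) {
        visible.value = true;
        observer.disconnect();
      }
    });
    observer.observe(el);
  });

  return visible;
}

// Usage: the article body passes { eager: true }; secondary widgets keep the gate.
```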
A 14-location private healthcare group had inconsistent GBP profiles, no local schema on service pages, and NAP discrepancies across 80+ citation sources. None of the 14 locations appeared in the local 3-pack for primary service queries. A competitor with weaker clinical credentials outranked every location on proximity alone.
Full citation audit across 80+ directories, NAP normalisation via the GBP API, LocalBusiness and MedicalClinic schema across all service pages, and a GBP content/category strategy tailored per location. Within 4 months, 11 of 14 locations ranked in the local 3-pack for primary terms.
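The structured data side of that work, sketched with placeholder values only; each location gets its own node, and the name, address, and phone must match the GBP listing and citation sources exactly.

```ts
// Sketch of per-location structured data. All values are placeholders; in
// practice each location gets its own node, with NAP details matching the
// GBP listing and citations character for character.
const clinicSchema = {
  "@context": "https://schema.org",
  "@type": "MedicalClinic",
  name: "Example Health - Riverside Clinic",
  url: "https://www.example-health.com/locations/riverside",
  telephone: "+27 21 000 0000",
  address: {
    "@type": "PostalAddress",
    streetAddress: "12 Example Road",
    addressLocality: "Cape Town",
    addressCountry: "ZA",
  },
  geo: { "@type": "GeoCoordinates", latitude: -33.9249, longitude: 18.4241 },
  openingHoursSpecification: [
    {
      "@type": "OpeningHoursSpecification",
      dayOfWeek: ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
      opens: "08:00",
      closes: "17:00",
    },
  ],
};

// Emitted into each location page's head inside a
// <script type="application/ld+json"> tag.
console.log(JSON.stringify(clinicSchema, null, 2));
```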
Long-form notes I write when a topic deserves more than a tweet.
Work with someone who understands the problem.
If your site has a technical SEO problem (slow crawls, indexation gaps, CWV failures, JavaScript rendering issues), I can diagnose it precisely and fix it properly.