Key takeaways: Claude and Perplexity don’t render client-side JavaScript. If your site relies on client-side React, Vue, or a poorly configured Next.js, you’re invisible to 2 of the 4 major LLMs in 2026.
Why 2 major LLMs out of 4 don’t render your JavaScript in 2026
The official Anthropic documentation published in May 2026 is unambiguous: “The web fetch tool currently does not support websites dynamically rendered via JavaScript”. Claude only retrieves the initial HTML served by the server; any content loaded later by JavaScript remains invisible. Perplexity behaves the same way, according to Vercel tests published in December 2024 and confirmed by Glenn Gabe (GSQi) in August 2025.
The cause is technical. LLMs aren’t Google. Google has maintained JavaScript rendering infrastructure for over 10 years, at a cost of billions of dollars. LLMs arrived too quickly to replicate that infrastructure: they use simple HTTP calls that retrieve the raw HTML, parse the textual content, and move on. No JavaScript engine is involved at any point.
The Salt Agency study published in September 2025 on 2,138 sites confirms this behavior at scale. In the analyzed sample, sites with purely client-side rendering (Single Page Applications without SSR) achieve significantly lower inclusion rates in AI responses than server-rendered sites, regardless of editorial content quality.
The operational takeaway for 2026: a well-written, well-chunked site with strong SEO authority can remain totally invisible to Claude and Perplexity if its technical architecture relies on client-side JavaScript. Editorial GEO efforts are then nullified by an upstream technical blocker.
The visibility matrix per LLM in May 2026
The 4 major LLMs have distinct behaviors regarding JavaScript. This matrix synthesizes the state of the art in May 2026, based on official sources (Anthropic documentation, OpenAI communications) and independent tests by Vercel and Salt Agency.
| LLM | JavaScript rendering | Consequence for your site | Source |
|---|---|---|---|
| Claude (Anthropic) | No | Site invisible if client-rendered only | Official Anthropic docs, May 2026 |
| Perplexity | No | Site invisible if client-rendered only | Vercel tests 2024, GSQi 2025 |
| ChatGPT (OpenAI) | Uncertain by version | Random visibility, depends on mode (SearchGPT or conversational) | Salt Agency tests 2025 |
| Gemini (Google) | Yes | Site visible, benefits from Google’s infrastructure | Google docs + Salt Agency tests |
The result is asymmetric: only Gemini reads your JavaScript reliably. The other 3 LLMs, which collectively represent the majority of AI traffic in May 2026 (ChatGPT in volume, Claude in knowledge work, Perplexity in B2B), handle client-side rendering poorly or not at all.
This asymmetry has a strong strategic consequence: if your audience mainly uses ChatGPT, Claude or Perplexity, the technical investment to switch to server rendering pays off quickly. If your audience is Google-centric and uses Gemini, the urgency is lower but remains relevant medium-term.
How to test your site in 5 minutes
You can verify right now whether your site is correctly read by LLMs without JavaScript fetch capability. The method takes 5 minutes and requires no paid tool. The three tests below cross-check each other.
Test 1: Disable JavaScript in your browser
On Chrome, open DevTools (F12), go to Settings (gear icon at the top right of DevTools), then under “Debugger” check “Disable JavaScript”. Reload your page. If the main textual content appears correctly, your site passes the test. If the page is blank or only shows a loader, your content is invisible to LLMs that don’t render JavaScript.
This method reproduces exactly what Claude and Perplexity see when they fetch your URL; it is the test closest to real LLM extraction.
Test 2: Use curl on the command line
In a terminal, run `curl -L https://your-site.com/your-page`. The command returns the raw HTML served by your server, without any JavaScript execution. Search the output for your page’s main textual content. If it’s there, you’re using SSR (Server-Side Rendering) or static prerendering. If it’s not, your content is generated client-side and remains invisible to the affected LLMs.
This command literally reproduces the LLM fetch mechanism. If you see nearly empty HTML with only script tags and a `<div id="root">` or `<div id="app">` without content, you’re in pure client-side rendering.
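The same check can be scripted. The sketch below (plain TypeScript, Node 18+) fetches a page exactly as an LLM would — one HTTP call, no JavaScript engine — and applies a heuristic to flag empty SPA shells. The regex and the 200-character threshold are illustrative assumptions, not a standard; tune them to your own pages.

```typescript
// Heuristic: does this raw HTML look like an empty SPA shell?
// The patterns and the 200-char threshold are illustrative assumptions.
export function looksClientRendered(html: string): boolean {
  // Strip scripts, styles, and tags to estimate the visible text.
  const visibleText = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
  // An empty <div id="root"> or <div id="app"> is the classic CSR signature.
  const hasEmptyRoot = /<div[^>]*id=["'](root|app)["'][^>]*>\s*<\/div>/i.test(html);
  return hasEmptyRoot || visibleText.length < 200;
}

// Fetch the server-side HTML the way an LLM does: follow redirects, no JS.
export async function checkUrl(url: string): Promise<boolean> {
  const res = await fetch(url, { redirect: "follow" });
  return looksClientRendered(await res.text());
}
```

A page that passes this check is not guaranteed to be well chunked, but a page that fails it is almost certainly invisible to Claude and Perplexity.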
Test 3: Ask Claude and Perplexity directly
The ultimate test: open Claude and Perplexity, paste the URL of a key page from your site, and ask “Can you read the content of this page and summarize the first paragraph for me?”. If the AI responds with a relevant summary, your content is accessible. If it answers that it can’t access the content, or produces a generic summary, your page is invisible.
Glenn Gabe (GSQi) documented this methodology in 2025. It remains the most reliable way to validate accessibility under real conditions.
The 3 categories of sites most invisible to LLMs
Three categories concentrate most of the problematic sites in May 2026. If your site belongs to one of them, running the 5-minute test is a priority.
B2B SaaS in poorly configured React or Next.js
Most French B2B SaaS launched since 2022 use React, Next.js or Vue. The problem doesn’t come from the framework itself. Well-configured Next.js (App Router with Server Components, or Pages Router with getServerSideProps / getStaticProps) renders correctly server-side. The problem comes from default configurations or partial migrations where critical content remains client-rendered.
The most exposed pages: pricing pages with dynamically calculated prices, feature pages with content loaded via client API, blog pages with React components that load articles after mount. On these pages, the served HTML is often empty. It’s the most frequent situation I encounter in audits.
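As an illustration of the fix, here is a minimal Pages Router sketch, assuming a hypothetical `fetchPlans` helper in place of your real pricing API: the data is fetched on the server, so the HTML that Claude or Perplexity retrieves already contains the prices. The same data loaded in a client-side `useEffect` after mount would leave that HTML empty.

```typescript
// Hypothetical pricing data helper -- replace with your real API call.
type Plan = { name: string; pricePerMonth: number };

async function fetchPlans(): Promise<Plan[]> {
  return [
    { name: "Starter", pricePerMonth: 29 },
    { name: "Scale", pricePerMonth: 99 },
  ];
}

// Next.js Pages Router: this runs on the server for every request,
// so the served HTML already contains the plan data.
// (In a real app this lives in pages/pricing.tsx next to the component.)
export async function getServerSideProps() {
  const plans = await fetchPlans();
  return { props: { plans } };
}
```

With the App Router, the equivalent is simply awaiting `fetchPlans()` inside an async Server Component; the result is the same fully populated HTML.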
Headless e-commerce
Headless commerce architecture (frontend in Next.js, Nuxt or Gatsby consuming a Shopify, Commercetools, Magento API) has become standard for mid-market e-commerce. The GEO risk is massive if the configuration doesn’t include server rendering or static generation at build time.
Product pages on poorly configured headless setups often serve empty HTML to LLMs. Product sheets, comparisons, and buying guides become invisible. The commercial stakes are high because these are precisely the pages that could capture AI traffic on product recommendation queries.
Poorly configured Webflow or Framer sites
Webflow and Framer became popular for marketing sites in 2024-2025. These platforms can generate correctly rendered HTML, but some configurations (heavy interactions, complex animations, content loaded via dynamic CMS) shift to client rendering. The affected sites lose AI visibility without the marketing teams being aware.
The simple test: pages with many Lottie animations, scroll triggers, or CMS content loaded via API are most at risk. A simple Webflow homepage renders correctly, but a page with dynamic components may not render.
The solutions by complexity and cost
Several corrections are possible depending on your stack and budget. I rank by ascending complexity and cost.
- Enable SSR or static generation on critical pages (free to 5 days of dev): for Next.js, switch to the App Router with Server Components or use getServerSideProps / getStaticProps. For Nuxt, enable server-side rendering (ssr: true, the default in Nuxt 3). The cleanest and most durable solution.
- Use a prerender service (50-200 €/month): Prerender.io intercepts bot requests and serves them a pre-rendered version (Rendertron, a similar open-source project, is no longer maintained). Quick to deploy, but adds an external dependency and a recurring cost.
- Migrate to a hybrid architecture (10 to 30 days of dev): keep React/Vue front-end for interactivity, but add an SSR layer for critical content. Most flexible solution but requires partial technical refactoring.
- Serve an alternative text version for bots (1-2 days of dev): detect LLM User-Agents (GPTBot, ClaudeBot, PerplexityBot) and serve them a static content version. Quick solution but fragile over time and potentially perceived as cloaking if poorly executed.
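The last option in the list can be sketched in a few lines. The bot pattern and the `/prerendered/` snapshot path below are assumptions to adapt to your own setup; whatever you serve to bots must remain a faithful text version of the page, otherwise it risks being treated as cloaking.

```typescript
// Known LLM crawler User-Agent substrings (non-exhaustive; keep updated).
const LLM_BOT_PATTERN = /GPTBot|ClaudeBot|PerplexityBot/i;

export function isLLMBot(userAgent: string): boolean {
  return LLM_BOT_PATTERN.test(userAgent);
}

// Decide which document to serve: the SPA shell for humans, or a
// pre-rendered static snapshot for LLM bots. The /prerendered/ path
// is a hypothetical location for your static HTML snapshots.
export function resolveDocument(userAgent: string, pathname: string): string {
  return isLLMBot(userAgent) ? `/prerendered${pathname}.html` : "/index.html";
}
```

This routing function can sit in an edge middleware or a reverse proxy; the fragility noted above comes from the bot list, which changes as vendors rename their crawlers.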
The choice depends on your current stack, available dev resources, and strategic urgency. For a mature B2B SaaS in Next.js, migration to App Router with Server Components is the cleanest path. For a headless e-commerce, prerender via Prerender.io remains the fastest option to get started.
How to measure the real impact of a fix via Cockpyt AI
Without measurement, you can’t justify the technical investment in the correction. The baseline / post-correction method on a panel of strategic prompts produces quantified data usable internally.
The method I apply at Cockpyt AI:
- Pre-correction baseline: capture AI Share of Voice across 30 to 100 brand-related strategic prompts, isolating Claude and Perplexity citations (the 2 most affected LLMs)
- Implementing the technical fix: SSR, prerender or hybrid based on your choice
- Measurement at 30 and 60 days: rerun the panel to identify new citations, particularly on Claude and Perplexity
- Coverage Breadth delta: measure your cross-platform coverage progression. A successful fix moves your presence from 1-2 LLMs (Gemini + partial ChatGPT) to 3-4 LLMs
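For teams that want to script the delta, here is a minimal sketch of the two metrics. The data shapes and function names are my own illustrative assumptions, not a Cockpyt AI API: one run records, for each prompt, which LLMs cited the brand.

```typescript
// One measurement run: prompt -> list of LLMs that cited the brand.
type Run = Record<string, string[]>;

// AI Share of Voice: fraction of (prompt, LLM) slots where you are cited.
export function shareOfVoice(run: Run, llms: string[]): number {
  const prompts = Object.keys(run);
  const cited = prompts.reduce(
    (n, p) => n + run[p].filter((l) => llms.includes(l)).length,
    0
  );
  return cited / (prompts.length * llms.length);
}

// Coverage breadth: number of distinct LLMs citing you at least once.
export function coverageBreadth(run: Run): number {
  return new Set(Object.values(run).flat()).size;
}
```

Running `shareOfVoice` on the baseline and on the 30- and 60-day captures, restricted to Claude and Perplexity, isolates exactly the slice of visibility the rendering fix is supposed to unlock.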
First effects appear at 30 days for Perplexity (active web search) and between 30 and 60 days for Claude, depending on capture by web-connected retrievers. Across the 12 technical audits I conducted in 2025-2026 on client-rendered sites, the observed AI Share of Voice progression after correction ranges from +20% to +50% on prompts where Claude and Perplexity were initially absent.
FAQ on JavaScript rendering and LLMs in 2026
Why does Google render JavaScript and not Claude or Perplexity?
Google has maintained JavaScript rendering infrastructure for over 10 years, with billions of dollars of investment. LLMs arrived too quickly to replicate it: they use simple HTTP calls that retrieve raw HTML without executing JavaScript. It’s a choice of technical architecture and cost, not a deliberate bias against modern sites.
My site is on Next.js, am I necessarily affected?
No, not necessarily. Well-configured Next.js (App Router with Server Components or Pages Router with getServerSideProps / getStaticProps) renders correctly server-side. The problem occurs with default client-only configurations, or with critical components that load their content via client-side fetch after mount. The 5-minute test gives you the precise answer.
Does SearchGPT bypass the JavaScript problem for ChatGPT?
Partially. SearchGPT uses active web search, which can surface more content, including pages already rendered and indexed by search engines. But ChatGPT’s classic conversational mode keeps the uncertain behavior documented by Salt Agency. Best practice remains serving server-rendered HTML, which guarantees readability across all ChatGPT modes.
How much does switching from CSR to SSR cost on an existing site?
It varies based on site complexity. For a Next.js site with App Router migration: 5 to 15 days of dev for an average site. For a headless e-commerce with custom architecture: 20 to 40 days of dev. For a prerender solution like Prerender.io: 1 to 2 days of dev plus 50 to 200 euros of monthly cost. Profitability is measured on the AI Share of Voice delta gained, measurable at 60 days.
Will the situation evolve in 2026-2027 with new versions of Claude and Perplexity?
Possibly. Anthropic and Perplexity may invest in JavaScript rendering in coming months or years. The official Anthropic documentation in May 2026 confirms the absence of current support, with no roadmap announcement. Rather than waiting, the safest strategy is to serve server-rendered HTML, which will always be readable by all future LLM versions.
Are Cloudflare or a CDN enough to solve the problem?
No, not on their own. Cloudflare and CDNs serve content faster but don’t transform client-side JavaScript into rendered HTML. Cloudflare does offer prerender-style setups (HTML caching plus Workers) that can simulate SSR, but they require specific configuration. For a clean solution, native SSR from your framework remains preferable.
How do I know if the problem comes from JavaScript or another cause?
The curl test is the diagnostic. If `curl -L` on your URL returns HTML containing your main textual content, the problem doesn’t come from JavaScript. It may instead come from a lack of domain authority, poor chunking, an absence of brand entity signals, or simply low presence on the third-party sources LLMs prioritize. A Cockpyt AI audit with cross-LLM analysis identifies the precise cause.
Sources
- Anthropic, Web fetch tool documentation, updated May 2026, https://platform.claude.com/docs/en/agents-and-tools/tool-use/web-fetch-tool
- Salt Agency, Technical SEO for AI Search, September 2025, https://salt.agency/blog/technical-seo-for-ai-search/
- Glenn Gabe (GSQi), AI Search and JavaScript Rendering Case Study, August 2025, https://www.gsqi.com/marketing-blog/ai-search-javascript-rendering/
- Vercel, AI Bots and JavaScript Rendering Tests, December 2024.
- Stacker + Scrunch, Coverage Breadth Study: The Latest GEO Research, March 2026, https://stacker.com/blog/latest-research-on-expanding-brand-visibility-across-llms