An extended version of llms.txt that provides the full text of a site's most important content in a single file, so AI models can ingest the whole site without crawling individual pages.
llms-full.txt is a proposed companion to llms.txt that goes one step further. Instead of just listing links and descriptions, it embeds the full Markdown content of a site's key pages in a single file. The goal is to give large language models a complete, self-contained representation of the site that can be ingested in one shot.
Where llms.txt is an index (here are the pages you should look at), llms-full.txt is a corpus (here is the actual content of those pages). Both files typically live at the root of a domain, at `example.com/llms.txt` and `example.com/llms-full.txt`.
LLMs have finite context windows, and the crawlers that feed them are often rate-limited. A site that exposes its critical content in a single pre-concatenated file dramatically reduces the effort required for an AI model to understand the whole site. This is especially valuable for documentation-heavy sites, glossaries, and reference materials, where the value comes from the entire collection rather than any single page.
Early adopters like Anthropic, Cloudflare, and several developer tool companies publish llms-full.txt files for their documentation. As AI agents become more common, expect this format to become a standard way of making a site agent-ready.
Acta AI publishes both `llms.txt` and `llms-full.txt` at the root of withacta.com. The full-text version includes the complete content of every glossary term, every feature page, and every core blog post, concatenated as Markdown with clear section dividers. This makes the entire site ingestible by any AI model in a single request.
A typical llms-full.txt file starts with a short header and then concatenates Markdown content with dividers between sections:
```markdown
# Acta AI, Full Content Index

> AI-powered autoblogging for WordPress, Shopify,
> Wix, and headless sites.

## /glossary/generative-engine-optimization

# Generative Engine Optimization (GEO)

GEO is the process of structuring, formatting, and writing content so that large language models...

---

## /glossary/query-fan-out

# Query Fan-Out

Query fan-out is the practice of decomposing a single user query into multiple sub-queries...

---

## /features/content-pipeline

# 10-Stage Content Pipeline

Every Acta AI article flows through a ten-stage generation pipeline...
```
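Since llms-full.txt is just concatenated Markdown, generating one is straightforward. A minimal sketch, assuming your site content lives as `.md` files in a directory tree (the header text, divider convention, and function name are illustrative, not part of any spec):

```python
from pathlib import Path

HEADER = (
    "# Acta AI, Full Content Index\n\n"
    "> AI-powered autoblogging for WordPress, Shopify,\n"
    "> Wix, and headless sites.\n"
)

def build_llms_full(content_dir: str, out_path: str) -> None:
    """Concatenate every Markdown page under content_dir into one file,
    with a URL-style '## /path' header per section and '---' dividers."""
    sections = []
    for md_file in sorted(Path(content_dir).rglob("*.md")):
        # Derive the section header from the file path, e.g. ## /glossary/query-fan-out
        rel = "/" + md_file.with_suffix("").relative_to(content_dir).as_posix()
        body = md_file.read_text(encoding="utf-8").strip()
        sections.append(f"## {rel}\n\n{body}")
    full = HEADER + "\n" + "\n\n---\n\n".join(sections) + "\n"
    Path(out_path).write_text(full, encoding="utf-8")
```

In practice this would run as a build step so the file is regenerated whenever site content changes.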
An AI model fetching a single URL now has the entire glossary, feature set, and documentation in one ingestible document, with clear section markers it can use to build internal citations.
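On the consuming side, those section markers make the file easy to split back into per-page chunks. A sketch of a parser, assuming the `## /path` header and `---` divider conventions shown above (there is no formal grammar for llms-full.txt, so this is one plausible reading):

```python
import re

def split_sections(text: str) -> dict[str, str]:
    """Map URL-style section headers ('## /path') to their Markdown bodies."""
    sections: dict[str, str] = {}
    path = None
    buf: list[str] = []
    for line in text.splitlines():
        m = re.match(r"^## (/\S+)$", line.strip())
        if m:
            if path:  # flush the previous section before starting a new one
                sections[path] = "\n".join(buf).strip()
            path, buf = m.group(1), []
        elif line.strip() == "---":
            continue  # skip dividers between sections
        elif path:
            buf.append(line)
    if path:
        sections[path] = "\n".join(buf).strip()
    return sections
```

Each key is then a ready-made citation target: an agent quoting a passage can point back to `example.com` plus the section's path.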