Publishers Urged to Adopt 'Generative Engine Optimisation' at London Book Fair
A prominent session at the London Book Fair argued that publishers must urgently develop strategies for 'Generative Engine Optimisation' — ensuring their books and authors are discoverable and accurately represented by AI-powered search and recommendation systems.

Analysis
The concept of "Generative Engine Optimisation" — or GEO — is emerging as one of the most important strategic challenges for publishers in the AI era, and the London Book Fair session that introduced it to a mainstream publishing audience has clearly struck a nerve. The core insight is straightforward: as readers increasingly use AI-powered tools to discover, evaluate, and purchase books, the criteria for discoverability are changing in ways that traditional SEO strategies do not address. A publisher that has invested heavily in Google search optimisation may find that its books are invisible to readers using ChatGPT, Perplexity, or the AI-powered recommendation features now being built into major retail platforms.
The mechanics of GEO differ from traditional SEO in important ways. Search engine optimisation works by influencing how algorithms rank pages based on signals like keyword density, backlinks, and page authority. Generative engine optimisation works by influencing how AI systems represent and recommend content — a process that is less transparent, less predictable, and less amenable to the kind of systematic optimisation that SEO practitioners have developed over decades. An AI system's recommendation of a book depends on the training data it has been exposed to, the way that book and its author are described across the web, and the specific query being answered — none of which publishers can control directly.
What publishers can do is ensure that accurate, rich, and compelling information about their books and authors is widely available in forms that AI systems can access and process. This means investing in structured data — metadata that clearly describes a book's genre, themes, comparable titles, and intended audience in formats that AI systems can parse. It means ensuring that author profiles, book descriptions, and editorial reviews are consistent and accurate across all the platforms and databases that AI systems draw on. And it means thinking carefully about the language used to describe books, since AI systems are sensitive to the framing and context in which content is presented.
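As a concrete illustration of the structured data described above, the sketch below builds a schema.org "Book" record in JSON-LD, a widely parsed structured-data format. All titles, names, and identifiers are placeholders, not real catalogue data, and the exact fields a given AI or search system consumes will vary.

```python
import json

# Illustrative schema.org "Book" record in JSON-LD, a structured-data
# format that search and AI systems commonly parse. Every value below
# is a placeholder, not real catalogue data.
book_metadata = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Example Title",
    "author": {"@type": "Person", "name": "Example Author"},
    "isbn": "9780000000000",
    "genre": ["Literary Fiction"],
    "keywords": "family saga, coastal setting",
    "abstract": "A short, accurate description of the book's themes.",
    "audience": {"@type": "Audience", "audienceType": "adult general readers"},
    "publisher": {"@type": "Organization", "name": "Example Press"},
}

# Serialise for embedding on a product or author page, e.g. inside a
# <script type="application/ld+json"> tag.
jsonld = json.dumps(book_metadata, indent=2)
print(jsonld)
```

The same record, kept consistent across retailer feeds, author pages, and bibliographic databases, gives AI systems one authoritative description to draw on rather than several conflicting ones.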
The session at the London Book Fair also raised a more uncomfortable question: what happens when AI systems misrepresent books or authors? The risk of AI hallucination — generating plausible-sounding but factually incorrect information — is well documented, and the consequences for publishers and authors could be significant. An AI system that confidently describes a book as belonging to a genre it does not occupy, or attributes views to an author that they do not hold, could damage sales and reputation in ways that are difficult to correct. Publishers will need to develop monitoring and correction processes for AI-generated content about their titles, adding a new dimension to an already complex metadata management challenge.
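A monitoring process of the kind suggested above might, in its simplest form, compare an AI-generated description of a title against the publisher's canonical record and flag mismatches for human review. The sketch below assumes the AI text has already been collected (by API sampling or manual checks, which are out of scope here); the records and the naive string matching are purely illustrative.

```python
# Minimal sketch of a metadata-drift check: compare an AI-generated
# description of a title against the publisher's canonical record and
# flag discrepancies for human review. The record and sample text are
# placeholders; real checks would need fuzzier matching than the
# simple substring tests used here.

CANONICAL = {
    "title": "Example Title",
    "genre": "literary fiction",
    "author": "Example Author",
}

def flag_discrepancies(ai_text: str, record: dict) -> list[str]:
    """Return a list of fields the AI text appears to contradict or omit."""
    text = ai_text.lower()
    issues = []
    if record["author"].lower() not in text:
        issues.append("author not mentioned or misattributed")
    if record["genre"].lower() not in text:
        issues.append(f"genre '{record['genre']}' absent; possible misclassification")
    return issues

# An AI summary that mislabels the book's genre:
sample = "Example Title by Example Author is a gripping thriller."
print(flag_discrepancies(sample, CANONICAL))
# → ["genre 'literary fiction' absent; possible misclassification"]
```

Even a crude check like this turns a vague reputational worry into a reviewable queue of specific discrepancies, which is the practical starting point for any correction workflow.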
The emergence of GEO as a discipline reflects a broader truth about the AI transition in publishing: the changes are not limited to content creation and rights. They extend to every aspect of how books are discovered, evaluated, and sold. Publishers who treat AI as a production technology story — relevant to editorial and operations but not to marketing and sales — are missing half of the picture. The London Book Fair session is a useful corrective, and the publishers who leave it with a concrete GEO strategy will be better positioned than those who treat it as an interesting idea to revisit later.