
How Content Teams Can Prepare for 2026: Lessons from 2025’s Biggest Conferences

Discover key lessons for technical communications teams that we learned from major conferences in 2025, including LavaCon, tcworld, and KMWorld. From how to write for AI to key Agentic AI use cases and how to prove documentation value, these insights are important for any documentation strategy.

From LavaCon to tcworld and KMWorld, the 2025 end-of-year event season was packed with unique networking opportunities, bold insights, and concrete tips to tackle common documentation challenges. We’ve gathered the top takeaways from across these three conferences. It should come as no surprise that AI continues to be a major topic, with a special emphasis on the rise of Agentic AI and how it will deliver value to technical documentation teams. Let’s take a look!

Takeaway 1: Technical Writers Will Need to Write for AI

With the rise of Agentic AI and generative engine optimization, documentation teams must now create content that effectively serves two audiences: humans and AI systems. It’s not about creating separate content streams. Rather, writers must shift existing practices to develop explicit, structured, and context-rich documentation so that both AI and humans understand the content’s value.

Writing with Context Benefits Everyone 

At LavaCon’s Component Content Alliance Panel, “The Role of Structured Content and DITA in Agentic AI and RAG,” speakers agreed that AI doesn’t struggle because content is missing but because it’s often ambiguous or implicit. As Rob Hanna (Precision Content) noted, “A machine needs more of that good human-written content for context. […] We allow our human readers to hallucinate on our content all the time, and it’s expected. Right? So [if] we want to bring down hallucination for machines, we’ve got to bring down hallucination for humans as well.” The panelists challenged the common assumption that humans can easily interpret implicit information, infer context, and fill in gaps. Noz Urbina (Urbina Consulting) similarly highlighted that humans often struggle more with unclear content than they care to admit. Well-structured, explicit content benefits both audiences.

When you write for machines, you’ve got to be very explicit…

When you write for humans, you don’t know, […] what glasses people are wearing, what filters they are using that you don’t know about that prefilter context or content […]. When you write for a machine, you have to provide that context in order to make that clear. So, if you can write clear for machines, then the humans will interpret from that context as well […].

Wiegert Tierie

VP Strategic Accounts, RWS

Best Practices Still Matter, But They Need an AI Upgrade

In his LavaCon talk “The Curators of Truth: Elevating Knowledge in the Age of AI,” Jason Kaufman (Zaon Labs) underscored why shifting writing practices for AI-fit content matters. He described a test in which AI struggled with a seemingly simple task: extracting one piece of information from a two-page PDF. Although the document was factually accurate, it lacked the context the AI needed to arrive at the right answer. While this was just one document, many companies rely heavily on PDFs, so scaling this issue across hundreds of files can lead to significant AI confusion and hallucinations.

The solution is simply an evolution of existing best practices. “Now, we’re kind of at this time where we’re writing for both humans and machines, and the good news is, a lot of the best practices that we already apply help part of that,” Kaufman noted. His recommendation is for documentation teams to audit their content through an AI lens and incorporate relevant guidelines into style guides now.

If you haven't done this already, really take a look at what that means for your business and starting to incorporate those best practices into your style guides, applying that to the content today. We're all doing tagging. Hopefully we're adding context.

Jason Kaufman

CEO, Zaon Labs

At KMWorld, Michael Greenhow (Ondexx) and Simon Sorrell (KMS Lighthouse) also highlighted the value of structured writing for AI content understanding in the multi-speaker session “Industry Insights: AI & Future Workspaces”. Breaking content into dedicated topics that align with the broader documentation subject contributes to increased clarity, consistency, and context, all of which are beneficial for computer understanding. For knowledge teams that have yet to integrate structured writing tools in their content production, now could be the right time.


Use Information Architecture as a Guardrail for AI

At tcworld, Rahel Anne Bailie (Altuent) presented a framework for implementing “IA for AI”. Strong information architecture, spanning architectural, semantic, and editorial dimensions, directly reduces AI hallucinations, ambiguity, and inaccuracies. When documentation is organized through knowledge graphs, taxonomies, and consistent editorial conventions, AI systems achieve higher first-query success rates.

For example, when a UK bank put these structural guardrails in place, the chatbot’s first-response accuracy jumped from 65% to the high 80s.

They don’t want those 12 million […] people searching for something on a chatbot, not getting a response, and then saying ‘huh I think I'll just call the call center’. Right? Like, when you’re up in the millions you really want people to self-serve as much as they can. So, you want to add these structural guardrails.

Rahel Anne Bailie

Content Solutions Director, Altuent

These structural and semantic guardrails guide AI behavior by providing clear context and organization. Teams can use information architecture, semantic tags, and even Retrieval-Augmented Generation (RAG) models to provide context and guidance for the large language model (LLM). She explained that this kind of structure improves the LLM’s understanding of content, so that “then the machines know how to read it as well as we know how to read it”. As a result, the AI model produces more accurate, relevant responses.
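
To make the retrieval side of this concrete, here is a minimal Python sketch of the pattern Bailie describes: topics that carry explicit structure and semantic metadata are easier to retrieve precisely and can be handed to an LLM as grounded context. The Topic fields, the sample content, and the naive keyword matching are illustrative assumptions, not any specific product’s API; a real pipeline would plug in its own retriever and LLM call.

```python
# Minimal RAG-style sketch (illustrative only): structured, tagged topics make
# retrieval precise and give the LLM explicit context to answer from.

from dataclasses import dataclass


@dataclass
class Topic:
    title: str
    product: str   # semantic metadata: which product the topic covers
    audience: str  # semantic metadata: intended reader
    body: str


TOPICS = [
    Topic("Reset your password", "WebPortal", "end user",
          "1. Open Settings. 2. Select Security. 3. Choose Reset password."),
    Topic("Rotate API keys", "WebPortal", "administrator",
          "1. Open the Admin console. 2. Select API keys. 3. Click Rotate."),
]


def retrieve(query: str, product: str) -> list[Topic]:
    """Naive retrieval: filter on metadata, then match query words against the topic."""
    words = query.lower().split()
    return [
        t for t in TOPICS
        if t.product == product
        and any(w in (t.title + " " + t.body).lower() for w in words)
    ]


def build_prompt(query: str, topics: list[Topic]) -> str:
    """Ground the LLM in explicit, structured context instead of letting it guess."""
    context = "\n\n".join(f"[{t.title} | audience: {t.audience}]\n{t.body}" for t in topics)
    return f"Answer using only the documentation below.\n\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    hits = retrieve("reset password", "WebPortal")
    print(build_prompt("How do I reset my password?", hits))
    # The resulting prompt would then be sent to the team's LLM of choice.
```

The design point mirrors the panel discussions above: the more explicit the structure and metadata, the less the model has to guess.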

The more structure you have, the less hallucinations you will have, the less ambiguity, and the less response inaccuracies.

Rahel Anne Bailie

Content Solutions Director, Altuent

To conclude, Bailie outlined the clear business value companies get from adding layers of structure to their content for AI comprehension.

  • Disambiguates between concepts and terminology;
  • Categorizes content for better recommendations;
  • Accelerates content findability;
  • Improves answers to chatbot queries;
  • Surfaces business insights.

Takeaway 2: Agentic AI is Here…What Do Tech Writers Need to Know?

Agentic AI was a hot topic across the 2025 conferences. From what it is and how it works to best practices and use cases, presenters covered everything attendees wanted to know.

To get things started, Noz Urbina got everyone on the same page at LavaCon by breaking down how Agentic AI works. “The idea with Agentic AI is that you can put Agentic AIs as little workers in a workflow and they have capacity to take actions, make decisions, and do things without waiting for human prompts.” Rob Hanna then followed up, stating that “with Agentic AI, it’s not simply a decision bot. It’s not an algorithm. In fact, it works on outcomes and goals, alright, and finding your best set or course of action to achieve outcomes and goals.”

Concrete Agentic AI Use Cases

Understanding the “what” of Agentic AI is interesting; understanding the “how” is essential.

In the panel “The Role of Structured Content and DITA in Agentic AI and RAG” at LavaCon, Harald Stadlbauer (NINEFEB GmbH) spoke about how to use Agentic AI to optimize business value. He mentioned that “what we are seeing is that you are creating agents which are dedicated to very special kind of tasks.” Alongside this, Stadlbauer highlighted how “around 40-60% of the technical writer [job] is the research part. And the research part is something that is really best for being addressed by Agentic AI.”

Stadlbauer wasn’t the only one providing actionable advice on how technical documentation teams should implement Agentic AI. In his tcworld session “How to improve the experience, quality and trustworthiness of AI-Enabled Technical Content Delivery”, Kees van Mansom (Accenture) mapped three valuable Agentic AI use cases.

  1. Simple AI assistance: Teams can use AI-assisted authoring for simple content transformations. For instance, to get well-written documentation on how to fix a simple product issue, technical writers can create a prompt for an LLM to generate step-by-step instructions in simplified technical English (STE) and create visuals to go along with each step (a sketch of such a prompt follows this list).
  2. Co-authoring with an AI agent: When technical writers interview subject matter experts (SMEs), they will still prepare the questionnaire and conduct the live interview to gather specific, necessary information. An AI agent will then transcribe the interview. Other agents will use the transcript to develop new documentation topics directly in the CCMS according to the company’s writing guidelines, mark the topics as ready for validation, and send review requests to writers and SMEs.
  3. Keeping documentation up to date: AI agents can also handle incoming documentation change requests. An agent will analyze the content in question and determine where updates are needed. It will then assist in authoring and updating the impacted pieces of content directly in the CCMS.
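
As a rough illustration of the first use case, the sketch below only assembles the kind of prompt described; the issue text and the STE rules baked into the template are invented placeholders, and the resulting prompt would be sent to whichever LLM or AI assistant the team already uses.

```python
# Sketch of use case 1 (all names and rules are illustrative): building a prompt
# that asks an LLM for step-by-step instructions in Simplified Technical English.

PROMPT_TEMPLATE = """You are a technical writer.
Write step-by-step instructions that resolve the issue below.
Rules:
- Use Simplified Technical English: short sentences, one instruction per step.
- Number every step.
- After each step, add a one-line description of the visual that should accompany it.

Issue: {issue}
"""


def build_authoring_prompt(issue: str) -> str:
    """Return a ready-to-send authoring prompt for a simple troubleshooting topic."""
    return PROMPT_TEMPLATE.format(issue=issue)


if __name__ == "__main__":
    print(build_authoring_prompt("The printer shows error E03 after a paper jam."))
```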

How to Connect Knowledge Silos for Agentic AI

As we identified, Agentic AI goes beyond automating individual tasks to take action and complete goals using multiple systems and data sources. For technical documentation teams considering how to implement this technology, the challenge will quickly become apparent. How can teams connect the knowledge scattered across systems (e.g., in CRMs, helpdesks, CCMSs, and other tools) to the agentic system so the AI agents can access it?

In his tcworld presentation, “Automating Processes with Content and Agentic AI”, Fabrice Lacroix (Fluid Topics) examined this challenge and described the emerging technology that makes this possible: the Model Context Protocol (MCP). Rather than extracting data from each silo and injecting it directly into prompts, MCP changes how the LLM interacts with existing systems. When designing an agentic workflow, you specify the tools or endpoints the agent can consult. The model then determines at runtime whether it needs additional information and which system to query.

Let’s just send a question to the LLM, only the query, and then we let the LLM decide if it needs knowledge. And then the LLM will dynamically come to the silos and ask for the knowledge.

Fabrice Lacroix

CEO, Fluid Topics

With access to a list of available tools, the LLM can inspect each one’s capabilities, decide what data or operations it needs, and interact with those systems as required. MCP makes this possible by providing a shared communication layer between AI agents and enterprise applications. In Lacroix’s own words, “MCP is the language, the protocol for LLMs to talk to your applications in your company and ask for information.”

The protocol addresses a critical problem: most modern applications have APIs, but each one speaks its own “local language”. MCP acts as a universal translator (what Lacroix calls a “lingua franca”) that reduces the need for custom connectors and allows agentic AI systems to work with varied applications in a consistent way.

That’s the language that has been designed so that it can be spoken by any application between applications and LLMs.

Fabrice Lacroix

CEO, Fluid Topics

This is big for documentation teams, who have previously had to work with developers and IT to build complex RAG systems. Very soon, they will be able to create agentic workflows that intelligently pull the information an LLM needs directly from the centralized content repository, support tickets, CRM, or any other application. While this technology is still on the rise, it is becoming increasingly popular, as “every app today is adding an MCP server on top of its APIs.”
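
For a feel of what this looks like in practice, here is a minimal sketch that assumes the official MCP Python SDK (the mcp package) and its FastMCP helper; the docs-portal name, the search_docs tool, and its canned results are hypothetical stand-ins for a real content repository.

```python
# Minimal MCP server sketch, assuming the official `mcp` Python SDK (pip install mcp).
# It exposes one hypothetical documentation-search capability as a tool that an
# MCP-capable agent can discover and call at runtime.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-portal")


@mcp.tool()
def search_docs(query: str, product: str) -> list[str]:
    """Return documentation excerpts matching the query for a given product."""
    # A real server would query the content repository or delivery platform;
    # canned results keep this sketch self-contained.
    return [f"[{product}] Excerpt matching '{query}' (placeholder result)"]


if __name__ == "__main__":
    mcp.run()  # serve the tool so agents can list it and invoke it on demand
```

On the other side of the protocol, an agent simply lists this server’s tools and lets the model decide when to call search_docs, rather than hard-coding the lookup into the workflow.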

Takeaway 3: Teams Must Prove Documentation’s Business Value

Tracking content metrics within documentation teams remains important for understanding what to create, prioritize, and update. However, teams also need to go beyond this and connect documentation outcomes directly to the business value that executives care about. Several conference sessions outlined how to do just that, using AI tools to support the analysis where possible.

What Should Teams Measure and How Can AI Help?

Vishal Gupta (Cisco) outlined the challenge clearly in his presentation “Quantifying Quality: Navigating the Challenges of Measuring Content Impact” at LavaCon. “Defining content quality is straightforward. However, the challenge lies in measuring it consistently across our entire content set.” He shared his approach to documentation analytics, which focuses on three areas companies need to measure:

  1. Customer ratings to capture direct user feedback
  2. Self-service metrics through case deflection to track how effectively content empowers users to find answers autonomously
  3. Documentation defects identified through customer reports and internal audits

He then suggested using AI to analyze these data points, identify patterns, and categorize insights into actionable buckets (a simple triage sketch follows the list):

  • Clear documentation errors;
  • Enhancement opportunities (e.g., missing configuration steps);
  • Cases unrelated to documentation.
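
A minimal sketch of that triage step might look like the following; the ai_classify stub is a placeholder for whatever LLM or classification service a team actually uses, and its keyword rules only stand in for the AI call so the example runs on its own.

```python
# Illustrative triage sketch: sort analyzed feedback into Gupta's three buckets.
# The `ai_classify` stub stands in for a call to an LLM or classification service.

BUCKETS = ("documentation error", "enhancement opportunity", "not documentation-related")


def ai_classify(feedback: str) -> str:
    """Placeholder for an AI call; here, a crude keyword heuristic."""
    text = feedback.lower()
    if "wrong" in text or "incorrect" in text:
        return BUCKETS[0]
    if "missing" in text or "add" in text:
        return BUCKETS[1]
    return BUCKETS[2]


def triage(feedback_items: list[str]) -> dict[str, list[str]]:
    """Group raw feedback into actionable buckets for the documentation team."""
    grouped: dict[str, list[str]] = {b: [] for b in BUCKETS}
    for item in feedback_items:
        grouped[ai_classify(item)].append(item)
    return grouped


if __name__ == "__main__":
    sample = [
        "The default port listed in the install guide is wrong.",
        "Please add the missing proxy configuration steps.",
        "Customer wants a refund.",
    ]
    for bucket, items in triage(sample).items():
        print(bucket, "->", items)
```
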
Quality is a journey and measurement is a compass, which ensures we are sailing in the right direction.

Vishal Gupta

Content Designer, Cisco

How to Share Analytics Findings with Leadership

As Sofiya Minnath (Wheelo Technologies) shared at her LavaCon presentation, “Rethinking Content Metrics with AI: Proving Business Value and Driving Strategic Decisions”, tracking analytics is only half the battle. She explained how, once a team gathers interesting data, the next challenge is knowing how to present that information to leadership.

Rather than presenting raw numbers on their own, she recommends teams connect metrics to business value. Minnath explained that this helps illustrate how documentation teams can serve as “strategic growth drivers” rather than “cost centers”.

To demonstrate her point, she outlined some clear ways to connect metrics to business outcomes (a back-of-the-envelope example follows the list).

  • Instead of saying “the metrics show quicker onboardings,” say “we save X amount of time, which leads to faster ARR capture”.
  • Rather than mentioning the reduction in tickets, explain how much costs were avoided thanks to fewer tickets.
  • Teams can even investigate what percentage of people use content in the sales process and use that number as “revenue influence”.
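
As a back-of-the-envelope example of the second bullet, the figures below are invented; the point is the translation from an activity metric (deflected tickets) into a number leadership recognizes (cost avoided).

```python
# Hypothetical calculation: translating ticket deflection into avoided support
# cost. All figures are invented for illustration.

monthly_self_service_answers = 4_000   # sessions where docs resolved the issue
deflection_rate = 0.25                 # share that would otherwise become tickets
cost_per_ticket = 18.0                 # fully loaded cost of one support ticket (USD)

avoided_tickets = monthly_self_service_answers * deflection_rate
avoided_cost = avoided_tickets * cost_per_ticket

print(f"Avoided tickets per month: {avoided_tickets:.0f}")
print(f"Support cost avoided per month: ${avoided_cost:,.0f}")
# -> roughly 1,000 tickets and $18,000 per month in this invented scenario
```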

Final Takeaways from 2025’s Biggest Conferences

As expected, these conferences provided valuable insights and concrete examples to help technical documentation and knowledge teams tackle new challenges and emerging opportunities head-on. For teams working on their 2026 event schedule to get even more insights, check out our list of top technical writer conferences.

With so many great speakers, it’s impossible to fit all of the nuggets of wisdom into one article. Here are some of our other favorite quotes from onsite conferences:

  • “If your content isn’t for everyone, it’s for no one.” (Dipo Ajose-Coker, RWS)
  • “[Customers] don’t care. They are not interested in our organizational problems. They are interested in solving the problem that has them annoyed.” (Sarah O’Keefe, Scriptorium)
  • “Content is king, context is kingdom.” (Stefan Gentz, Adobe)
  • “If you are writing well for machines, by definition, you get better content for humans. It’s the explicit content or explicit purpose versus the implied or implicit purpose that is the differentiator.” (Lief Erikson, Intuitive Stack)
  • “AIs, they have limitations, they have cognitive limitations the same way organic intelligences do, and if you can provide more structure, more context, more specificity to narrow down what they should be focusing on then they will perform better.” (Noz Urbina, Urbina Consulting)
