

How Adobe DITAWORLD Experts are Navigating Content in an AI Era

Jun 24, 2025  |  Reading Time: 8 minutes

The market is overflowing with stories of the great successes some companies are achieving with their AI projects. However, other teams are quietly stressed and frustrated, wondering why their own projects aren’t delivering the results they were promised. The presentations at Adobe DITAWORLD 2025 addressed this gap, providing concrete insights into how to upgrade documentation for the AI era.

Vivek Kumar kicked off the 10th edition of this event, underscoring this theme from the very beginning as he introduced results from Adobe’s recent AI Survey, which reported that 54% of respondents were concerned about the output quality and hallucinations of Generative AI (GenAI) tools.

Read on to discover the 4 elements content leaders say are essential to building AI-readiness and business agility in this new era of intelligent content. We’ve got direct quotes from the presentations offering advice and first-hand experience on how to navigate this transition.

1. A Modular Mindset for Scalable Content

At DITAWORLD, it was no surprise that many presenters were big advocates for structured authoring tools, stating that DITA (or other writing standards) is a foundational part of preparing content for AI.

In her presentation, “Analyzing & preparing your DITA for Generative AI”, Amanda Patterson (Comtech Services) addressed the common question of ‘how do I know my content is going to be utilized by the LLM?’. This question is particularly important for companies whose LLMs need to support archived content. “Depending on how far back you have to go, and when you have moved into DITA, you’re going to get questions about ‘Is it structured?’ ‘How do I get [the LLM] to read [the content]?’ ‘Do I have native files?’ ‘How are we getting that content in there?’”.

Before discussing her AI content preparation list, Amanda reassured viewers who don’t currently have concrete plans to implement AI, stating, “you can start working on these kinds of things to get your content prepped so that when the day comes and you are asked to feed the LLM, you’re going to be ready.” The first point on her list was to “make content modular and structured”. Here, she recommended that anyone not already in DITA consider using it to structure content, while those already in DITA should focus on templates.

Elaborating on the value of DITA for AI, Amanda explained that “AI loves structure because it is still a machine. It is still reading the text or the code or source views of our content. And I think those of us who are like ‘yeah, yeah we’ve been in DITA forever, we’re fine,’ I really ask you to question your granularity.” She illustrated the point with an example: someone who Googles “how tall is André the Giant” expects a succinct, direct answer, not a long block of text with the answer hidden inside. So while many companies are already using DITA, they need to re-evaluate the size of their topics to ensure AI can easily extract key pieces of information.

“Really contemplate granularity. And there’s a lot of ways to get there. It doesn’t necessarily mean you have to go and rewrite all of your topics. It might be adding a layer of metadata. It might be, you know, phrasing and adding some ideas [or] some ID tags and stuff like that in there.”

Amanda Patterson, Senior Consultant at Comtech Services
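
To make the granularity point concrete, here is a minimal sketch in Python of what section-level chunking could look like before content is indexed for an LLM. It assumes a simple DITA-style topic without specializations; the file name and chunking rule are illustrative, not anything Amanda prescribed.

    # Minimal sketch: split one long DITA-style topic into section-level chunks
    # so a retrieval pipeline can surface a short, direct answer instead of a
    # wall of text. Element names and the file name are illustrative only.
    import xml.etree.ElementTree as ET

    def chunk_topic(path):
        """Return one chunk per <section>, keyed by the section's id and title."""
        root = ET.parse(path).getroot()
        topic_title = root.findtext("title", default="")
        chunks = []
        for section in root.iter("section"):
            text = " ".join(" ".join(section.itertext()).split())  # normalize whitespace
            chunks.append({
                "id": section.get("id", ""),  # stable IDs make each chunk addressable
                "title": f"{topic_title}: {section.findtext('title', default='')}",
                "text": text,
            })
        return chunks

    # Each chunk can be embedded and indexed on its own, so a narrow question
    # retrieves a focused passage rather than the whole topic.
    for chunk in chunk_topic("example_topic.dita"):
        print(chunk["id"], len(chunk["text"].split()), "words")

Whether this kind of chunking happens in the authoring pipeline or downstream in the delivery platform matters less than the principle: granular, addressable units give the model something it can quote directly.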

Anyone feeling overwhelmed by the transition to structured content can find valuable guidance in Greg Kalten and Alex Price’s (Broadcom) presentation “Scale smart!”. In 2019, Broadcom acquired CA Technologies, which required the migration of about 1.5 million pages of unstructured content.

This migration created the need for a CCMS that would let them scale documentation, work with a lean documentation team, and provide capabilities such as authoring, versioning support, translation services, PDF and HTML5 publishing, and online publishing. Given these core requirements, they chose to migrate to AEM 6.4.2 and Guides (then called DoX) 3.3. With a five-month deadline, they successfully moved from unstructured to structured content with the following steps:

  • Convert existing content into DITA
  • Package, upload, and install the content
  • Generate the documentation and verify it to see if it works
  • Fix any content issues and repeat the validation step, if needed, until there are no more issues
  • Once everything works, publish the content

Companies that haven’t yet made the switch can learn from Broadcom’s experience to prioritize best practices from the beginning. Despite this being a complex and time-consuming process, it’s clear from Amanda’s presentation that granular, structured content improves AI relevance, making it a worthy investment for future-proof documentation.
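
As a rough, purely illustrative sketch of the loop Broadcom describes, the snippet below models convert, generate, verify, fix, and publish as a repeat-until-clean cycle. Every helper is a stand-in stub; a real pipeline would call conversion scripts, the CCMS API, and the publishing engine.

    # Stand-in sketch of a convert -> generate -> verify -> fix -> publish loop;
    # none of these helpers are real conversion or CCMS calls.
    def convert_to_dita(page):
        return f"<topic id='{page}'><title>{page}</title></topic>"  # placeholder conversion

    def build_docs(topics):
        return "\n".join(topics)  # placeholder for generating the documentation

    def find_issues(output):
        return ["empty topic id"] if "id=''" in output else []  # placeholder verification

    def migrate(pages, max_passes=5):
        topics = [convert_to_dita(p) for p in pages]   # 1. convert existing content to DITA
        for _ in range(max_passes):
            output = build_docs(topics)                # 2-3. package and generate
            issues = find_issues(output)               # 4. verify the output
            if not issues:
                print("published", len(topics), "topics")  # 5. publish once everything works
                return
            topics = [t for t in topics if "id=''" not in t]  # placeholder fix, then repeat
        raise RuntimeError("validation issues remain after repeated passes")

    migrate(["install-guide", "release-notes"])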

2. A Comprehensive Metadata Strategy: Annotate or Lag Behind

Alongside structure, another key marker of modern technical content is metadata. Presenters were unanimous: early metadata investment pays off, and it is not optional. Steven Rosenegger (Topcon Positioning Systems) highlighted metadata and taxonomy as one of the first steps in his team’s 90-day kickstart approach to developing future-proof content. The message was loud and clear across the presentations. Event attendee Heli Hytönen from Lionbridge came to the same conclusion, sharing on LinkedIn that “after two days of DITAWORLD here’s my key takeaway: Annotate that data. Tag your content. Label it. Add metadata.”

Why is metadata so important? It is essential for reuse, AI accuracy, and dynamic delivery. Well-annotated content helps AI systems find the right information faster for each user’s needs and reduces the risk of hallucinations. Noz Urbina (Urbina Consulting) summed this up in his presentation “The Truth Collapse” when he explained that semantic data carries meaning. Adding metadata therefore makes content easier for machines to read and understand, which in turn helps human users find relevant information faster.
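
To picture why this matters at answer time, here is a small hypothetical sketch: filtering a corpus by product, audience, and version metadata before ranking means the AI only ever sees content that can apply to the person asking. The product names, fields, and values are invented for illustration.

    # Hypothetical illustration: metadata narrows the candidate set before retrieval,
    # so the model ranks only content that matches the user's context.
    from dataclasses import dataclass, field

    @dataclass
    class Chunk:
        text: str
        meta: dict = field(default_factory=dict)

    corpus = [
        Chunk("Reset the controller by holding the service button for five seconds.",
              {"product": "XC-200", "audience": "technician", "version": "2.4"}),
        Chunk("End users can restart the unit from the mobile app.",
              {"product": "XC-200", "audience": "end-user", "version": "2.4"}),
    ]

    def candidates(chunks, **filters):
        """Keep only chunks whose metadata matches the user's context."""
        return [c for c in chunks if all(c.meta.get(k) == v for k, v in filters.items())]

    # A technician asking about the XC-200 only ever gets technician content, which
    # shrinks the search space and lowers the odds of a confidently wrong answer.
    for c in candidates(corpus, product="XC-200", audience="technician"):
        print(c.text)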

A Closer Look at KONE’s Metadata Journey

Annotating content with metadata isn’t a simple step to check off, but a complex new strategy and workflow. To illustrate this, Hanna Heinonen and Kristian Forsman (KONE) walked attendees through their metadata transition in the presentation “How KONE Delivers Intelligent Experiences” with Fabrice Lacroix (Fluid Topics).

There, Kristian explained that adding metadata was a long process, particularly as KONE had many legacy PDFs at the start. He explained that at the beginning “PDFs were generated, and you don’t really need that much metadata for that. But we were still kind of enforcing and making people add metadata to a lot of content which has now been a very good thing.” Several years later, with metadata as a core part of their content operations, KONE has improved information findability and search result accuracy.

“If the users can get the information that they need and spend 10 less minutes at the elevator, that actually translates to millions. As I said, [KONE represents] 1.6 million equipment, so the minutes translate into millions.”

Hanna Heinonen, Digital Content Lead at KONE R&D

Crucially, Hanna also highlighted that this isn’t a one-time project, “but it’s an ongoing process over the years to improve and adjust.” If that sounds scary or overwhelming alongside the rest of your content operations, Hanna reassured fellow content leaders, stating, “we noticed we shouldn’t get lost in the look and feel of PDFs. What matters is actually the semantics of XML. We can fine tune the style sheet but need to pay attention to the tagging.”

After several valuable insights, Fabrice summarized the key takeaways of KONE’s metadata strategy. “Your advice is to do foundational investment into metadata as soon as you can. Be pragmatic, and then iterate to optimize that and add more as you understand the needs and the business case and the use case for those metadata for putting the right piece of information like the context or the profile of the user and all that.”

Finally, looking at current content projects, KONE saw an opportunity to help onsite technicians remotely solve complex issues with AI. Their new GenAI chatbot, Technician Assistant, is now the primary contact point for technicians and has greatly reduced the time to resolution and increased help center call deflections.

KONE’s many years of work to structure their content and implement well-designed metadata for dynamic content delivery continue to provide business value. These efforts are now also beneficial for efficient machine processing, helping their AI chatbot fetch data in a smart way.

3. An Increase in Quality with Intention

“Content quality is the next frontier of AI performance improvement.” This maxim from Noz Urbina urges widespread implementation of quality assurance to increase efficiency and machine understanding of content. After all, maintaining a consistent quality level (standardized terminology, concise language, and minimalist writing) reduces the risk of AI hallucinations.

Deborah Walker (Acrolinx) further highlighted the importance of integrating these quality steps into your content operations in her presentation “Quality By Design”. Today’s explosion in content volume creates new risks in a complex and evolving regulatory landscape. A manual, reactive strategy that treats compliance as a final hurdle is outdated and creates bottlenecks in the publication process. Companies must flip the approach and embed quality assurance and compliance into their content workflows for seamless operations, improved consistency, and greater efficiency. She noted that, among other benefits, this strengthens content governance for AI-generated content.

“By making quality and compliance inherent in content creation, we can transform your structured DITA from a potential liability into a strategic compliance asset.”

Deborah Walker, Manager of Linguistic Solutions at Acrolinx
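
As a toy illustration of what “quality by design” automation can look like (this is not Acrolinx’s product, and the terms and limits below are invented), simple checks can run while the author is still writing instead of being saved for a final review gate:

    # Toy example of shift-left quality checks: flag discouraged terminology and
    # overlong sentences at authoring time. The rules here are invented.
    import re

    PREFERRED_TERMS = {"log in": ["login to", "sign into"]}  # preferred -> discouraged variants
    MAX_SENTENCE_WORDS = 25

    def check(text):
        findings = []
        for preferred, variants in PREFERRED_TERMS.items():
            for variant in variants:
                if variant in text.lower():
                    findings.append(f"use '{preferred}' instead of '{variant}'")
        for sentence in re.split(r"[.!?]\s+", text):
            if len(sentence.split()) > MAX_SENTENCE_WORDS:
                findings.append(f"sentence exceeds {MAX_SENTENCE_WORDS} words: '{sentence[:40]}...'")
        return findings

    print(check("Sign into the console. Then login to the admin panel."))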

4. Human-Led Content and Oversight for Value-Added AI

Alongside the excitement and best practices for optimizing content for AI tools, several presentations also highlighted risks and management considerations for companies. In fact, understanding the relationship between humans and AI was a consistent theme in the panel discussion “Empathy is not a prompt!” moderated by Stefan Gentz (Adobe) and featuring Bernard Aschwanden (CCMS Kickstart), Sarah O’Keefe (Scriptorium), and Markus Wiedenmaier (c-rex.net).

Their discussion touched on the notion that, as humans, we understand both users and systems and therefore retain responsibility and accountability over machines. Documentation teams control what goes into AI and are responsible for adjusting what comes out so that it is accurate and reliable. They are also responsible for users’ trust, or lack thereof, in content, which is why quality assurance is so important in AI projects.

“We’re evolving into a trust-based economy…Do you trust the website? Do you trust the person that’s giving you that information?”

Sarah O’Keefe, CEO at Scriptorium

Later, their conversation touched on the shifting role of technical writers. Markus highlighted that whether AI will replace technical writers depends on how you define their roles. They may write less, instead focusing more on AI management as these systems are tricky and require oversight to maintain quality outputs. As this role shifts, documentation teams will become information architects that focus on content structure, governance, and reuse strategies.

As part of this oversight, Noz warned in his presentation that AI models use feedback loops to continuously ingest our data and content in order to generate hyper-engaging content. However, this shift from actively seeking information to passively being fed it puts volume over value, and that is where we risk truth collapse.

“We have to control the information diet that goes into the LLMs so that it puts out good content.”

Sarah O’Keefe, CEO at Scriptorium

Therefore, we need to use AI wisely so that its outputs create value. This means applying AI to tasks like auto-tagging, research synthesis, and persona and journey mapping, and treating what it produces as drafts, not deliverables.
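
One small, hypothetical sketch of “drafts, not deliverables” in practice: an assistant proposes metadata tags, and a writer approves or corrects them before anything is committed. The keyword heuristic below stands in for whatever model or tagging service a team actually uses.

    # Illustrative only: the suggestion step stands in for an AI tagger; nothing is
    # applied until a human has reviewed it.
    def suggest_tags(text):
        keywords = {"install": "installation", "error": "troubleshooting", "upgrade": "upgrade"}
        return sorted({tag for word, tag in keywords.items() if word in text.lower()})

    def apply_reviewed(draft_tags, approved_by_writer):
        """Only human-approved tags are kept; the draft is just a starting point."""
        return [t for t in draft_tags if t in approved_by_writer]

    draft = suggest_tags("If an error appears during install, retry the upgrade.")
    final = apply_reviewed(draft, approved_by_writer=["installation", "troubleshooting"])
    print("draft:", draft)   # ['installation', 'troubleshooting', 'upgrade']
    print("final:", final)   # ['installation', 'troubleshooting']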

How to Start? Adjust Now, Iterate Continuously

While this list of best practices may feel long and complex, don’t worry. Enhancing content for optimal use by AI systems isn’t a one-time project. In other words, it’s a marathon, not a sprint. Technical and product content is an evolving, living system that pays off when it’s done right, so the key is to start building the foundation now and then continuously iterate and adjust for better results.

In the wise words of Noz Urbina, “For decades we’ve been preparing ourselves and our content for this moment in history.” We know what works — clear language, modular and structured content, searchable formats, annotated text — and it’s nothing new. But now, it’s time to optimize your strategy and put those best practices into action.

And don’t miss the presentations mentioned in the article at the links below:


About The Author

Kelly Dell

With a background rooted in digital marketing for B2B startups, Kelly strives to help tech companies understand and connect with their customers through engaging, impactful content. Her expertise spans content marketing, social media, SEO, and project management.
