How Adobe DITAWORLD Experts are Navigating Content in an AI Era
Discover the 4 elements that content leaders at Adobe DITAWORLD 2025 agree are essential to building AI-readiness and business agility in this new era of intelligent content.
The market is overflowing with stories of the great successes some companies are achieving with their AI projects. However, other teams are quietly stressed and frustrated, wondering why their projects don't reflect these promised results. The presentations at Adobe DITAWORLD 2025 addressed this gap, providing concrete insights into how to upgrade documentation for the AI era.
Vivek Kumar kicked off the 10th edition of the event, setting this theme from the very beginning as he introduced results from Adobe's recent AI Survey, in which 54% of respondents reported concerns about the output quality and hallucinations of Generative AI (GenAI) tools.
Read on to discover the 4 elements content leaders say are essential to building AI-readiness and business agility in this new era of intelligent content. We've got direct quotes from the presentations offering advice and first-hand experience on how to navigate this transition.
1. A Modular Mindset for Scalable Content
At DITAWORLD, it was no surprise that many presenters were strong advocates of structured authoring, stating that DITA (or another writing standard) is a foundational part of preparing content for AI.
In her presentation, "Analyzing & preparing your DITA for Generative AI", Amanda Patterson (Comtech Services) addressed the common question of "how do I know my content is going to be utilized by the LLM?". This question is particularly important for companies whose LLMs need to support archived content. "Depending on how far back you have to go, and when you have moved into DITA, you're going to get questions about 'Is it structured?' 'How do I get [the LLM] to read [the content]?' 'Do I have native files?' 'How are we getting that content in there?'".
Before discussing her AI content preparation list, Amanda reassured viewers who don't currently have concrete plans to implement AI, stating, "you can start working on these kinds of things to get your content prepped so that when the day comes and you are asked to feed the LLM, you're going to be ready." The first point on her list was to "make content modular and structured". Here, she recommended that anyone not already in DITA consider using it to structure content, while those already in DITA should focus on templates.
Elaborating on the value of DITA for AI, Amanda explained that "AI loves structure because it is still a machine. It is still reading the text or the code or source views of our content. And I think those of us who are like 'yeah, yeah, we've been in DITA forever, we're fine,' I really ask you to question your granularity." She illustrated the importance of granularity with an example: someone who Googles "how tall is André the Giant" expects a succinct, direct answer, not a long block of text with the answer hidden inside. So while many companies are already using DITA, they need to re-evaluate the size of their topics to ensure AI can easily extract key pieces of information.
Really contemplate granularity. And there's a lot of ways to get there. It doesn't necessarily mean you have to go and rewrite all of your topics. It might be adding a layer of metadata. It might be, you know, phrasing and adding some ideas [or] some ID tags and stuff like that in there.
Amanda Patterson
Senior Consultant at Comtech Services
How AI is Reshaping Technical Writing: Structure
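To make Amanda's granularity point concrete, here is a minimal sketch of an answer-sized DITA topic. The element names are standard DITA; the topic itself, its ID, and its wording are a hypothetical illustration, not an example from the presentation.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<!-- A deliberately small, self-contained topic: one question, one answer.
     A search engine or an LLM retrieval pipeline can return this unit
     on its own instead of a long page with the answer buried inside. -->
<concept id="andre-the-giant-height">
  <title>How tall was André the Giant?</title>
  <!-- The short description carries the direct answer, so it can be
       surfaced without parsing the whole body. -->
  <shortdesc>André the Giant was billed at 7 ft 4 in (224 cm).</shortdesc>
  <conbody>
    <p>Billed heights varied across wrestling promotions, but 7 ft 4 in
       is the figure most commonly cited.</p>
  </conbody>
</concept>
```

At this grain, rewriting is often unnecessary: as Amanda notes, a layer of metadata or stable IDs over existing topics can achieve much of the same effect.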
Anyone feeling overwhelmed by the transition to structured content can find valuable guidance in Greg Kalten and Alex Price's (Broadcom) presentation "Scale smart!". In 2019, Broadcom acquired CA Technologies, which required the migration of about 1.5 million unstructured content pages.
This migration raised the need for a CCMS that would let them scale documentation, work with a lean documentation team, and offer capabilities such as authoring, versioning support, translation services, and PDF, HTML5, and online publishing. Given these core capabilities, they chose to migrate to AEM 6.4.2 and Guides (then called DoX) 3.3. With a five-month deadline, they successfully moved from unstructured to structured content with the following steps (a minimal sketch of the converted structure follows the list):
- Convert existing content into DITA
- Package, upload, and install the content
- Generate the documentation output and verify that it works
- Fix any content issues and repeat the validation step until no issues remain
- Once everything works, publish the content
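As an illustration of what the first step produces, here is a minimal sketch of a DITA map referencing converted topics. The file names and titles are hypothetical; only the markup is standard DITA, not Broadcom's actual content.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<!-- The map stitches converted topics back into a publishable deliverable;
     each legacy page typically becomes one or more granular topics. -->
<map>
  <title>Product X Installation Guide</title>
  <topicref href="install-overview.dita" type="concept">
    <topicref href="install-prereqs.dita" type="reference"/>
    <topicref href="install-steps.dita" type="task"/>
    <topicref href="install-verify.dita" type="task"/>
  </topicref>
</map>
```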
Companies that haven't yet made the switch can learn from Broadcom's experience and prioritize best practices from the beginning. Although the process is complex and time-consuming, it's clear from Amanda's presentation that granular, structured content improves AI relevance, making it a worthy investment in future-proof documentation.
2. A Comprehensive Metadata Strategy: Annotate or Lag Behind
Alongside structure, another hallmark of modern technical content is metadata. Presenters were unanimous in the opinion that early metadata investment pays off, and that it is not optional. Steven Rosenegger (Topcon Positioning Systems) highlighted metadata and taxonomy as one of the first steps in his team's 90-day kickstart approach to developing future-proof content. The message was loud and clear, echoing across the presentations. Event attendee Heli Hytönen from Lionbridge came to the same conclusion, sharing on LinkedIn that "after two days of DITAWORLD here's my key takeaway: Annotate that data. Tag your content. Label it. Add metadata."
Why is metadata so important? It is essential for reuse, AI accuracy, and dynamic delivery. Well-annotated content helps AI systems find the right content faster for each user's needs and prevents hallucinations. Noz Urbina (Urbina Consulting) summed this up in his presentation "The Truth Collapse" when he explained that semantic data carries meaning: adding metadata makes content easier for machines to read and understand, which in turn helps human users find relevant information faster.
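As a concrete illustration (ours, not one shown at the event), here is a minimal sketch of metadata attached to a DITA topic through its prolog. The elements are standard DITA; the taxonomy values are hypothetical.

```xml
<!-- Metadata travels with the topic, so any downstream consumer
     (faceted search, a delivery portal, or an LLM retrieval pipeline)
     can filter on audience, product, and subject without parsing prose. -->
<prolog>
  <metadata>
    <audience type="user" experiencelevel="novice"/>
    <keywords>
      <keyword>installation</keyword>
      <keyword>prerequisites</keyword>
    </keywords>
    <prodinfo>
      <prodname>Product X</prodname>
      <vrmlist><vrm version="4" release="2"/></vrmlist>
    </prodinfo>
    <othermeta name="content-type" content="troubleshooting"/>
  </metadata>
</prolog>
```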
A Closer Look at KONE's Metadata Journey
Annotating content with metadata isn't a simple step to check off, but a complex new strategy and workflow. To illustrate this, Hanna Heinonen and Kristian Forsman (KONE) walked attendees through their metadata transition in the presentation "How KONE Delivers Intelligent Experiences" with Fabrice Lacroix (Fluid Topics).
There, Kristian explained that adding metadata was a long process, particularly as KONE had many legacy PDFs at the start. He explained that at the beginning, "PDFs were generated, and you don't really need that much metadata for that. But we were still kind of enforcing and making people add metadata to a lot of content, which has now been a very good thing." Several years later, with metadata as a core part of their content operations, KONE has improved information findability and search result accuracy.
If the users can get the information that they need and spend 10 less minutes at the elevator, that actually translates to millions. As I said, [KONE represents] 1.6 million equipment, so the minutes translate into millions.
Hanna Heinonen
Digital Content Lead at KONE R&D
How AI is Reshaping Technical Writing: Metadata
Crucially, Hanna also highlighted that this isn't a one-time project, "but it's an ongoing process over the years to improve and adjust." If that sounds scary or overwhelming alongside the rest of your content operations, Hanna reassured fellow content leaders, stating, "we noticed we shouldn't get lost in the look and feel of PDFs. What matters is actually the semantics of XML. We can fine-tune the style sheet, but need to pay attention to the tagging."
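To illustrate the distinction Hanna draws between look and feel and semantics (the example is ours, not KONE's), compare a presentational tag with a semantic one in DITA:

```xml
<!-- Presentational: tells the style sheet only how the text should look. -->
<p>Press the <b>Reset</b> button on the controller.</p>

<!-- Semantic: tells every downstream consumer what the text is.
     The style sheet can still render it bold, but search, reuse,
     and AI pipelines now know "Reset" is a user-facing control. -->
<p>Press the <uicontrol>Reset</uicontrol> button on the controller.</p>
```

Fine-tuning the style sheet changes only the first kind of information; the tagging carries the second.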
After several valuable insights, Fabrice summarized the key takeaways of KONE's metadata strategy: "Your advice is to do foundational investment into metadata as soon as you can. Be pragmatic, and then iterate to optimize that and add more as you understand the needs and the business case and the use case for those metadata for putting the right piece of information, like the context or the profile of the user, and all that."
Finally, looking at current content projects, KONE saw an opportunity to use AI to help onsite technicians solve complex issues remotely. Their new GenAI chatbot, Technician Assistant, is now the primary contact point for technicians and has greatly reduced time to resolution and increased help center call deflection.
KONE's many years of work structuring their content and implementing well-designed metadata for dynamic content delivery continue to provide business value. Those same efforts now also support efficient machine processing, helping their AI chatbot fetch the right data.
3. An Increase in Quality with Intention
"Content quality is the next frontier of AI performance improvement." With this maxim, Noz Urbina urged widespread quality assurance as a way to increase both efficiency and machine understanding of content. After all, maintaining a consistent quality level, including standardized terminology, concise language, and minimalist writing, reduces the risk of AI hallucinations.
Deborah Walker (Acrolinx) further highlighted the importance of integrating these quality steps into content operations in her presentation "Quality By Design". Today's explosion in content volume creates new risks in a complex and evolving regulatory landscape, and the manual, reactive strategy of treating compliance as a final hurdle is outdated, producing bottlenecks in the publication process. Companies must flip the approach and embed quality assurance and compliance into their content workflows for seamless operations, improved consistency, and enhanced efficiency. Among the benefits, she noted, this strengthens content governance for AI-generated content.
By making quality and compliance inherent in content creation, we can transform your structured DITA from a potential liability into a strategic compliance asset.
Deborah Walker
Manager of Linguistic Solutions at Acrolinx
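One way to make quality "inherent in content creation" is to automate checks at authoring time. Here is a minimal sketch, assuming DITA source and a Schematron-capable validator; the rule and its 250-character threshold are hypothetical, not Acrolinx functionality or an example from the presentation.

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <!-- Run against DITA topics during authoring or CI, before publication,
       so quality is enforced upstream rather than at a final review gate. -->
  <pattern id="concise-shortdesc">
    <rule context="shortdesc">
      <!-- Hypothetical threshold: overlong summaries bury the direct answer. -->
      <assert test="string-length(normalize-space(.)) &lt;= 250">
        The short description exceeds 250 characters; tighten it so
        search and AI retrieval can surface a direct answer.
      </assert>
      <assert test="string-length(normalize-space(.)) &gt; 0">
        Every topic needs a non-empty short description.
      </assert>
    </rule>
  </pattern>
</schema>
```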
4. Human-Led Content and Oversight for Value-Added AI
Alongside the excitement and best practices for optimizing content for AI tools, several presentations also highlighted risks and management considerations for companies. In fact, understanding the relationship between humans and AI was a consistent theme in the panel discussion "Empathy is not a prompt!", moderated by Stefan Gentz (Adobe) and featuring Bernard Aschwanden (CCMS Kickstart), Sarah O'Keefe (Scriptorium), and Markus Wiedenmaier (c-rex.net).
Their discussion touched on the notion that, as humans, we understand both users and systems, and we therefore retain responsibility and accountability over machines. Documentation teams control what goes into AI and are responsible for adjusting what comes out so that it is accurate and reliable. They are also responsible for users' trust, or lack thereof, in content, which is why quality assurance is so important in AI projects.
We're evolving into a trust-based economy… Do you trust the website? Do you trust the person that's giving you that information?
Sarah O'Keefe
CEO at Scriptorium
Later, the conversation turned to the shifting role of technical writers. Markus highlighted that whether AI will replace technical writers depends on how you define their role. Writers may write less and instead focus more on managing AI, as these systems are tricky and require oversight to maintain quality outputs. As the role shifts, documentation teams will become information architects who focus on content structure, governance, and reuse strategies.
As part of this oversight, Noz warned in his presentation that AI models use feedback loops, continuously ingesting our data and content to generate hyper-engaging content. This shift, from users who look for information to users who are fed information, puts volume over value, and here we risk truth collapse.
We have to control the information diet that goes into the LLMs so that it puts out good content.
Sarah O'Keefe
CEO at Scriptorium
Therefore, we need to use AI wisely so that its outputs create value. That means applying AI to tasks like auto-tagging, research synthesis, and persona and journey mapping, and treating its output as drafts, not deliverables.
How to Start? Adjust Now, Iterate Continuously
While this checklist of best practices may feel long and complex, don't worry: enhancing content for optimal use by AI systems isn't a one-time project. In other words, it's a marathon, not a sprint. Technical and product content is an evolving, living system that pays off when it's done right, so the key is to start building the foundation now and then continuously iterate and adjust for better results.
In the wise words of Noz Urbina, "For decades we've been preparing ourselves and our content for this moment in history." We know what works: clear language, modular and structured content, searchable formats, annotated text. None of it is new. But now it's time to optimize your strategy and put those best practices into action.
And don't miss the presentations mentioned in the article at the links below:
- Analyzing & preparing your DITA for Generative AI – Amanda Patterson (Comtech Services)
- Empathy is not a prompt! – Stefan Gentz (Adobe), Bernard Aschwanden (CCMS Kickstart), Sarah O'Keefe (Scriptorium), and Markus Wiedenmaier (c-rex.net)
- How KONE Delivers Intelligent Experiences – Hanna Heinonen (KONE), Kristian Forsman (KONE), and Fabrice Lacroix (Fluid Topics)
- How Topcon Future-Proofed Its Global Content – Steven Rosenegger (Topcon Positioning Systems) and Bernard Aschwanden (CCMS Kickstart)
- Quality By Design – Deborah Walker (Acrolinx)
- Scale smart! – Greg Kalten and Alex Price (Broadcom)
- Truth Collapse – Noz Urbina (Urbina Consulting)