All posts by Fabrice LACROIX

Fabrice is Fluid Topics' visionary thinker. By tirelessly meeting clients, prospects, and partners, he senses the needs of the market and fuels his creativity to invent the functions that make Fluid Topics the market-leading solution for dynamic technical content delivery.

New White Paper: Reshape Customer Support with Tech Content in Salesforce

Far beyond its roots in Customer Relationship Management, Salesforce now delivers and supports many customer-centric processes. And yet, despite being built around a comprehensive and extensible data model, Salesforce is not equipped for storing and managing the technical documentation that is needed.

Fortunately, options are available for solving this challenge! The white paper Reshape Customer Support with Tech Content in Salesforce consolidates years of experience in building efficient customer support infrastructures, and is brought to you courtesy of the content delivery experts at Fluid Topics.

This guide details the practical aspects of making technical content available through Salesforce. It also explains how to interconnect that content with Salesforce. Along the way, concrete examples are provided that show the limitations of default choices, and indicate how to dramatically improve the efficiency of both information storage and retrieval. A special focus is placed on the highest-impact capability of information delivery: search.

Want to learn more?

Reshape customer support now. Download the white paper that shows you how.

Get the White Paper Now!

Antidot announces a €5.5m funding round from Ventech and CM-CIC Innovation to accelerate its international deployment of Fluid Topics

Lyon, January 23, 2019 – Antidot, the software vendor behind the Fluid Topics Dynamic Content Delivery platform, announced today that it has closed a €5.5m financing round from pan-European investment fund Ventech and its historical partner CM-CIC Innovation (a member of Crédit Mutuel Alliance Fédérale Group).

With roots in Lyon and Lambesc (France) and field operations in Boston (US), Antidot has been developing innovative search and enrichment technologies that help companies leverage their textual content since 1999. Capitalizing on its expertise, in 2015 Antidot launched Fluid Topics, a Content Delivery Platform that revolutionizes how technical documentation and product information is published and accessed in Customer Support and Field Maintenance applications.

Antidot targets makers of sophisticated products (software vendors, industrial firms, high-tech equipment manufacturers, infrastructure operators, financial services, etc.) for whom seamless access to technical content is critical to product adoption and customer satisfaction.

This funding enables Antidot to accelerate the development of its strong technology roadmap around content delivery. Building on recent successes with major customers that include Talend (US), Teradata (US), Kone (Finland), Médiamétrie (France) and EasyMile (France), this €5.5m capital infusion also signifies an increase in Antidot’s global coverage as the firm endeavors to both strengthen its US operations and deepen its focus on Northern and Central Europe.

Fabrice Lacroix, CEO and Founder of Antidot, attests to the value of this funding, explaining: “We are pleased to receive the backing of a global investment player like Ventech, and a renewed commitment from CM-CIC Innovation who has been supporting us since the beginning of the story. This new funding will allow us to accelerate the next stages of our growth, both by expanding the functional scope of our platform, and by extending our market coverage in a context of strong internationalization.”

Karine Lignel, President of CM-CIC Innovation and member of the Board of Antidot, adds: “CM-CIC Innovation is proud to back Antidot, and to have supported the various phases of the company’s development. The recent successes of the Fluid Topics solution are extremely promising and we are excited about the next phase of growth.”

Claire Houry, General Partner at Ventech, who with this investment joins the Board of Antidot, explains: “We were impressed by Antidot’s highly experienced team and its extensive international client list. With Fluid Topics, the company leverages 20 years of expertise in Machine Learning to offer software vendors and industrial companies a solution that re-invents the user experience of Customer Support or Field Maintenance departments. We are very proud to work with them to accelerate their international development.”

About Antidot

Antidot is the vendor of a content delivery software platform (CDP) that is based on 20 years of advanced research in semantic search and content enrichment. Its flagship product, Fluid Topics, multiplies the value of technical documentation by delivering dynamic product information in a personalized and contextualized form. Antidot helps more than 150 customers around the world to improve their operational efficiency.

For more information:

About CM-CIC Innovation

CM-CIC Innovation is the venture capital subsidiary of CM-CIC Investissement (€3.0 billion in capital), a member of the Crédit Mutuel Alliance Fédérale Group. It invests in companies developing promising technologies. CM-CIC Innovation selects companies with strong growth potential in dynamic sectors such as information technology, telecommunications, electronics, life sciences, new materials and the environment. For more than 15 years, CM-CIC Innovation has been investing – and often reinvesting – its own capital to support innovative companies in their go-to-market efforts. CM-CIC Innovation provides long-term capital support to innovative startups to increase their chances of success.

For more information:

About Ventech

Ventech, a leading European venture capital firm, invests through offices in Paris, Munich and Helsinki into high-growth companies with activities in the digital economy (Enterprise Software, Deep Technology, Marketplaces, Media). In China, Ventech manages a dedicated regional fund with a team based in Shanghai which also offers business development support in Asia for European portfolio companies.

With more than €850m raised since 1998, Ventech has invested in over 150 companies in Europe, China, Russia, and the US.

Current active investments in Europe include successful high-growth companies such as Believe, Botify, Freespee, Launchmetrics, Ogury, and Speexx.

For more information:


10 points to check before choosing your dynamic delivery solution

Dynamic publishing delivers personalized, customized, on-the-fly documentation. It enables evolution from a static scenario—the provider tossing their documentation over a wall to the user—to having a detailed one-on-one conversation with readers.

Developing a publishing portal is not your job. Specialized solutions exist. The following points are meant to help you choose a solution that matches your needs.

  1. Multi-source capabilities with extensive content format support

    Even though you start with a tactical project dealing with only one type of content, you need to think ahead if you want to be prepared for a variety of use cases and stakeholders. Make sure the dynamic content delivery solution can support multiple and diverse input streams, as you identify all data sources that contain information that could be beneficial to users.

  2. Content enrichment capabilities

    Not all content is born alike. Therefore, to deliver consistent information, particularly in a multisource environment, you need tools to clean, align, and enrich the content—particularly the metadata. Because building content processing workflows can be complex, make sure that the dynamic content delivery solution you select offers the necessary functions and can be extended.

  3. Top-notch search engine

    This is the most important aspect of a dynamic delivery solution. Regardless of the money you spend creating content, if users can't find it, it doesn't exist. On the other hand, burying users under tons of irrelevant content won't serve them either. The search engine has to be:

    Semantic: support for grammar and synonyms
    Full-text: every bit of every document must be indexed
    Tuned, tunable, and versatile: relevance tuning is an art, so it has to be good out of the box, but it must be extensible so that additional parameters can be part of the ranking equation (think about user profile, document popularity, freshness, etc.)
    Faceted: filters are automatically extracted from the metadata and can be offered to users for narrowing down their search
    Suggestive: with type-ahead suggestions (autocomplete)
    Typo-tolerant: the spell checker must be capable of learning
    Explicit: each result must show why it’s in the list with keywords and synonyms highlighted
    Multilingual: all the above capabilities must be equally good in any language that matters to you
    Fast: millisecond-quick, regardless of the traffic
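To make the faceting and narrowing requirements concrete, here is a minimal sketch in plain Python (a toy model, not Fluid Topics code; all names and metadata fields are invented for illustration):

```python
from collections import Counter, defaultdict

# Toy corpus: each document carries free-form metadata.
docs = [
    {"title": "Install guide", "product": "X124", "version": "2.0", "lang": "en"},
    {"title": "Guide d'installation", "product": "X124", "version": "2.0", "lang": "fr"},
    {"title": "Release notes", "product": "X124", "version": "2.1", "lang": "en"},
]

def facets(results):
    """Count every metadata value across the result set, grouped by field."""
    counts = defaultdict(Counter)
    for doc in results:
        for field, value in doc.items():
            if field != "title":  # titles are content, not facets
                counts[field][value] += 1
    return counts

def narrow(results, field, value):
    """Apply a facet filter to narrow down the result list."""
    return [d for d in results if d.get(field) == value]

f = facets(docs)           # e.g. f["version"] == Counter({"2.0": 2, "2.1": 1})
english_only = narrow(docs, "lang", "en")
```

The point of the sketch is that facets are not authored by hand: they fall out of the metadata automatically, which is why clean, consistent metadata (point 2 above) matters so much.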

  4. Efficient documentation reading

    Once the user has found the info, how easy is it to read, to navigate inside the document (particularly large books), to follow internal and external links, to retrace their steps? Is multimedia content rendered correctly, and can large images and vector graphics be zoomed? Can you tune the look and feel of the content to match your branding?

  5. Online content interactivity

    Bookmarks, Alerts, Rating, Annotations, Feedback, Personal Books… features that allow your users to customize your content are essential. Pick only the ones you need: too many bells and whistles may clutter up your UX. Define what makes sense in your situation and to your users.

  6. Multi-device, multi-channel publishing output

    Of course, the user interface has to work perfectly on common connected devices (laptops, tablets, smartphones), so it needs to be responsive. Integration capabilities with other solutions such as customer helpdesk tools are essential. Inline/in-device help, chatbots, and augmented reality are not just buzzwords; they are our tomorrow. So your dynamic delivery platform has to be future-proof, not just by tossing in APIs, but by giving you ready-to-use add-ons and extensions.

  7. Open and turnkey publishing solution

    You don't want a bare framework that forces you to spend months developing something that will need to be revamped anyway in 18 months, because everything is evolving so rapidly. Therefore, make sure that the solution you choose is:
    Turnkey: it provides 90% of your needs in a matter of days or weeks, and comes pre-tuned, ready for your brand.
    Open: it offers connectors, add-ons, modules, APIs, etc. It can be extended and it talks to other products so that it can be integrated into your IT landscape.
    Evolving: it has a vision and a roadmap, with the ability to follow trends and needs. AR, VR, virtual agents, predictive… you'll be prepared.

  8. Data driven documentation

    Tracking users' behavior while searching and reading is a unique opportunity to learn about them, their problems, and your products. But for that, analytics must be tailored to documentation, not handled by regular web analytics tools. Check the capabilities of the dynamic content delivery solution: how it tracks and logs, what metrics it provides by default, whether it is extensible, and how the derived insight can enhance your customer support strategy (data visualization, personalization, predictive, etc.).

  9. Online documentation security

    Security should be built in by design at the core of the dynamic content delivery solution (not as an added layer that can be bypassed). It should be everywhere, from search suggestions, to facets displayed, to results and documents accessed, enforced through each web service and every interface. SSO integration with support for multiple simultaneous backends must be ready out of the box. The solution should also provide mechanisms for mapping user profiles to content metadata.
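As an illustration of mapping user profiles to content metadata, here is a toy sketch (hypothetical names and audiences, not the actual Fluid Topics security model):

```python
# Toy model of metadata-based access control: a user profile maps to the
# metadata values that user is entitled to see, and every layer of the
# delivery pipeline (suggestions, facets, results) filters through it.
docs = [
    {"title": "User guide", "audience": "customer"},
    {"title": "Service manual", "audience": "partner"},
    {"title": "Internal runbook", "audience": "employee"},
]

entitlements = {
    "jean": {"customer"},
    "technician": {"customer", "partner"},
}

def visible(user, corpus):
    """Return only the documents whose audience matches the user's profile."""
    allowed = entitlements.get(user, set())
    return [d for d in corpus if d["audience"] in allowed]

# Jean only ever sees customer-facing content, whatever the query is.
assert [d["title"] for d in visible("jean", docs)] == ["User guide"]
```

Because the filter operates on metadata rather than on a URL or page level, the same rule automatically applies to search results, facet counts, and suggestions, which is what "security by design" means in practice.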

  10. Content publishing regulation compliance

    Regulations are putting more and more pressure on every industry. Which ones apply to you? Do you have users in Europe? Then you must comply with GDPR for data privacy, or with Privacy Shield for US users. What about WCAG for accessibility? The list is not going to get any shorter, so it pays to be prepared.

Want to learn more?

These ten points, and much more information on dynamic delivery, are detailed in the white paper Dynamic Delivery: What it is and Why it Matters.
Get the White Paper Now!

New White Paper:
Dynamic Delivery Explained

Dynamic delivery, also known as dynamic content publishing, provides personalized, customized, on-the-fly documentation that lets the reader gain insight through contextual and complete information.

Fresh from the press, Dynamic Delivery: What it is and Why it Matters is a white paper produced by the Fluid Topics team that dives into the principles and benefits of dynamic delivery.

The first section of this white paper helps the reader understand the value brought by dynamic delivery, and why it has become a must-have not only for producers but also for consumers of information.

It then drills down into the benefits of dynamic delivery:

  • How it adds findability to content that is diverse in nature, diverse in source, and not consistently structured.
  • How it brings readability to information by adding context, security, and suitability to delivery channels.
  • How it provides shareability of content through documentation virtualization.
  • How it creates interactivity with users through a better understanding of their behavior.

As a bonus, the white paper also contains tips on how to choose a solution that aligns with an organization’s strategic goals, with 10 key points to check.

Get 24 pages of content-rich, illustrated insights into the why and the how of technical documentation!


How we manage the roadmap

Or “The world is not perfect, but it’s not a reason for not trying to make it better.”

In a previous blog post, we explained how we manage releases in Fluid Topics, but that post didn't tell you much about how we decide what goes into these releases, how we make tradeoffs in the roadmap, and who makes the choices.

This is certainly the most strategic, interesting and yet frustrating exercise for a software vendor.

  • It’s strategic because it shapes the product, defines our positioning, and enables (or hinders) go-to-market options.
  • It’s interesting because it forces us to examine, sort, and articulate all the demands coming from the different stakeholders, whether internal or external (our clients, partners, prospects).
  • It's frustrating because, as Gide said, "to choose is to renounce". We would love to develop all the requested improvements and new features right away, without having to postpone any. But our resources are limited, and we have to make choices. Not to mention technical dependencies (feature A requires feature B to be implemented first).

Interestingly, each internal stakeholder —customer support, professional services, R&D, sales, product manager— represents a force driving the development of Fluid Topics, hence of the company, in one specific direction, and finding the right balance between these forces should propel us with optimized efficiency.

So how do we choose?

Because we develop Fluid Topics for you, we had to find a way of defining the short-term roadmap (3 to 5 months) that is applicable and explainable. We have opted for a game from agile methodology called "Buy a Feature". Here is how it works. As said, we have 4 groups in the company representing different sets of stakeholders with different needs and expectations:

  • The Support team represents existing customers and their demands for improvements (the so-called "feature requests").
  • The Professional Services team represents the new customers and the ongoing projects, focusing on features that are needed for the go-live.
  • The Sales team dreams of functions that would ease the sales process, increase the ROI and unlock market segments.
  • The Product team defends the long-term vision and the major innovations that nobody has (yet) asked for.

Before each session (2 to 3 times a year), each team proposes the list of features that it wants done in the next 4 to 6 months. The complexity of each feature is evaluated by the dev team so that it has a cost. As you can imagine, the total workload of the 4 combined lists represents at least 3 times the capacity that we have.

Here starts the game

The teams gather and each is given the same amount of money. But the total sum distributed is far less than the cost of all the features, and slightly less than the total number of development days available in the next 6 months (roughly 80%). With that money, each team buys features. Because a team doesn't even have enough money to buy all of its own proposed features (maybe 25% of them), it has to negotiate and join forces with other teams. It's a very interesting moment and certainly one of the biggest benefits of the game, because each team explains the value that a feature brings, thereby sharing its concerns and perspective. This helps align the vision and creates transparency and engagement.
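For illustration only, the budget mechanics of the game can be modeled in a few lines of Python (all numbers and feature names are hypothetical, not our actual figures):

```python
# Hypothetical capacity: 4 teams share budgets summing to ~80% of dev days.
capacity_days = 500
budget_per_team = int(capacity_days * 0.8) / 4  # 100 "days" of money each

# Each candidate feature has a cost estimated by the dev team.
features = {"SSO": 60, "PDF export": 30, "New search UI": 120, "API v2": 90}

def pool_and_buy(contributions, feature):
    """Teams pool money on a feature; it is bought if the pool covers its cost."""
    return sum(contributions.values()) >= features[feature]

# No single team can afford the search UI, so Support and Services join forces.
assert pool_and_buy({"support": 60, "services": 60}, "New search UI")
```

The scarcity is the point of the design: because no team can buy everything alone, each purchase forces a conversation about value.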

Once the final list is settled, it is ordered according to technical, organizational and commercial constraints. We are then able to update the roadmap and communicate it to our customers. We know some of them may be frustrated at not seeing their request or suggestion scheduled, but since it's impossible to satisfy everyone, at least we know that we have a method that is balanced, explainable and transparent.

So don't hesitate to reach out to us to feed the roadmap and to help us develop and refine our vision. We cannot guarantee that every request will be implemented immediately, but we can promise that we will always listen and do our best to provide you with the best dynamic content delivery solution on the market.


Upload your content from oXygen to Fluid Topics in one click

Oxygen XML Editor

oXygen is one of the most popular tools for authoring structured content, particularly in the DITA community. With the support of Syncro Soft — the company behind oXygen — we have developed a plug-in that allows oXygen enthusiasts to publish content in one click, directly from the oXygen interface to Fluid Topics.

See how simple it is: a “Publish to Fluid Topics” button is displayed in the oXygen DITA Maps Manager window, and in the right-click menu.

Just click on any of these, and the plug-in does it all for you: it collects all elements needed (map and DITA files, images), creates a package and uploads it securely to your Fluid Topics instance.

Regardless of which back-end storage you use with oXygen (file system, Git or SVN version control system, CMS, …) you can publish directly from your favorite authoring environment. As soon as you have validated your edits, they can be online within seconds.

If you have multiple Fluid Topics instances, one for production and one for staging content for example, this is also handled by the connector: you can create and manage multiple publishing profiles, and select the one to use at upload time.

As for installing the plug-in itself, it takes less than a minute. It just requires oXygen version 18.1 or above. If you are already a Fluid Topics customer and you are interested, contact us and we will be happy to provide you with the plug-in.

If you are an oXygen user and are thinking about how to elevate the value of your nicely engineered content, it’s time for you to look at dynamic delivery and test Fluid Topics. It will transform your world!


Special thanks to George and Radu from oXygen for their support and friendship.


New version management
for Fluid Topics

Starting with the release of v3.2, we are streamlining our version management and release process for Fluid Topics. Please find highlights of this new process below.

We are DevOps

We compile, test and deliver one new version of Fluid Topics every week, usually on Monday. Each released version is valid for production. However, to avoid unnecessary disruption, we only upgrade our SaaS production environment every 2 to 4 versions, depending on needs such as the rollout of a fix or of a new feature awaited by customers. On-premises customers can install any released version, not necessarily one of those we have deployed in our cloud environment.

We are also always ready for a hot-fix: at any time we can correct a critical bug or a security issue, compile and certify a new version, and put it in production in a few hours, even if it's not Monday. And because we deploy frequently (every 2 to 4 weeks), we are fully confident in our processes and tools.

We are Agile

We develop using Agile principles. Our sprints last 3 weeks. Each modification or new feature, whether it can be developed within one sprint or over multiple sprints, is held in a distinct commit or in a separate code branch. This way, we have tight control over what can be included in any new version.

We apply Version Numbering

Fluid Topics releases each have a distinct version number of the form P.M.m where:

  • P is the platform (currently P=3),
  • M is the major version,
  • m is the minor version.

For a given platform, minor and major versions are always backward compatible.

For each “new version of the week” we increase the minor number: 3.2.1, 3.2.2, 3.2.3, etc. As we update release notes synchronously, you can easily follow the news.
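The P.M.m scheme orders naturally when the three parts are compared as integers; here is a minimal sketch of how such version strings can be parsed and compared:

```python
def parse(version):
    """Split a P.M.m version string into a tuple of comparable integers."""
    platform, major, minor = (int(x) for x in version.split("."))
    return platform, major, minor

def compatible(a, b):
    """Within a given platform, major and minor versions are backward compatible."""
    return parse(a)[0] == parse(b)[0]

# Tuple comparison gives the expected ordering: 3.2.1 < 3.2.3 < 3.3.0.
assert parse("3.2.1") < parse("3.2.3") < parse("3.3.0")
# 3.1.42 and 3.2.0 share platform 3, so upgrading between them is safe.
assert compatible("3.1.42", "3.2.0")
```

Comparing parsed tuples rather than raw strings matters: as a string, "3.2.10" would sort before "3.2.9".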

Starting in May 2017 with the availability of v3.2, we will release a major version 3 times a year: in January, May and September. Hence version 3.3 should arrive in September. This release schedule is a significant change: there were 42 minor versions of Fluid Topics v3.1, and nine months elapsed before we released Fluid Topics v3.2 — from 3.1.0 in July 2016 to 3.1.42 in April 2017.

So why are we making this change? Why can’t we stick to continuous delivery? Why do we need minor and major versions anyway? Simple reason: Change Management.

We back Change Management

Some of the features we develop are invisible to your users: a performance improvement, a new admin interface, a technical enhancement, a new supported format, etc. These new capabilities are integrated into Fluid Topics and officially released as soon as they are ready. Hence some minor versions may in fact bring huge things. But usually your users won't notice, because they do not change the UX or the existing interface.

Conversely, some changes or added features may impact the UI and the habits of the users: a button moved, a label changed, a layout redesigned, etc. And you need time to warn them, maybe update some screenshots and documentation, or just prepare a newsletter to advertise the function.

Therefore, visible changes don't get integrated on the fly in minor releases, even if they are ready. We wait until the next major version. And in this case, to give you time to organize the change management, we make the features available in preview environments at least 3 weeks in advance; for those of our SaaS customers with a test environment, we will also update it 3 weeks before the general availability of the major release.

Hence the timeline of a major release is organized like this:


In all cases, whether visible or not, we publish the list of enhancements, modifications and new features 3 weeks before General Availability; and we update the roadmap for the upcoming major version.

Regardless of the delivery schedule, we are still eager to get feedback on each version and feature, and to have our customers help us make Fluid Topics the best Dynamic Delivery solution on the market.


You talkin’ to me?
Language Management in Fluid Topics

In this blog post, we sift through the complexity of multilingual content delivery, especially when not all documents are translated and available in every language.

The situation

Your technical documentation is published in multiple languages: 2, 10, even 20 or more for some of you. The easy case would be to have all documents — meaning here any piece of content such as topics, files, images, entire books, etc. — existing in every language and always in sync. But we know that the reality is far from being that simple.

Usually, your tech writing team creates content in one reference language. Let's say English to keep it simple, because it's the lingua franca of technical documentation, which holds true 90% of the time. This means that even if an expert provides some content in another language, it is likely translated into English. Let's call this the Reference Corpus.

In this example, our reference corpus is made of 20,000 topics assembled in 100 manuals.

The challenge

You are a global company with some end-users who do not speak English. Let's say they speak Japanese and French. You have two possibilities:

  1. You translate everything -> “English Corpus” = “French Corpus” = “Japanese Corpus”.
    Then it’s easy. You are sure that whatever language they speak, your users have access to all information. Your users read in their native language, and you can automatically set the appropriate search language based on the browser language or the user preference.
  2. Only a subset of the entire corpus is translated (the content dedicated to end-users), and part of the documentation is not translated but available only in English (the content for experts and partners for example). Even worse, each country adds some documentation that exists only in its language (regulatory and compliance notices, local features, etc.).
    The situation looks like this:

Now the question is very simple: How should your portal behave? When a user searches for information, which content blocks (see image above) should be searched?

Let’s keep it simple and say that our user, Jean, is French. Jean chooses the French UI. As in case 2, part but not all of your content is translated into French. Jean needs to know how to fix a component and runs a search for “X124 installation”. What should happen?

A- The search is run on all documents (En, Fr and Ja)

Pros:

  • No risk of missing any information since, by default, each and every byte of content is included in the search.

Cons:

  • Results are possibly duplicated, or even irrelevant. Keywords exist in different languages (here in Fr and En) and an ambivalent query could lead to misleading results.
  • Returning results in a language a user does not understand is inappropriate. How would you feel about a page stuffed with Japanese text if you don't read Japanese? Surfacing unnecessary and overwhelming information creates a poor and inefficient user experience. Users could refine their searches by filtering languages in or out. But why not make it automatic (see next case)?

B- The search is run in one selected language (Fr for Jean)

Pros:

  • Consistent search results and an easy user experience, as only French results are displayed.

Cons:

  • Possibly missing information: some content has not been translated into French and exists only in English, which means that some critical information relevant to the situation may be missing.
  • Even worse, the search could return "0 results", making the user believe that there is nothing to help them. And the probability of having no search results increases significantly.

We see that both ways of searching cause problems. And the intermediate approach of searching in the French and English corpora (user and reference languages) leads to the same issues described in A. So what should we do?

It’s obvious that the search should run on a set of documentation made of the French elements plus the English elements applicable but NOT translated. Let’s call this the Full Virtual Corpus:

  • Full, because it now encompasses all content relevant to a user
  • Virtual, because it is made of different blocks of content from different languages
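As a toy model (invented topic identifiers, not actual Fluid Topics internals), the Full Virtual Corpus is essentially a set union:

```python
# Toy corpora keyed by a language-independent topic id.
english = {"install", "admin", "api", "release-notes"}   # the Reference Corpus
french = {"install", "release-notes"}                     # only partly translated

def full_virtual_corpus(user_corpus, reference_corpus):
    """User-language topics, plus reference-language topics never translated."""
    untranslated = reference_corpus - user_corpus
    return {(t, "fr") for t in user_corpus} | {(t, "en") for t in untranslated}

corpus = full_virtual_corpus(french, english)
# "install" is searched in French; "api" falls back to English.
assert ("install", "fr") in corpus and ("api", "en") in corpus
# A translated topic appears only once, in the user's language: no duplicates.
assert ("install", "en") not in corpus
```

The set difference is what makes the corpus "full without duplicates": a topic contributes its French version when one exists, and its English version only when it does not.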


But it's not that simple. Running a search with French keywords on English content does not make much sense, as the keywords and the content won't match. However, we can assume that the user reads English: the choice not to translate everything was made precisely because this specific content is meant for people accustomed to English (e.g., sysadmins in IT).

Then there are two ways of doing it:

  • Low-tech: Execute the query on the French corpus. Warn the user that some content may be missing and that they should also run a query with English keywords (on the English corpus, or on the English part of the Full Virtual Corpus).


  • High-tech: Automatically translate the user query and run two queries at the same time: the initial keywords on the French subpart of the Full Virtual Corpus, and the translated keywords on the English subpart.


The high-tech approach is what should be done, but it can be quite complex. How do you translate the query? Multiple solutions:

  • Rely on external generic translation web services. Certainly the approach requiring the least amount of work, but it may lead to poor results, as those web services are not aware of the specificities and vocabularies of your domain and content.
  • As users search with keywords rather than phrases, you can provide a dictionary of terms — mixing generic dictionaries and specific dictionaries for your business terms — and do word-for-word translation. This requires significant work, as you must create and maintain those dictionaries.
  • Here is the trick: those dictionaries already exist! They are the translation memories. Since part of your content has been translated, this has generated aligned dictionaries of terms and expressions. Just use them.
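A sketch of the word-for-word approach, with a hypothetical bilingual dictionary of the kind a translation memory would yield:

```python
# Hypothetical French-to-English term dictionary, e.g. extracted from
# translation memories (real memories would be far larger and domain-specific).
fr_to_en = {"installation": "installation", "panne": "failure", "carte": "board"}

def translate_query(keywords, dictionary):
    """Word-for-word query translation; unknown terms are kept as-is."""
    return [dictionary.get(word, word) for word in keywords]

query = ["X124", "panne", "carte"]
assert translate_query(query, fr_to_en) == ["X124", "failure", "board"]
# The original keywords hit the French subcorpus, the translated ones hit the
# English subcorpus, and the two result lists are merged for the user.
```

Note how product codes like "X124" pass through untranslated, which is exactly the desired behavior for a technical query.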

Important: Even if we know the language of the user (from their browser or preferences), it's important to keep in mind that there is a difference between the UI language and the content language. The UI may be available in 10 languages, but your documentation in only 3. A German user may set their UI to German but still has to select a different "preferred language for content" if none of your documentation is available in German (as in our example). Either you declare a default fallback language for German ("if UI is in German, then search in English"), or the user selects it manually.
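The UI-language versus content-language resolution described above can be sketched as a simple lookup (example languages and fallbacks only, not a real configuration):

```python
# Content languages actually available, and per-UI-language fallbacks.
content_languages = {"en", "fr", "ja"}
fallback = {"de": "en", "it": "en"}

def content_language(ui_lang, user_preference=None):
    """Resolve which content language to search in."""
    if user_preference in content_languages:
        return user_preference          # an explicit user choice wins
    if ui_lang in content_languages:
        return ui_lang                  # UI language is also a content language
    return fallback.get(ui_lang, "en")  # declared fallback, default to English

assert content_language("de") == "en"   # German UI, no German docs: fall back
assert content_language("fr") == "fr"   # French UI, French docs exist
assert content_language("de", user_preference="ja") == "ja"  # manual choice
```

Keeping this resolution explicit, rather than assuming UI language equals content language, is what prevents the "0 results" trap described in case B.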

As you have read in this blog post, multilingual content delivery is not that simple. It doesn't just mean that you must index all content. It requires understanding and technology that best match your situation. Being on the leading edge of dynamic content delivery, Fluid Topics already embeds the tools that you need. We continue to research and investigate with our partners specialized in translation, such as WhP, to keep pushing the limits and make your content the cornerstone of your customer support strategy.


Fluid Topics CEO’s Take On Trendy Intelligent Content and Semantics

Intelligent Content is the new trendy topic in the technical documentation galaxy. I had the opportunity to attend numerous presentations and panels on this subject during the fall 2015 conferences and tradeshows, like IDW in San Jose, Lavacon in New Orleans and Tekom in Stuttgart, and I want to share some impressions, and some disappointment.

Do you dare to ask what intelligent content is? Because somehow, it speaks for itself: it's in the name. The keywords that gravitate around this concept are: semantic tagging, classified content, taxonomies, cross-silos, linked information, structured content, reusability, multi-channel delivery.

In fact, the answer is pretty vague, and I feel that there is huge confusion between content, structure and semantics. This is what I would like to clarify.

Content is what you say
It’s the story.

Structure is how you say it
Paragraphs, lists, images, tables… and, in the case of structured authoring, how you split your content into multiple fragments so that you can more easily reuse them in different places without having to duplicate them. Like using the same product description in a tech doc, in a blog article and in a catalog.

Semantics is why you say it
It’s the purpose of the content: your intent summarized and formalized through keywords, tags or links. It’s what allows keeping the “right content at the right time” promise.

Content is about language and words.
Structure is about format: Docbook, DITA, Framemaker, etc.
Semantics is about metadata and ontologies.

There is no real link between Content and Structure

I heard: “DITA is more suited to intelligent content than other formats.” I think it is wrong and misleading to say that. There is no real link between those two things. I know that the DITA community will hate me for saying this, but a little controversy is good sometimes (at least it gives an opportunity for new panels and flame wars).

Let’s start with the more obvious assertion: there is no real link between Content and Structure. Writing in Docbook does not make you tell better stories than writing in Microsoft Word. Was Shakespeare a poor writer because he was not writing in DITA?

Your skills as a storyteller have nothing to do with the tool (the structure) that you use for writing.

In the same way, Structure is not bound to Semantics. Adding a tag to a piece of content that is in DITA, Docbook or HTML is no different. It’s still a tag attached to a piece of content.

A “tag” is any additional information added to content to describe its nature, subject, purpose, use, etc. It can be a free keyword, an entry from a controlled list or a taxonomy, or even an RDF assertion complying with an ontology. I won’t dig further into semantic techniques and technologies, but will just give a few real-life examples to illustrate what those tags could be:

  • Subject=installation, service, search
  • ‘Audience=expert’ implies: “this content is meant to be read by experts”
  • ‘Product=Software/AFS’ implies: “this content applies to the software product AFS”
  • ‘See also topic X’ implies: “this content is closely related to content X”

The point I want to insist on is the second part of the phrase: “attached to”. The tags are not inside the content. They are external to it and live separately. Putting them inside the content may even be an error.

Consider these two examples:

  • ‘Audience=expert’ may depend on the context, and the same content may be used by a ‘novice’ as part of a task description.
  • ‘See also topic X’ is a link that becomes invalid and must disappear if topic X is deleted. You shouldn’t have to open every content piece pointing to topic X just to remove the links. Therefore it’s quite clear that the link should be external to the content itself.
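The point can be sketched with a hypothetical external tag store keyed by topic ID (all identifiers below are illustrative): deleting a topic cleans up the inbound “see also” links without ever opening another topic’s source.

```python
# Tags live outside the content, keyed by topic ID (names illustrative).
external_tags = {
    "topic-42": {"subject": "installation", "audience": "expert"},
    "topic-43": {"subject": "service", "see_also": "topic-42"},
}

def delete_topic(topic_id):
    """Drop a topic's tags and any 'see also' links pointing at it."""
    external_tags.pop(topic_id, None)
    for meta in external_tags.values():
        if meta.get("see_also") == topic_id:
            del meta["see_also"]  # the link disappears; content is untouched

delete_topic("topic-42")
print(external_tags)  # {'topic-43': {'subject': 'service'}}
```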

Last but not least, these added descriptions (tags) are relevant only when they are attached to a piece of information that is coherent, focused and self-sufficient.

  • Tagging a word or a phrase does not make sense: it is not a unit of information, certainly not self-sufficient, and too ambiguous.
  • Globally tagging a 300-page manual may not be relevant either: it is too generic and certainly deals with many different subjects.

This is why tagging mostly makes sense on fragments of content (topics), and this is also why intelligent content is closely related to structured content authoring. But this should not be a reason to blur the lines and mix the two subjects. That would be wrong and confusing for people, and it won’t help the necessary move towards intelligent content.

Let’s always keep this distinction clear between Content, Structure and Semantics.

My claim to DITA experts: as explained above, most of the semantics (i.e. the tags) should be managed and stored alongside the content itself, and this is a capability that should be offered by the CCMS (Component Content Management System). The features and richness of this management would differentiate the various CCMSs on the market. But at the end of the day, DITA lacks the ability to export (serialize) this semantic layer. DITA makes it possible to exchange content without losing the structure, through the topic and map DTDs, but there is no recommendation for publishing the semantics. This is certainly a missing piece if DITA wants to be a comprehensive standard for intelligent content.


How To Secure Access To Technical Content

Among the new features of the Fluid Topics 2.7 release, access control is certainly the trickiest one, and it deserves some clarification. Most of our clients are interested in securing access to their content. Some of the motivations and use cases for secure access include:

  • Make part of the documentation publicly accessible, while the rest is limited to customers.
  • Centrally expose all content related to product support, but restrict access to some content to internal users only, based on fine-grained roles and profiles such as support agents, subject matter experts, field operators, etc.
  • Limit access to preview content for upcoming products to partners and resellers.
  • Deliver customer-tailored documents: some content may be specific to one customer based on its product configuration.
  • Make part of the content —such as training material— accessible on a subscription-based model.

Whether the need is immediate or planned for future implementation, this capability should be part of your checklist when searching for a content delivery solution. In fact, many companies choose Fluid Topics for its incredible flexibility and ability to cope with various situations.

To set up this security mechanism, clients often have to answer many questions and sometimes solve functional and technical issues. Even if Fluid Topics makes it easy, there are two issues that clients must address.

1. Understand the difference between authentication and authorization.

Authentication consists of knowing who the user is, and making sure that they are who they claim to be. This is usually achieved during the login phase with a username or email plus a password. In more demanding environments, it can be accomplished through security devices such as smart cards. To avoid forcing the user to authenticate again, Fluid Topics can connect to SSO systems and check whether the user is already authenticated. Fluid Topics also supports multiple standard SSO protocols out of the box (such as SAML, OpenID and OAuth) and can be extended to support proprietary protocols.


Authorization determines the permissions of authenticated users: the resources they are authorized to access and the features they can use. Access rights are usually modeled by the set of profiles and groups a user belongs to, so authorizing a user means obtaining the list of those profiles and groups. Authorization requires organizations to answer essential questions:

  • What is our permission model?
  • What groups, profiles or roles already exist?
  • Where are groups and profiles defined and stored?
  • Do groups and profiles match what we are trying to achieve in terms of content access? Or do we need to create new ones?

Example: You want to limit access to some content to your certified partners and resellers.

  • Do you have such groups defined? Are you sure that the corresponding users are properly tagged as being part of those groups?
  • Where is this information stored? Is it already in the directory used for access control? Or is this information managed in a separate application such as your CRM?

User credentials are usually provided as part of the SSO dialog (through SAML assertions, for example). If the group definitions and membership live in an application, such as your ERP or CRM, that is not connected to your security infrastructure, you will either have to connect these systems or expose them through a Web Service. If the credentials are already defined in the directory also used for authentication, you are ready to go. This is an issue to investigate with your IT department.

2. Define the relationship between credentials and content attributes.

The groups and roles used for defining user credentials usually don’t match the metadata existing on the content side. This is normal, but still it’s an issue.

Content such as technical documentation is usually written in advance, independently of delivery strategy and regardless of the security mechanisms that may be put in place. This content is usually tagged (type, related product, version, tasks, audience, etc.) with some metadata put inside the content, while other metadata is defined and stored “outside” the content, such as in the CMS.


The decision to include metadata inside a piece of content (when the metadata is content-related and not dependent on the context) or alongside it is very important, but it is not the subject of this discussion. In both cases, metadata is usually not included in the groups used for defining user credentials, and it is good practice not to do so. In some cases, the profiles you want to define for content access may not even exist in your directory.

Imagine that you have 10,000 topics and that you have to tag 20% of them as “partners” because they are accessible to partners. One day, marketing decides to create Silver and Gold partnership levels. Do you think it makes sense to manually modify the tagging of 2,000 topics? And by the way, the group “partner” does not exist. Instead, the CRM holds attributes such as “reseller” or “integrator”.

This is a typical situation that you can resolve with a mapping between user groups and existing content metadata. Fluid Topics provides dynamic and flexible mapping capabilities, but you must define the mapping parameters based on your systems.
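Here is a minimal sketch of such a mapping, with hypothetical group and audience names: when marketing invents new partner levels, only the mapping table changes, not the tags on the 2,000 topics.

```python
# Hypothetical mapping from CRM-side groups to content audience metadata.
# Adding Silver/Gold partner levels means adding rows here, nothing more.
GROUP_TO_AUDIENCES = {
    "reseller":       {"partners"},
    "integrator":     {"partners"},
    "silver_partner": {"partners"},
    "gold_partner":   {"partners", "preview"},
}

def allowed_audiences(user_groups):
    """Union of content audiences granted by a user's groups."""
    allowed = {"public"}  # everyone can read public content
    for group in user_groups:
        allowed |= GROUP_TO_AUDIENCES.get(group, set())
    return allowed

print(sorted(allowed_audiences(["reseller"])))  # ['partners', 'public']
```

At query time, the resulting audience set becomes a filter on the content metadata, so no topic ever needs to be retagged when the group model evolves.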

Remember: securing access to your content will require you to answer two questions:

  • What are the existing credentials and how are they made accessible?
  • How do the access groups map to the existing content metadata?

For more information about the other features of version 2.7, see the release notes.

Please give this new version a spin and let us know what you think. Comments are always appreciated and encouraged.