
Fluid Topics Glossary

The Fluid Topics glossary defines Fluid Topics-related terms and abbreviations. This includes concepts related to technical documentation, product knowledge, AI, and more.

A

  • Admin Console: The Fluid Topics Admin Console is an interface that allows an administrator to configure and monitor the portal and its components. You can manage content processing and rendering, UI branding, security, analytics, and user rights and roles, all from the same control panel.
  • AI Bias: AI bias, also known as machine learning bias, is the tendency of AI algorithms to reflect and perpetuate human biases. During the machine learning process, the algorithm may make erroneous assumptions, leading to biased results. The bias and fairness of algorithms usually depend on the corpus of data they are trained on. It’s important to look for and correct any biases you find so your AI doesn’t contribute to prejudices in society.
  • AI Governance: AI governance refers to the frameworks, policies, standards, and practices used to ensure AI tools and systems remain ethical, safe, and fair. From research and development to data training and implementation, AI governance is crucial for achieving compliance, trust, and efficiency.
  • AI Hallucination: A hallucination happens when a Large Language Model, such as a GenAI chatbot, generates outputs that are inaccurate or not grounded in the input data. In other words, the AI produces realistic yet incorrect information. Engineers must watch for these mistakes in order to mitigate hallucinations while using the model. Some notable examples of AI hallucination include:
    • Google’s chatbot Gemini incorrectly claimed that the James Webb Space Telescope took the first image of a planet outside the solar system.
    • ChatGPT invented several court cases that Steven A. Schwartz cited as legal precedents in a brief he submitted in a case. When the judge tried to find the cited cases, he found they did not exist.
  • AI Model: An AI model is a complex set of parameters, sometimes called “weights”, that a predefined machine learning architecture executes, producing a program trained to recognize patterns within a specific data set. These programs are mathematical equations that use data inputs to draw conclusions about real-world processes.
  • AI Prompt: An AI prompt is any input that a user communicates to the AI to generate the intended output. A prompt can be in the form of text, a question, code snippets, or commands. With multimodal models (those that handle images, videos, etc.), the prompt may contain formats other than text. The prompt is essentially the program (mostly written in natural language) that the AI model or Large Language Model interprets to produce its output.
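As a minimal sketch of the “prompt as a program” idea above, a prompt can be assembled programmatically from an instruction and its context. The template and task strings here are illustrative, not tied to any specific AI service:

```python
# Hypothetical sketch: composing a natural-language prompt from parts.
def build_prompt(task: str, context: str) -> str:
    """Combine an instruction and supporting context into one prompt string."""
    return f"Instruction: {task}\nContext: {context}\nAnswer:"

prompt = build_prompt(
    task="Summarize the passage in one sentence.",
    context="Fluid Topics unifies product content from many sources.",
)
# The resulting string is what an LLM would interpret to produce its output.
```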
  • Apache Cassandra: Apache Cassandra is an open-source NoSQL distributed database designed to process large volumes of data.
  • Apache Spark: Apache Spark is an open-source unified analytics engine for large-scale data processing.
  • Application Programming Interface (API): An API is a set of functions and procedures that allow two applications to talk to one another. APIs also enable the creation of applications that interact with the data and features of other applications, services, or operating systems. Fluid Topics’ API-first architecture enables unlimited integration capabilities with your existing systems, tools, and sources. We offer APIs and SDKs to build bespoke connectors for proprietary data sources.
  • Artificial Intelligence (AI): AI refers to the development of theories, techniques, and systems that work together to imitate the cognitive abilities of a human being. It performs tasks including understanding natural language, recognizing patterns, learning from experience, making decisions, solving problems, and more. AI techniques include machine learning, natural language processing, computer vision, and robotics, among others. The ultimate goal of AI is to create systems that can mimic or outperform human capacity in various domains. AI is not more intelligent than humans; it is simply much faster.
  • Authoring Tools: An authoring tool or software is a program used by technical writers, and documentation and learning professionals, to create and arrange content into a standardized structure. Fluid Topics comes ready with a set of processing pipelines to ingest content written with different authoring tools (e.g., MadCap Flare, Author-it, Confluence, Adobe FrameMaker) and easily process it.

C

  • Cascading Style Sheets (CSS): CSS is a style sheet language used for describing the presentation of a document written in a markup language such as HTML or XML. CSS is designed to enable the separation of presentation and content, including layout, colors, and fonts. This separation can improve content accessibility.
  • Chatbot: A chatbot is a computer program made to simulate human conversation over digital devices. While not all chatbots use AI, modern ones often function as complex digital assistants that use Natural Language Processing to understand user queries and provide automatic, personalized, and contextualized responses to them.
  • Clustering: Clustering is the action of grouping a set of objects in such a way that objects in the same group (or cluster) are more similar to each other than to those in other groups. In Fluid Topics it is possible to group together documents that share metadata so that they appear as a single result in the Search page. Clustered documents are easier for users to browse than a long list of individual results.
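The metadata-based grouping described in the Clustering entry above can be sketched in a few lines. The documents and the `product` field are made up for the example; real clustering criteria depend on your metadata model:

```python
from collections import defaultdict

# Illustrative sketch: group documents that share a metadata value so they
# can be presented as a single search result.
docs = [
    {"title": "Install Guide v1", "product": "Alpha"},
    {"title": "Install Guide v2", "product": "Alpha"},
    {"title": "Release Notes", "product": "Beta"},
]

clusters = defaultdict(list)
for doc in docs:
    clusters[doc["product"]].append(doc["title"])

# Each key now stands for one cluster grouping several documents.
```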
  • Community Portal: A community portal is a website or an online platform that provides information, knowledge articles, self-service modules, and a dedicated place for discussion. With Fluid Topics, you can serve your content, in your community portal or software (e.g. Salesforce Service Cloud or Community Cloud).
  • Component Content Management System (CCMS): A CCMS is a relational database with content broken down and stored in components. This variant architecture is highly searchable and robust due to the granularity of the content being managed, which means you can store and easily manage large volumes of content without losing control of your documentation. CCMS are designed to simplify and streamline an organization’s technical documentation process.
  • Connector: A connector is a processing pipeline that ingests any type of content, whether structured or unstructured. Fluid Topics comes with a set of ready-to-use connectors for the most common sources (e.g., MadCap Flare, Paligo, FrameMaker) and formats (e.g., DITA, Word, Markdown, AsciiDoc, XML). Since Fluid Topics is an open platform, ad hoc connectors can also be added to handle specific formats.
  • Content Analytics: Content Analytics encompasses a wide variety of metrics to give you a visual display of exactly what your users are doing with your documentation. Fluid Topics Analytics are designed from the ground up to optimize authoring and improve the user experience. We capture every user interaction with high levels of detail and deep context. You’ll see what content was viewed down to the topic level, and for how long, every keyword search, selected facet, most popular searches and searches with no results, and much more. The main goals of Content Analytics are to identify content gaps, evaluate the alignment of the documentation with the users’ needs and prioritize content work on assets that will be truly useful to your users.
  • Content Enrichment: Content Enrichment is the action of applying new and modern content processing techniques such as natural language processing, machine learning, and AI to add structure, context, and metadata to content and make it more accessible, findable, and useful to both humans and computers.
  • Context-sensitive Help: Context-sensitive help is information provided to users based on the context of the task in which they are involved and the environment they’re in. Fluid Topics provides deep linking functionality that automatically associates help topics with the context in which the user is working, enabling efficient online help.
  • Customer Journey: A Customer Journey refers to the path followed by a customer via different touch points before making a purchase decision.

D

  • Darwin Information Typing Architecture (DITA): DITA is an XML standard that describes the architecture for creating and managing information. It is used for authoring, producing, and delivering technical documentation. Fluid Topics comes ready with a set of processing pipelines to ingest your content in its native format. You can easily publish DITA content without reformatting it first.
  • Data Augmentation: Data augmentation is a technique commonly used in machine learning and deep learning to artificially increase the size of a dataset by applying various transformations to the existing data samples. The purpose of data augmentation is to increase the diversity of the training data, which can help improve the generalization and robustness of machine learning models. It is especially useful in scenarios where the available dataset is limited or lacks diversity.
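The transformations mentioned in the Data Augmentation entry above can be illustrated on a toy “image” represented as a list of rows. This is a deliberately minimal sketch; real pipelines use libraries and many more transformations:

```python
# Toy sketch: create augmented copies of a 3x3 "image" by horizontal flip
# and transposition, enlarging the effective dataset without new samples.
image = [
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
]

flipped = [row[::-1] for row in image]            # mirror left-right
transposed = [list(col) for col in zip(*image)]   # swap rows and columns

augmented_dataset = [image, flipped, transposed]  # three samples from one
```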
  • Deep Learning: Deep learning is a subset of machine learning that allows AI to mimic the way human brains process complex patterns and recognize objects. It uses multi-layered neural networks where each layer helps the AI better understand data patterns. Today, deep learning has various applications, with some of the most common being data refining, fraud detection, computer vision, speech processing, and natural-language understanding.
  • Documentation Portal: A documentation portal is a knowledge center that hosts specific, detailed product content owned by a company. It can either be public or restricted to specific users and may serve several purposes such as product support and maintenance, or sales assistance. Fluid Topics helps companies quickly build rich content experiences through their documentation portal to serve various audiences: internal and external users, support services, partners, and more. In complete autonomy, you can implement an out-of-the-box portal to serve relevant, targeted, and personalized information to your users.

E

  • Embeddings: These are mathematical representations that try to convey meaning in the form of a vector (a list) of numeric values. Embeddings are also called semantic vectors.
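A common way to compare the semantic vectors described in the Embeddings entry above is cosine similarity. The three-dimensional vectors here are invented for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import math

# Minimal sketch: comparing toy "embeddings" with cosine similarity.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

v_cat = [1.0, 0.9, 0.0]
v_dog = [0.9, 1.0, 0.1]
v_car = [0.0, 0.1, 1.0]

# Semantically closer concepts should get a higher similarity score.
assert cosine_similarity(v_cat, v_dog) > cosine_similarity(v_cat, v_car)
```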
  • Enterprise Search: The term “enterprise search” describes the software used to search for information inside a corporate organization. Fluid Topics has developed its proprietary search engine, and our technology scores 25% better in academic benchmarks than classic, open-source solutions such as Elasticsearch and Solr.
  • Explainability: Interpretability refers to the extent to which someone can understand how a model works and predict what decision it will make for a given input. Explainability goes beyond this to understand how the AI made the decision. Both notions are crucial for trust and transparency between humans and AI systems, and they remain a vast research domain since LLMs were not built to explain their decisions nor cite their sources.

F

  • Facets: Facets allow users to filter search results in order to zero in on specific information quickly. When a user selects a filter, Fluid Topics scans the facets associated with each document and topic and displays only those with the corresponding metadata.
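The facet-filtering behavior in the entry above can be sketched as matching documents against every selected facet value. The field names and documents are hypothetical:

```python
# Illustrative sketch of facet filtering: keep only documents whose metadata
# matches every selected facet value.
docs = [
    {"title": "Admin Guide", "product": "Alpha", "version": "2.0"},
    {"title": "User Guide", "product": "Alpha", "version": "1.0"},
    {"title": "API Reference", "product": "Beta", "version": "2.0"},
]

selected_facets = {"product": "Alpha", "version": "2.0"}

results = [
    d for d in docs
    if all(d.get(k) == v for k, v in selected_facets.items())
]
# Only "Admin Guide" carries both selected facet values.
```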
  • Fine-tuning: Fine-tuning is a technique commonly used in deep learning to refine the results of a pre-trained model – typically one that has been trained on a large dataset – by adjusting its parameters to better fit the new data or task at hand. This is done to enhance and improve a model’s capabilities without training a new model from scratch. As a result, the model will be better adapted to work for specialized use cases.
  • Fuzzy Logic: Fuzzy logic is a way to compute degrees of truth. Modern computers are built on Boolean logic, which sees everything as a binary true or false. However, with fuzzy logic, AI can identify a range of logical conclusions that resemble human reasoning. For example, in response to the question “Is it cold outside?” fuzzy logic allows for results like “very little, somewhat, moderately, fairly, very much” rather than a simple “yes or no”. This is useful for Natural Language Processing so the AI can understand semantic relations between concepts that are worded differently.
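The “Is it cold outside?” example above can be sketched as a fuzzy membership function that returns a degree of truth between 0.0 and 1.0. The thresholds are arbitrary choices for the illustration:

```python
# Toy fuzzy membership function: a degree of "coldness" instead of a
# binary cold / not-cold answer. Thresholds are arbitrary.
def coldness(temp_celsius: float) -> float:
    if temp_celsius <= 0:
        return 1.0                        # fully cold
    if temp_celsius >= 20:
        return 0.0                        # not cold at all
    return (20 - temp_celsius) / 20       # partially cold in between

# 10 °C comes out as "moderately" cold rather than a hard yes/no.
assert coldness(10) == 0.5
```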

G

  • Generalization: Generalization refers to a model’s ability to apply past knowledge learned from training data to new, unseen data. This determines how well the algorithm works in new settings.
  • Generative Adversarial Networks (GANs): A GAN is a type of machine learning framework that consists of two neural networks: the generator and the discriminator. The generator takes random noise as input and tries to generate data samples that resemble the real data as output. The discriminator then determines if the output is actually real or fake. This process allows the generator to fine-tune its outputs and create more authentic data. The cycle continues until the discriminator is unable to identify fake data.
  • Generative AI (GenAI): GenAI refers to a category of artificial intelligence that produces new content, including text, images, audio, or code, that mimics human creativity, making it a valuable tool for many industries. It uses datasets to study patterns and then creates new, similar data in response to prompts. Often, GenAI uses Large Language Models to understand and/or produce natural language. Examples of GenAI platforms include ChatGPT and DALL·E 2.
  • Generative Pre-trained Transformer (GPT): GPT models are a type of Large Language Model and framework for GenAI. They use a neural network architecture and deep learning techniques to create human-like, natural language text.

H

  • Human in the Loop (HITL): HITL is a process in which humans oversee and give feedback to AI models during both the training and testing phases. This is important for ensuring models produce accurate, ethical results.

K

  • Keyword Search: A keyword search is based on specific words typed in a query. The search engine retrieves all documents from a database that contain one or several of the keywords in the query.
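The retrieval behavior in the Keyword Search entry above can be sketched as returning every document that contains at least one query term. Real engines add ranking, stemming, and much more; the corpus here is invented:

```python
# Minimal sketch of keyword search over a tiny corpus.
corpus = {
    "doc1": "install the printer driver",
    "doc2": "update firmware on the router",
    "doc3": "printer paper jam troubleshooting",
}

def keyword_search(query: str) -> list[str]:
    """Return ids of documents containing at least one query term."""
    terms = query.lower().split()
    return [
        doc_id for doc_id, text in corpus.items()
        if any(term in text.lower().split() for term in terms)
    ]

# "printer" matches doc1 and doc3; "firmware" matches only doc2.
```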
  • Knowledge Graph: A knowledge graph, or semantic network, is an organized collection and network of real-world entities (i.e., concepts, events, situations) that stores and illustrates the relationships between data entities. Though not new, the term is resurfacing because these data structures are core to AI-powered tools, influencing how information is stored, retrieved, and presented.
  • Knowledge Hub (KHUB): The Knowledge Hub is a central repository of all the product resources owned by a company. It serves as a single access point for all digital channels and ensures that the product information is delivered securely when and where it’s needed. Fluid Topics’ technology collects product content from all sources and formats, unifies it, and transforms it into a smart knowledge hub that provides users with consistent, reliable product information across all touch points.

L

  • Language Models (LMs): LMs are a type of AI model that predict, comprehend, generate, and interpret human language. They use Natural Language Processing and train on large data sets to decipher human languages.
  • Large Language Models (LLMs): An LLM is the model produced by training an algorithm on very large datasets. When executed, it processes a specific input and produces outputs that recognize, summarize, translate, predict, or generate content. LLMs can be adapted for use across a wide range of industries and fields. They’re most closely associated with Generative AI. Developed by OpenAI, ChatGPT is one of the most recognizable applications of an LLM.
  • LLM Gateway: The LLM Gateway, or AI Gateway, refers to the technical layer between app interfaces (chatbots, virtual assistants, in-app troubleshooting, support platforms) and the LLM itself.

M

  • Machine Learning (ML): ML is a subtype of AI that uses data and algorithms to imitate the way humans learn. By continuously learning, its goal is to improve the accuracy of its outputs.
  • Markdown: Markdown is a lightweight markup language for creating formatted text using a plain-text editor.
  • Metadata: Metadata is data that describes other data, providing a structured reference that helps sort and identify attributes of the information it describes. In the field of technical documentation, it can be defined as a piece of information associated with a document, such as a title, version, or product. Fluid Topics uses all metadata available in your content to enhance the search relevance and to provide a more personalized experience.

N

  • Natural Language Processing (NLP): NLP is a computer program’s ability to understand spoken and written human language. The term “natural” contrasts human languages with programming languages (Java, C++, Python, etc.), which are not. NLP allows humans to successfully interact with computers using natural sentences. NLP technology is used in Fluid Topics to enhance search.
  • Neural Networks: A neural network is a type of ML model that replicates how the human brain functions. Its interconnected nodes or neurons are designed to process data and recognize patterns.

O

  • Offline Documentation: Documentation that is accessible without any internet connection, on an internal network or directly on the user’s device. With Fluid Topics Offline mode, it is easy to make your technical content available under any circumstances. Whether your users work in secured environments, have poor connectivity, or no network coverage, Fluid Topics provides them the same content experience online and offline.
  • OpenAPI: The OpenAPI Specification (OAS) defines a standard, programming language-agnostic interface description for APIs. OpenAPI is widely used by developers and, as such, represents a common format for API documentation. Fluid Topics offers a native connector to collect and ingest OpenAPI content.
  • Out-of-the-box Portal: Fluid Topics offers a turnkey portal that can be deployed in minutes. The portal can then be easily configured using our WYSIWYG editor.

P

  • Prompt Engineering: Prompt engineering is the process of creating and optimizing inputs for AI tools. Inputs are natural language text commands describing the task that the user wants the AI to perform. The goal is to refine inputs so that the AI completes a specific task or generates ideal outputs.
  • Prompt Tuning: Prompt tuning is a cost-efficient way to improve an AI model’s outputs. While fine-tuning focuses on a model’s training data, prompt tuning is the process of reworking the instructions given to the AI so that the task is clearer. As a result, the AI produces better results without retraining the model.
  • Prompt-based Learning: Prompt-based learning is a strategy within machine learning that engineers use to train LLMs. This is often referred to as few-shot learning. This strategy uses information from pre-trained LMs so the same model can complete new tasks without needing retraining. This is useful for tasks such as text classification, machine translation, text summarization, and more.

R

  • Recurrent Neural Networks (RNNs): RNNs are artificial neural networks that use sequential data inputs or time series data. These deep learning models are often used for language translation, NLP, speech recognition, and image captioning.
  • Responsive Design: Responsive web design is about creating web pages that look good on all devices. Fluid Topics’ documentation portals provide a fully responsive web design experience.
  • Retrieval Augmented Generation (RAG): RAG is the process of enhancing the outputs of an LLM by allowing it to retrieve data from an external knowledge base. For example, Fluid Topics’ platform enhances an LLM’s outputs with your product content. As a result, the LLM has access to specific, accurate, and up-to-date information without needing retraining.
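The retrieve-then-generate flow in the RAG entry above can be sketched conceptually. The retriever here is a naive keyword overlap and the actual LLM call is deliberately left out; both are stand-ins, not any product’s real implementation:

```python
# Conceptual RAG sketch: retrieve relevant passages, then splice them into
# the prompt that would be sent to an LLM.
knowledge_base = [
    "Reset the device by holding the power button for 10 seconds.",
    "The warranty covers hardware defects for two years.",
]

def retrieve(question: str) -> list[str]:
    """Naive retriever: keep passages sharing at least one word with the question."""
    q_terms = set(question.lower().split())
    return [p for p in knowledge_base if q_terms & set(p.lower().split())]

def build_rag_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_rag_prompt("reset device")
# The prompt now carries the reset instructions as grounding context.
```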
  • Role Prompting: Role prompting is a technique that prompt engineers use to control the AI output style by asking the algorithm to take on a specific role, character, or viewpoint when responding to a question or problem.

S

  • Semantic Search: Semantic search is an information retrieval technique that aims to determine the contextual meaning of a search query and the intent of the person running the search. Fluid Topics leverages more than two decades of research and expertise in data enrichment and semantic search engine technology, which makes it the most relevant search engine for technical documentation on the market.
  • Single Sign-On (SSO): SSO is a centralized authentication mechanism in which the client application fully delegates authentication to a trustworthy external service. At Fluid Topics, our Professional Services team is here to help you with SSO configuration.
  • Small Language Model (SLM): SLMs are AI models that use NLP and are designed to perform specific tasks within a focused domain. SLMs have a relatively small number of parameters, a simpler architecture, and limited computing requirements, which make them ideal for resource-constrained environments. Some popular SLMs are Mistral 7B, Microsoft’s Phi-2, and Google’s Gemma.
  • Software as a Service (SaaS): SaaS is a software licensing and delivery model in which software is licensed on a subscription basis and is generally hosted on cloud systems. It contrasts with “on-premise” models based on installations on the company’s own servers and IT environment. Fluid Topics is available in both SaaS and on-premise models.
  • Software Development Kit (SDK): An SDK is a collection of software development tools in one installable package. Fluid Topics provides you with multiple SDKs to add connectors from any in-house or proprietary source.
  • Static Documentation: Static documentation consists of statically generated web pages or files that are read-only and allow no user interaction or feedback.
  • Structured Content: Structured content refers to information or content that has been broken down into individual component parts and classified using metadata. Creating structured content enables us to assemble, reuse, or personalize it for different users or platforms.

T

  • Taxonomy: Taxonomy for search engines refers to classification methods that improve relevance in vertical search. Taxonomies of entities are tree structures whose nodes are labeled with entities likely to occur in a search query. Fluid Topics leverages your own taxonomy in order to provide the best search experience.
  • Technical Documentation: Technical documentation can be any document created to describe the use, functionality, or architecture of a product, system, or service. Fluid Topics collects and unifies all the technical documentation a company owns, no matter the source and format, and transforms it into a smart knowledge hub that delivers information tailored to the user and the channel.
  • Tokenization: Tokenization is the algorithm that splits a written text into small sequences of characters. For keyword search, one token is one word. But for semantic (similarity) search and embedding creation, a token is just a sequence of characters that can be part of a word or span two words.
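The word-versus-subword contrast in the Tokenization entry above can be illustrated with a crude fixed-length split. Real subword tokenizers (e.g., BPE) learn their splits from data; the fixed 4-character chunking below is only a stand-in:

```python
# Sketch contrasting word tokens with a crude character-chunk split that
# stands in for learned subword tokenization.
text = "tokenization"

word_tokens = "keyword search splits on words".split()

def char_chunks(s: str, n: int = 4) -> list[str]:
    """Split a string into fixed-length character chunks."""
    return [s[i:i + n] for i in range(0, len(s), n)]

subword_like = char_chunks(text)   # parts of a single word become tokens
```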
  • Topic Modeling: Topic modeling is a technique that uses unsupervised learning to detect word clusters and phrase patterns in a document. This textual analysis is used to understand unstructured documents without the help of tags or training datasets.
  • Training: Training is an iterative process where an AI model learns from a data set. The goal is to teach the AI system how to perceive and interpret the data in order to perform a specific task, make predictions, or reach decisions.
  • Transformers: A transformer is a neural network architecture built around a mechanism called attention. Transformers are well suited to processing language because they train faster and can take into account a much longer context than typical RNNs (up to thousands of words). As a result, transformers make it practical to train models with far more parameters, which was nearly impossible with a basic RNN architecture.
  • Turnkey Solution: Turnkey solutions are ready-to-go solutions that are easily deployed in a business. Fluid Topics comes pre-tuned, ready for your brand, and provides you with a working solution for your technical documentation publishing in a matter of days or weeks.

U

  • Unstructured Content: An unstructured document (UD) is any document that is not in a structured format (e.g., a PDF file, a JPG image). Fluid Topics supports all types of unstructured documents and fully indexes the vast majority of them to improve findability. Additionally, Fluid Topics automatically restructures common formats to provide the same content experience as with structured content.
  • User Experience: The user experience is how a user interacts with a product, system or service. It includes a person’s perceptions of utility, ease of use, and efficiency. At Fluid Topics, we put all our efforts into offering the best user experience including a high-level readability, increased content findability, and rich engagement capabilities.

W

  • Web Application: A Web application is an application program that is stored on a remote server and delivered over the Internet through a browser interface. Thanks to a rich API set, Fluid Topics’ functionalities such as searching, reading, configuring or monitoring are designed to integrate with your own web applications.
  • Wiki: A Wiki is a website or database developed collaboratively by a community of users, allowing any user to add and edit content. Wikis are one of the multiple sources of content that Fluid Topics can ingest and restructure.
  • WYSIWYG Editor: WYSIWYG, an acronym for What You See Is What You Get, is a system in which editing software allows content to be edited in a form that resembles its appearance. Fluid Topics has its own WYSIWYG Editor, with out-of-the-box widgets and an easy-to-use interface to help you fully customize your portal.

X

  • XML-based Formats: The Extensible Markup Language (XML) is a simple text-based format for representing structured information: documents, data, configuration, and much more. Most technical writing solutions are based on XML content models (such as DITA or DocBook).

Y

  • YAML: YAML is a human-readable data-serialization language commonly used for configuration files and applications where data is being stored or transmitted. YAML content is one of the multiple sources of structured content that Fluid Topics can ingest.

Z

  • Zero-shot Prompting: Zero-shot prompting, also called direct prompting, refers to using an LLM to execute a task it wasn’t trained for and without any explicit examples of the desired output. In other words, it occurs when models receive prompts that are not part of their training data, requiring them to rely on their pre-existing knowledge and natural language understanding to generate accurate results.