The Comprehensive Guide to Retrieval Augmented Generation LLMs: Transforming Data into Knowledge


Dorota Owczarek - January 31, 2024

The rise of Large Language Models (LLMs) like GPT-4 or Bard has been nothing short of revolutionary, offering unparalleled capabilities in natural language understanding and generation. However, these sophisticated models are not without their limitations. A notable constraint lies in the finite scope of their training data. For instance, ChatGPT’s knowledge is bounded by a cutoff date, beyond which its awareness of world events, advancements, and current information ceases. This temporal boundary often leads to responses that, while coherent, lack the most recent updates and developments, potentially impacting the relevance and applicability of the information provided.

Addressing this challenge are Retrieval Augmented Generation (RAG) systems, an innovative solution designed to complement and enhance the capabilities of LLMs like ChatGPT. RAG systems address another critical issue prevalent in LLMs – ‘hallucinations’, a term used to describe instances where these models generate plausible but factually incorrect information in the absence of adequate data. By integrating external, up-to-date knowledge sources, RAG systems empower LLMs to deliver responses that are not only contextually rich but also anchored in accurate and current information. This synergy between LLMs and RAGs marks a significant stride in overcoming the inherent limitations of traditional generative models, paving the way for more reliable, informed, and relevant AI-driven interactions.

Retrieval augmented generation enhances Large Language Models by equipping them with external data retrieval capabilities, elevating their intelligence and performance. This guide elucidates how RAG functions, its effects on NLP, and its real-world applications, presenting a deep dive ideal for anyone looking to leverage this powerful AI integration.


Key Takeaways

Retrieval-Augmented Generation (RAG) enhances LLMs by integrating them with external data, allowing for more accurate and contextually relevant responses.

RAG systems consist of a document retriever, an augmentation component, and an answer generation component, transforming large information sets into actionable insights while reducing computational costs and improving performance on knowledge-intensive tasks.

External data in RAG systems enables LLMs to access real-time information beyond their training data, amplifying their capabilities and establishing connections with external data sources.

The mechanics of a RAG system involve loading documents, splitting text into chunks, transforming text into numerical representations, and the interaction between LLMs and vector databases.

RAG’s use cases transform information handling across various domains, including question answering, information retrieval, document classification, and more.

Building a proof of concept for a RAG application is straightforward, but making it production-ready is challenging, necessitating an architectural blueprint for successful implementation.

For businesses looking to implement state-of-the-art RAG - LLM models, partnering with nexocode AI experts is key. With extensive experience in the GenAI space, nexocode can guide you through the complexities of RAG system implementation. Contact nexocode to explore how we can assist in building your production-ready RAG system and elevate your AI strategy.

Decoding Retrieval-Augmented Generation (RAG)

State-of-the-art Large Language Models have revolutionized our interaction with AI technology. These models are trained on massive datasets, encompassing a wide array of general knowledge embedded within the neural network’s weights, also known as parametric memory. However, their vast knowledge base has limitations, particularly when faced with requests for information beyond their training scope. This includes newer updates, proprietary details, or specific domain knowledge, often leading to what’s termed as ‘hallucinations’ or factual inaccuracies.

We have all seen memes with inaccurate or simply made-up responses from ChatGPT. This limitation underscores the need to bridge the gap between an LLM’s general knowledge and the ever-evolving external context, aiming to enhance accuracy and relevance in the model’s responses while minimizing hallucinations.

A traditional solution to this challenge involves fine-tuning the neural network to adapt to specific domains or proprietary information. Although effective, this approach is resource-intensive, costly, and requires significant technical expertise. It lacks the agility to swiftly adapt to new or evolving information. To address this, a more flexible and efficient technique known as Retrieval-Augmented Generation was proposed in 2020 by Lewis et al., a team from Facebook AI Research, University College London, and New York University, in their seminal paper. RAG innovatively integrates a generative model with a retrieval module, allowing the system to pull in additional, up-to-date information from external sources. It is a system that integrates external data with Large Language Models, enabling these models to access and leverage information beyond their original training data. This method not only enhances the model’s responsiveness to current data but also makes it more adaptable and less reliant on extensive retraining.

What does the RAG approach consist of? Three significant components constitute this workflow:

  • Retrieval component: Extracts supplemental context from an external data source to assist the LLM in responding to the inquiry.
  • Augmentation component: The user query and the retrieved supplemental context are joined into a prompt template.
  • Answer generation component: The LLM model forms a response utilizing a prompt enriched with the newly gathered information.
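As a rough illustration, the three components above can be sketched end to end in a few lines. Here `retrieve`, `augment`, and the `call_llm` parameter are hypothetical stand-ins: simple word overlap plays the role of a real vector search, and a stub function plays the role of the LLM.

```python
# Minimal sketch of the three RAG stages. `call_llm` is a hypothetical
# stand-in for any LLM API; word overlap stands in for vector search.
def retrieve(query, corpus, top_k=2):
    """Toy retrieval: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())
    def overlap(doc):
        return len(query_words & set(doc.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:top_k]

def augment(query, contexts):
    """Join the user query and retrieved context into a prompt template."""
    context_block = "\n".join(f"- {c}" for c in contexts)
    return (f"Answer using only this context:\n{context_block}\n\n"
            f"Question: {query}")

def rag_answer(query, corpus, call_llm):
    contexts = retrieve(query, corpus)   # retrieval component
    prompt = augment(query, contexts)    # augmentation component
    return call_llm(prompt)              # answer generation component

corpus = [
    "RAG systems combine retrieval with generation.",
    "Bananas are yellow fruit.",
]
# A stub "LLM" that echoes its prompt, just to make the flow visible.
print(rag_answer("What do RAG systems combine?", corpus, lambda p: p))
```

In a real system, each stage would be backed by an embedding model, a vector database, and an actual LLM call, but the shape of the pipeline stays the same.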

This integration of external data enhances the precision and relevance of the outputs generated by LLMs, allowing them to deliver more accurate and contextually appropriate responses by combining the model’s training knowledge with newly retrieved data.

The Role of External Data in RAG

In Retrieval-Augmented Generation systems, external data plays a pivotal role in expanding the capabilities of Large Language Models. This external data integration allows LLMs to:

  • Access real-time information that goes beyond their initial training datasets, addressing the limitation of outdated or static knowledge.
  • Amplify the LLM’s capabilities by introducing dynamic, up-to-date content through access to relevant documents, thus enhancing the model’s responsiveness and relevance.
  • Bridge the gap between the language model and various external data sources, such as comprehensive document repositories, databases, or APIs, for a richer knowledge base.

This integration fundamentally shifts the nature of knowledge within the RAG system. It introduces a dual-structure approach to knowledge management:

  • Parametric Knowledge: This is the knowledge acquired during the LLM’s training phase, implicitly embedded within the neural network’s weights. It forms the basis of the model’s understanding and reasoning capabilities.
  • Non-Parametric Knowledge: Contrary to parametric knowledge, this involves data stored in an external knowledge source, like a vector database constructed from an internal knowledge base. This separation of factual knowledge from the LLM’s reasoning ability offers a significant advantage. The external knowledge source can be regularly updated and accessed, ensuring that the LLM remains current and accurate in its responses.

By incorporating external data in this manner, RAG systems not only enhance the quality and relevance of LLM outputs but also ensure that these models can adapt and stay up-to-date with the latest information and trends. This approach represents a significant leap in making LLMs more practical and useful for real-world applications where current and context-specific knowledge is crucial.

The Mechanics of a RAG System

RAG systems represent a revolutionary approach in the field of natural language processing, blending the capabilities of LLMs with advanced data retrieval techniques. Here’s a breakdown of the key components and processes involved in a RAG system:

Loading Documents and Splitting Text into Chunks

  • The first step in a RAG system involves loading extensive document sets from various sources.
  • These documents are then segmented into smaller chunks, making the text more manageable for processing. This segmentation is crucial for efficient data handling and ensures that the system can rapidly access and analyze specific sections of text.
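A minimal character-based splitter with overlapping windows might look like the sketch below. Production pipelines typically split on sentence or token boundaries instead; `split_into_chunks` is an illustrative name, not a library function.

```python
def split_into_chunks(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character chunks. Overlapping windows
    ensure a sentence cut at one boundary appears whole in a neighbor."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    step = chunk_size - overlap
    return [text[start:start + chunk_size]
            for start in range(0, len(text), step)]

doc = "RAG pipelines split long documents into chunks. " * 20
chunks = split_into_chunks(doc, chunk_size=100, overlap=20)
print(len(chunks), "chunks; each starts 80 characters after the last")
```

The overlap parameter is a common trade-off: larger overlap reduces the chance of splitting an idea across chunks but increases storage and retrieval cost.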

Transforming Text into Numerical Representations (Text Embedding Model)

  • Central to the RAG system is the transformation of text into numerical representations, a process known as text embedding.
  • Utilizing embedding models such as BERT, GPT, or RoBERTa that generate context-aware embeddings, the system converts text data into numeric vectors, enabling the machine to interpret and compare language.
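To make the idea concrete, here is a deliberately simplified bag-of-words embedding over a toy vocabulary, compared with cosine similarity. Real systems use dense contextual encoders, but the comparison step works the same way.

```python
import math

# Toy fixed vocabulary; a real embedding model has no such restriction.
VOCAB = ["rag", "retrieval", "generation", "banana", "fruit"]

def embed(text):
    """Toy bag-of-words embedding: count vocabulary words in the text.
    Production systems use contextual encoders (BERT-family models)."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

v_query = embed("rag retrieval generation")
print(cosine_similarity(v_query, embed("retrieval and generation in rag")))  # ~1.0
print(cosine_similarity(v_query, embed("banana fruit")))  # 0.0, no overlap
```

Whatever the embedding model, the resulting vectors are compared in this way: texts about similar topics end up close together in vector space.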

Interaction Between LLMs and Vector Databases

  • A pivotal aspect of RAG systems is how LLMs interact with vector databases.
  • Vector databases efficiently store and manage the vectorized text data, providing a structured vector store or index to house transformed document chunks and their associated IDs that LLMs can query.
  • Popular vector stores include FAISS, Milvus, Chroma, Weaviate, Pinecone, or Elasticsearch.
  • This setup allows LLMs to retrieve relevant information quickly, enhancing their ability to generate informed and contextually appropriate responses.
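The store-and-query pattern these databases implement can be sketched with a toy in-memory class. This is a stand-in for FAISS, Chroma, or similar, not their actual APIs.

```python
import math

def _cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

class InMemoryVectorStore:
    """Minimal stand-in for a vector database: keeps vectors alongside
    document IDs and answers nearest-neighbor queries by cosine score."""
    def __init__(self):
        self._entries = []  # list of (doc_id, vector)

    def add(self, doc_id, vector):
        self._entries.append((doc_id, vector))

    def query(self, vector, top_k=1):
        scored = [(_cosine(v, vector), doc_id) for doc_id, v in self._entries]
        scored.sort(reverse=True)  # highest similarity first
        return [(doc_id, score) for score, doc_id in scored[:top_k]]

store = InMemoryVectorStore()
store.add("doc-1", [1.0, 0.0])
store.add("doc-2", [0.0, 1.0])
print(store.query([0.9, 0.1], top_k=1))  # doc-1 is the nearest neighbor
```

Real vector stores add approximate-nearest-neighbor indexing so that queries stay fast over millions of vectors, but the interface — add vectors with IDs, query by similarity — is essentially this.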

The Information Retrieval Component

  • The information retrieval component acts as the system’s investigative tool, tasked with searching through the vector database to find data relevant to a given query.
  • The main goal of the retrieval process is to identify and return document segments that are pertinent to the query received. The exact criteria for what constitutes ‘relevant’ varies depending on the retrieval method employed.
  • This component employs algorithms to scan the database, identifying and retrieving the most pertinent text chunks based on the query context.
  • Retrieval mechanisms in RAG systems employ various search types, each with unique features. ‘Similarity search’ identifies documents closely matching the query based on cosine similarity, while ‘Maximum Marginal Relevance’ (MMR) adds diversity to the results, avoiding redundancy. The system also uses a similarity score threshold method, returning only documents that meet a set minimum score.
  • Additionally, ‘self-query’ or LLM-aided retrieval proves advantageous when queries involve both semantic content and metadata, enabling efficient filtering. Compression is another key method, focusing on reducing document size for enhanced storage and retrieval efficiency. This method, although requiring more LLM interactions, ensures that responses are centered around the most critical information.
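Of these search types, MMR is the least obvious. A minimal sketch of the selection loop, assuming documents are already embedded as vectors, might look like this:

```python
import math

def _cos(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def mmr(query_vec, doc_vecs, top_k=2, lambda_=0.5):
    """Maximum Marginal Relevance: at each step pick the document that
    balances relevance to the query against redundancy with documents
    already selected. Returns indices into doc_vecs."""
    remaining = list(range(len(doc_vecs)))
    selected = []
    while remaining and len(selected) < top_k:
        def score(i):
            relevance = _cos(query_vec, doc_vecs[i])
            redundancy = max((_cos(doc_vecs[i], doc_vecs[j]) for j in selected),
                             default=0.0)
            return lambda_ * relevance - (1 - lambda_) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

query = [1.0, 0.0]
# doc 1 duplicates doc 0; doc 2 is less relevant but adds diversity.
docs = [[0.8, 0.6], [0.8, 0.6], [0.6, -0.8]]
print(mmr(query, docs, top_k=2))  # picks doc 0, then skips the duplicate
```

The `lambda_` knob sets the relevance/diversity trade-off: at 1.0 the method degenerates to plain similarity search, while lower values penalize redundancy more heavily.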

Answer Generation Component

  • The final step in a RAG system involves generating answers based on the retrieved information and the initial query.
  • The LLM synthesizes the retrieved data with its pre-existing knowledge, crafting responses that are not only accurate but also contextually rich and relevant.
  • In RAG systems, the “Stuff” method inserts all retrieved chunks into a single prompt and returns the LLM’s answer directly, ideal for simple queries. However, this approach can struggle with complex queries involving large document volumes.
  • To address this, alternate methods like “Map-reduce,” “Refine,” and “Map-rerank” are used. The “Map-reduce” method individually processes document chunks for answers, then combines them for a comprehensive response. While effective for complex queries, it can be slower and less optimal in some cases. The “Refine” method iteratively updates the prompt for evolving contexts, improving accuracy. The “Map-rerank” method ranks documents by relevance, prioritizing the most pertinent answers.
  • Each method offers distinct advantages and can be selected based on the complexity and nature of the query, enhancing the accuracy and relevance of the language model’s responses.
  • This process is where the RAG system truly shines, merging the depth of LLMs with the specificity of targeted data retrieval to provide comprehensive and precise answers.
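The “Map-reduce” pattern, for example, can be sketched as follows. `call_llm` is a hypothetical placeholder for any LLM call; a trivial stub is used here just to make the data flow visible.

```python
# Sketch of the "Map-reduce" answer-synthesis pattern. `call_llm` is a
# hypothetical stand-in for any LLM API.
def map_reduce_answer(question, chunks, call_llm):
    # Map: ask the LLM about each retrieved chunk independently.
    partial_answers = [
        call_llm(f"Context: {chunk}\nQuestion: {question}") for chunk in chunks
    ]
    # Reduce: combine the partial answers into one final response.
    combined = "\n".join(partial_answers)
    return call_llm(f"Combine these partial answers:\n{combined}\n"
                    f"Question: {question}")

# Stub "LLM" that returns the first line of its prompt, to trace the flow.
def stub_llm(prompt):
    return prompt.splitlines()[0]

chunks = ["RAG grounds answers in retrieved text.", "It reduces hallucinations."]
print(map_reduce_answer("What does RAG do?", chunks, stub_llm))
```

Note that each map step is an independent LLM call, which is why the method parallelizes well over large document sets but costs more calls than “Stuff”.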

The seamless integration of these various stages in the RAG process creates an efficient system capable of automating document handling and producing detailed answers to a diverse range of inquiries.

RAG’s Impact on Natural Language Processing

Undoubtedly, RAG significantly impacts natural language processing. By integrating an information retrieval system into LLMs, RAG enhances the reliability of language models, delivering more relevant responses to users.

RAG Use Cases in Transforming Information Handling

  1. Question Answering: RAG systems are adept at pinpointing pertinent information to respond to queries with precision and brevity, effectively distilling complex data into clear, concise answers.
  2. Information Retrieval: These systems excel at navigating through vast datasets, retrieving relevant information or documents in response to specific queries, thereby streamlining data access.
  3. Document Classification: RAG can categorize documents into designated labels, utilizing context extracted from the corpus to accurately determine their thematic relevance.
  4. Information Summarization: Generating succinct summaries from the relevant details identified in large documents, RAG systems help in condensing extensive information into digestible formats.
  5. Text Completion: Utilizing the context extracted from relevant sources, RAG aids in completing partial texts, enhancing their cohesiveness and relevance.
  6. Recommendation Systems: Offering tailored suggestions or advice based on a user’s prompt, RAG systems provide context-aware recommendations that align with user needs and preferences.
  7. Fact-Checking: RAG is instrumental in validating or debunking statements by cross-referencing them with facts extracted from a comprehensive corpus, ensuring accuracy and credibility.
  8. Conversational Agents: Chatbots and virtual assistants leverage RAG for generating informed and contextually relevant dialogue responses, elevating the quality of user interactions.

From Semantic Search to Accurate Answer Generation

RAG goes beyond merely enhancing semantic search in Natural Language Processing. It bridges the gap between finding relevant passages and generating accurate answers from them, turning passive search results into active, grounded responses.

Addressing Knowledge-Intensive Tasks Efficiently

Natural language processing involves a variety of knowledge-intensive tasks, such as question answering, fact-checking, open-domain QA, textual entailment, textual similarity, and duplicate detection. By mastering such knowledge-intensive tasks, the technology can provide valuable insights and solutions to users.

RAG effectively addresses these tasks by integrating pre-trained parametric and non-parametric memory to produce responses that are influenced by external knowledge sources.

Real-World Applications of RAG

RAG extends beyond theory, boasting a wide range of practical applications. From enhancing search engines to improving customer support and automating content creation, RAG is reshaping the landscape of many industries.

Domain-Specific Knowledge Enhancement

RAG’s ability to enhance domain-specific knowledge is particularly noteworthy. By customizing the system to a domain-specific knowledge base and refreshing all components of the external knowledge source, RAG effectively refines LLMs. The benefit is not limited to the knowledge library itself: it also covers the very specific, niche language that may be used in these kinds of problems and might not be as straightforward for LLMs to comprehend and utilize without the RAG approach.

Streamlining Customer Queries with RAG Models

In the realm of customer service, RAG models emerge as a game-changing force. They optimize customer queries by:

  • Enhancing customer experience
  • Automating routine tasks
  • Ensuring consistency and accuracy in responses
  • Improving the sales process

Architectural Blueprint for Implementing RAG

Embarking on the journey to build a Retrieval-Augmented Generation application can be a venture filled with contrasting experiences. While constructing a proof of concept for a RAG application might seem straightforward, evolving it into a production-ready system presents a complex challenge. The leap from a functional prototype to a scalable, efficient, and reliable RAG application involves navigating a labyrinth of technical intricacies, demanding both strategic planning and robust architectural design.

This section delves into the architectural blueprint essential for implementing RAG systems. It aims to guide through the critical components and considerations necessary to transform a basic RAG concept into a robust, production-grade application. We will explore the key architectural elements, from data processing and model integration to scalability and reliability, providing a comprehensive roadmap for successfully deploying a RAG system in a real-world environment.

Building Blocks of RAG Workflows

RAG workflows consist of three main components:

  1. Retrieval model: This component is responsible for processing user prompts and retrieving relevant information from databases.
  2. Generative model: The generative model generates coherent responses based on the retrieved information.
  3. Data pipeline creation and orchestration workflows: These workflows ensure the smooth flow of data between the retrieval and generative models.

These components work in harmony to provide accurate and coherent responses to user queries.

Selecting the Right LLM for RAG Integration

Choosing the appropriate LLM for RAG integration is of paramount importance and should be based on the following factors:

  • The reliability of the model in pulling in relevant and up-to-date data
  • The quality of the model
  • The computational and financial costs of the model
  • The latency of the model
  • The customization options of the model

Developing an Effective Document Retrieval Strategy

An effective document retrieval strategy is crucial for optimizing the performance of Retrieval-Augmented Generation systems. Here are key tips to consider:

  1. Understand your data: Start by comprehensively understanding the type of documents and information your system will handle. Consider factors like the size, format, complexity, and domain of the documents. This understanding will guide the development of an effective retrieval strategy.
  2. Choose the right retrieval model: Select a retrieval model that aligns with your data characteristics and application requirements. Models vary in their approach to indexing and searching – some prioritize speed, while others focus on accuracy or the ability to handle complex queries.
  3. Indexing strategy: Develop a robust indexing strategy to efficiently manage your documents. This may include segmenting large documents into smaller, more manageable chunks and using metadata effectively. A good indexing strategy ensures faster and more accurate retrieval of information.
  4. Use of advanced embedding techniques: Implement advanced embedding techniques like BERT, Word2Vec, or GloVe to convert your text data into meaningful vector representations. These embeddings help in capturing the context and semantics of the text, essential for effective retrieval.
  5. Optimize query processing: Fine-tune the way queries are processed. This may include implementing natural language processing techniques to understand user intent, employing spell check algorithms, and understanding the context of the query.

Evaluating the Performance of a RAG System

Production-ready RAG applications cannot thrive without a proper evaluation system. Measuring and enhancing the performance of RAG systems is a dynamic and evolving challenge. A quantitative assessment is pivotal to determine whether a RAG system is meeting its intended goals effectively. This assessment hinges on two fundamental components: a well-chosen evaluation metric and a robust evaluation dataset.

The field of RAG performance evaluation is a hotbed of ongoing research and innovation. We are currently witnessing the development and adoption of various evaluation frameworks, each designed to offer a unique lens through which the effectiveness of a RAG system can be measured. Some notable approaches include:

  1. RAG Triad of Metrics: This framework encompasses a trio of metrics, each focusing on a different aspect of the RAG system’s performance. It provides a holistic view, typically assessing the relevance of the retrieved context, the groundedness of the answer in that context, and the answer’s relevance to the query.
  2. ROUGE (Recall-Oriented Understudy for Gisting Evaluation): This metric is widely used for evaluating automatic summarization and machine translation, making it particularly relevant for RAG systems focused on summarization tasks.
  3. ARES (Automated RAG Evaluation System): ARES is a more recent framework designed specifically for evaluating RAG pipelines, scoring both how effectively the retrieval component surfaces relevant context and how faithful and relevant the generated answers are.
  4. BLEU (Bilingual Evaluation Understudy): Traditionally used in machine translation, BLEU can be adapted to evaluate the linguistic quality and coherence of RAG-generated responses.
  5. RAGAs (Retrieval-Augmented Generation Assessment): This is a comprehensive framework that combines aspects of retrieval accuracy and the quality of generated text, offering a nuanced evaluation of RAG systems.
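As a concrete example of the simplest of these, a toy ROUGE-1 recall (the unigram member of the ROUGE family) can be computed in a few lines. Production evaluations would use a maintained ROUGE implementation with stemming and proper tokenization.

```python
def rouge1_recall(reference, candidate):
    """Unigram ROUGE-1 recall: the fraction of reference words that
    also appear in the candidate. A toy version of the ROUGE family."""
    ref_words = reference.lower().split()
    cand_words = set(candidate.lower().split())
    if not ref_words:
        return 0.0
    hits = sum(1 for w in ref_words if w in cand_words)
    return hits / len(ref_words)

reference = "rag grounds answers in retrieved documents"
candidate = "rag grounds its answers in documents"
print(round(rouge1_recall(reference, candidate), 2))  # 0.83: 5 of 6 words hit
```

Recall-oriented scores like this reward coverage of the reference; precision-oriented variants penalize padding, and evaluations usually report both.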

Selecting the right combination of these metrics and developing a suitable evaluation dataset is crucial for a comprehensive assessment of a RAG system. The choice of metrics should align with the specific objectives and functionalities of the RAG application. Moreover, the evaluation dataset must be representative of the real-world scenarios in which the RAG system will operate, ensuring that the performance insights gained are both relevant and actionable.

Challenges in Implementing RAG

Despite being a groundbreaking advancement in AI, RAG comes with its own set of challenges. This section addresses the key issues, including balancing up-to-date data with model stability, mitigating inaccurate responses, and implementing human-in-the-loop flow.

Balancing Up-To-Date Data with Model Stability

In a RAG architecture, it’s critical to strike a balance between up-to-date data and model stability. You need to ensure your knowledge base is regularly updated with the latest information. This helps maintain the relevance and accuracy of the data being retrieved. Techniques such as Hypothetical Document Embeddings (HyDE), semantic caching, and pre/post-filtering help maintain this balance between freshness and stability.
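Semantic caching, for instance, can be sketched as a lookup over previously answered queries by embedding similarity. This is a toy illustration of the idea, not a production cache; threshold and eviction policy are assumptions.

```python
import math

class SemanticCache:
    """Sketch of semantic caching: reuse a stored answer when a new
    query's embedding is close enough to a cached query's embedding,
    skipping a fresh retrieval-and-generation round trip."""
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self._entries = []  # list of (query_embedding, answer)

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = (math.sqrt(sum(x * x for x in a))
                 * math.sqrt(sum(y * y for y in b)))
        return dot / norms if norms else 0.0

    def lookup(self, query_vec):
        for vec, answer in self._entries:
            if self._cosine(vec, query_vec) >= self.threshold:
                return answer
        return None  # cache miss: caller runs the full RAG pipeline

    def store(self, query_vec, answer):
        self._entries.append((query_vec, answer))

cache = SemanticCache(threshold=0.9)
cache.store([1.0, 0.0], "cached answer")
print(cache.lookup([0.98, 0.05]))  # near-duplicate query: cache hit
print(cache.lookup([0.0, 1.0]))    # unrelated query: miss, returns None
```

The threshold is the stability knob: set it too low and users get stale or mismatched answers; too high and the cache rarely fires.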

Mitigating Inaccurate Responses

RAG systems keep inaccurate responses to a minimum through:

  • Data cleaning
  • Exploring various index types
  • Enhancing the chunking approach
  • Summarizing entire documents
  • Storing the summaries to optimize accuracy

Implementing Human-in-the-loop Flow

RAG systems can greatly benefit from a human-in-the-loop approach, ensuring regulatory compliance, accuracy, adherence to ethical frameworks, elimination of bias, and explainability. Incorporate user feedback to continuously improve the retrieval process. Feedback loops can help identify areas where the retrieval strategy may be falling short and provide insights for further refinement.

Peering into RAG’s Future

Looking ahead, RAG is poised to play a significant part in broadening the scope of foundation models and ushering in a new frontier of actionable intelligence. RAG, already transforming how foundation models interact with real-time data, is set to break new ground. Expect to see advancements in domain-specific adaptability, enabling RAG to provide tailored, precise applications across various industries. Moreover, enhancements in natural language processing will make AI interactions more nuanced and human-like, significantly improving user experience.

The future of RAG also hints at a synergistic blend of AI with human intelligence, fostering collaborative problem-solving and innovation. Ethical AI and transparency will become paramount, addressing challenges related to bias and privacy. These developments, coupled with improvements in scalability and efficiency, suggest a future where RAG not only enhances the capabilities of AI systems but also aligns them more closely with human needs and ethical standards, heralding a new era of intelligent and responsible AI solutions.

Building Production-Ready RAG Systems with Nexocode AI Experts

Embarking on the journey to develop a RAG system can be complex and challenging, especially when transitioning from a proof of concept to a production-ready application. This is where partnering with experienced AI specialists becomes invaluable. nexocode AI experts bring a wealth of knowledge and expertise in creating robust, efficient RAG systems tailored to your specific needs.

Our team at nexocode understands the intricacies of RAG technology and is equipped to guide you through every step - from conceptualization to deployment. We focus on ensuring that your RAG system is not only advanced in terms of technology but also aligns seamlessly with your business objectives. Whether it’s enhancing the accuracy of your AI applications, expanding their capabilities, or ensuring they are scalable and ethically aligned, our experts are here to help.

Don’t let the complexities of RAG systems hinder your AI ambitions. Contact us to explore how we can assist in building your production-ready RAG system and take your AI strategy to the next level. Let’s innovate together!

Frequently Asked Questions

What is Retrieval Augmented Generation (RAG)?

Retrieval Augmented Generation (RAG) is a technique that enhances Large Language Models (LLMs) by integrating them with external data sources. It allows these models to access and use real-time, up-to-date information, enhancing their accuracy and contextual relevance.

How do RAG systems improve the performance of LLMs like GPT-4 or Bard?

RAG systems improve LLM performance by supplementing their training data with external, current information. In other words, RAG systems are models which combine pre-trained parametric and non-parametric memory for language generation. This integration helps overcome the limitations of outdated knowledge in the LLMs and reduces the occurrence of factual inaccuracies or hallucinations in the model's responses.

What are the main components of a RAG system?

A RAG system typically consists of a retriever component that extracts additional context from external databases and a generator component that creates responses based on this augmented information.

How does RAG address the issue of 'hallucinations' in LLMs?

RAG addresses 'hallucinations' – instances where LLMs generate plausible but incorrect information – by providing access to external, factual data sources. This ensures that the model's responses are grounded in accurate and current information.

What are some real-world applications of RAG systems?

Real-world applications of RAG systems include improving customer service through more informed chatbots, automating content creation, enhancing domain-specific knowledge in various industries, and providing more accurate information retrieval and summarization.

What challenges are associated with implementing RAG systems?

Implementing RAG systems involves challenges like balancing the recency of data with model stability, mitigating inaccurate responses, and integrating a human-in-the-loop approach for ethical and accurate outcomes.

How can businesses benefit from using RAG systems?

Businesses can benefit from RAG systems by enhancing the quality of customer interactions, improving decision-making through accurate information retrieval, and staying up-to-date with the latest data in their respective fields.

About the author

Dorota Owczarek


AI Product Lead & Design Thinking Facilitator


With over ten years of professional experience in designing and developing software, Dorota is quick to recognize the best ways to serve users and stakeholders by shaping strategies and ensuring their execution by working closely with engineering and design teams.
She acts as a Product Leader, covering the ongoing AI agile development processes and operationalizing AI throughout the business.



Cookies are used to:

  1. improve user experience and facilitate navigation on the site;
  2. help to identify returning Users who access the website using the device on which Cookies were saved;
  3. creating statistics which help to understand how the Users use websites, which allows to improve their structure and content;
  4. adjusting the content of the Website pages to specific User’s preferences and optimizing the websites website experience to the each User's individual needs.

Cookies usually contain the name of the website from which they originate, their storage time on the end device and a unique number. On our Website, we use the following types of Cookies:

  • "Session" – cookie files stored on the User's end device until the Uses logs out, leaves the website or turns off the web browser;
  • "Persistent" – cookie files stored on the User's end device for the time specified in the Cookie file parameters or until they are deleted by the User;
  • "Performance" – cookies used specifically for gathering data on how visitors use a website to measure the performance of a website;
  • "Strictly necessary" – essential for browsing the website and using its features, such as accessing secure areas of the site;
  • "Functional" – cookies enabling remembering the settings selected by the User and personalizing the User interface;
  • "First-party" – cookies stored by the Website;
  • "Third-party" – cookies derived from a website other than the Website;
  • "Facebook cookies" – You should read Facebook cookies policy:
  • "Other Google cookies" – Refer to Google cookie policy:

3. How System Logs work on the Website

User's activity on the Website, including the User’s Personal Data, is recorded in System Logs. The information collected in the Logs is processed primarily for purposes related to the provision of services, i.e. for the purposes of:

  • analytics – to improve the quality of services provided by us as part of the Website and adapt its functionalities to the needs of the Users. The legal basis for processing in this case is the legitimate interest of Nexocode consisting in analyzing Users' activities and their preferences;
  • fraud detection, identification and countering threats to stability and correct operation of the Website.

4. Cookie mechanism on the Website

Our site uses basic cookies that facilitate the use of its resources. Cookies contain useful information and are stored on the User's computer – our server can read them when connecting to this computer again. Most web browsers allow cookies to be stored on the User's end device by default. Each User can change their Cookie settings in the web browser settings menu: Google ChromeOpen the menu (click the three-dot icon in the upper right corner), Settings > Advanced. In the "Privacy and security" section, click the Content Settings button. In the "Cookies and site date" section you can change the following Cookie settings:

  • Deleting cookies,
  • Blocking cookies by default,
  • Default permission for cookies,
  • Saving Cookies and website data by default and clearing them when the browser is closed,
  • Specifying exceptions for Cookies for specific websites or domains

Internet Explorer 6.0 and 7.0
From the browser menu (upper right corner): Tools > Internet Options > Privacy, click the Sites button. Use the slider to set the desired level, confirm the change with the OK button.

Mozilla Firefox
browser menu: Tools > Options > Privacy and security. Activate the “Custom” field. From there, you can check a relevant field to decide whether or not to accept cookies.

Open the browser’s settings menu: Go to the Advanced section > Site Settings > Cookies and site data. From there, adjust the setting: Allow sites to save and read cookie data

In the Safari drop-down menu, select Preferences and click the Security icon.From there, select the desired security level in the "Accept cookies" area.

Disabling Cookies in your browser does not deprive you of access to the resources of the Website. Web browsers, by default, allow storing Cookies on the User's end device. Website Users can freely adjust cookie settings. The web browser allows you to delete cookies. It is also possible to automatically block cookies. Detailed information on this subject is provided in the help or documentation of the specific web browser used by the User. The User can decide not to receive Cookies by changing browser settings. However, disabling Cookies necessary for authentication, security or remembering User preferences may impact user experience, or even make the Website unusable.

5. Additional information

External links may be placed on the Website enabling Users to directly reach other website. Also, while using the Website, cookies may also be placed on the User’s device from other entities, in particular from third parties such as Google, in order to enable the use the functionalities of the Website integrated with these third parties. Each of such providers sets out the rules for the use of cookies in their privacy policy, so for security reasons we recommend that you read the privacy policy document before using these pages. We reserve the right to change this privacy policy at any time by publishing an updated version on our Website. After making the change, the privacy policy will be published on the page with a new date. For more information on the conditions of providing services, in particular the rules of using the Website, contracting, as well as the conditions of accessing content and using the Website, please refer to the the Website’s Terms and Conditions.

Nexocode Team


Want to unlock the full potential of Artificial Intelligence technology?

Download our ebook and learn how to drive AI adoption in your business.