The Promise of Automated Mining of Adverse Drug Reactions in EHRs and Research Publications: NLP in Pharmacovigilance

Dariusz Jacoszek - February 23, 2022

Pharmacovigilance is designed to identify, evaluate, and respond to adverse drug reactions. Adverse drug events are a serious health concern, and pharmaceutical companies are legally obliged to monitor for them continuously. They can range from mild to life-threatening, and the only way to know which ones are occurring is by monitoring them. But how do you monitor adverse drug reactions? You need to track all research papers and electronic health records for potential adverse events related to drugs, including side effects and warnings. However, the large volume of data published in medical papers and stored in Electronic Health Records (EHRs) creates another challenge: how do we find the information relevant for pharmacovigilance?

This blog post will describe how natural language processing (NLP) techniques are used to automate literature mining from research publications and electronic health records for adverse drug reactions and support pharmacovigilance activities.

Present Situation

Identifying, collecting, reviewing, and translating relevant data into information that regulators and firms can use to address safety concerns and inform the public and prescribers currently takes up a lot of time in pharmacovigilance. For now, the sector is dealing with rising data volumes by adding more safety team members and experimenting with various internal management strategies. However, there is a limit to how long this expansion can be sustained without outsourcing to keep up with the rising data burden.

Beyond volume and logistics, a significant obstacle is the performance constraints of humans entrusted with processing substantial amounts of data, affecting the quality and consistency of data interpretation. These restrictions are amplified as teams grow and expand across geographies.

Future Prospects

Early adopters among pharmaceutical companies, health authorities, and technology suppliers have begun testing technologies such as cognitive computing and machine learning to address data volume issues with artificial intelligence. As AI and cognitive computing make their way into production systems and are used at scale, these early experiences can provide cautionary insights and raise crucial concerns to think about. To read more on the subject, head over to our articles on the current state of AI in Pharma and AI in Drug Discovery and Development.

New Ways of Detecting Adverse Drug Events

Most healthcare data is stored in unstructured media types such as emails and online records. Unstructured data on adverse events (AEs) must be pooled and correlated from various divergent and extensive data sources, such as social media, email, online communities, and other digital forms. Most pharmacovigilance and safety specialists must examine AE data manually, which is time-consuming and ineffective. This can hinder clinical studies and postpone the introduction of new treatments to patients.

The advent of artificial intelligence and natural language processing tools in the life sciences business allows these enormous, unstructured data volumes to be transformed into usable insights at previously unheard-of rates.

Efficient AI-Powered Solution for Monitoring Drug Adverse Events - based on Ontotext

The Role of AI and ML in Pharmacovigilance

Artificial intelligence and machine learning are beneficial for reading, processing, and extracting large sets of unstructured data. The pharma industry is still at the beginning of integrating AI-based solutions.

When applied effectively, AI tools have saved significant amounts of time and money, reduced risk, and freed pharmacovigilance professionals from highly manual, routine monitoring tasks. These tools are especially vital for managing growing PV workloads and getting the most out of PV teams’ human resources at a time when talent shortages make it difficult to keep these teams fully staffed.

Applications and Benefits of Introducing AI to PV

Artificial intelligence implementation might be intimidating initially, but it can be tackled in phases. Several AI/ML techniques can increase PV operating efficiency. AI/ML, in conjunction with NLP, unearths and swiftly surfaces data. It can extract medical concepts, substances and drug names, drug use, patient data, and, finally, possible adverse reactions. As a result, PV leaders will spend less time identifying adverse event patterns, including the severity and frequency of AEs. There are currently technologies available that employ AI/ML and NLP to:

  • Automatically extract medical terms and medical case reports.
  • Increase the speed with which you search the literature for pertinent information.
  • Examine social media data and patient forums for AEs.
  • Listen to and absorb audio calls (for example, into a call center) for any references to a company or medicine.
  • Translate large volumes of up-to-date information from one language into another.
  • Convert scanned papers on AEs into actionable structured data.
  • Read and interpret case narratives with little human assistance.
  • Determine whether any patterns in adverse reaction data provide new, previously unknown knowledge that may improve patient safety.
  • Automate case follow-ups to validate the information and capture any missing data.
  • Enhance critical drug safety monitoring processes.
  • Create a more efficient, automated flow for regulatory activities.
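
To make the extraction idea concrete, here is a minimal sketch of scanning narrative text for drug and adverse event co-mentions. The vocabularies are tiny, hand-picked placeholders; a real system would use curated terminologies such as MedDRA (events) and RxNorm (drugs) plus far more robust NLP:

```python
import re

# Illustrative vocabularies only -- stand-ins for curated terminologies.
DRUG_TERMS = {"aspirin", "ibuprofen", "warfarin"}
EVENT_TERMS = {"nausea", "bleeding", "rash"}

def find_drug_event_pairs(text):
    """Return (drug, adverse event) pairs co-mentioned within one sentence."""
    pairs = set()
    for sentence in re.split(r"(?<=[.!?])\s+", text.lower()):
        tokens = set(re.findall(r"[a-z]+", sentence))
        for drug in DRUG_TERMS & tokens:
            for event in EVENT_TERMS & tokens:
                pairs.add((drug, event))
    return pairs

report = "Two days after starting warfarin, the patient reported severe bleeding."
print(find_drug_event_pairs(report))  # {('warfarin', 'bleeding')}
```

Each surfaced pair is only a candidate; a human reviewer or downstream model still has to confirm whether the co-mention actually describes an adverse reaction.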

For more information on the use of artificial intelligence in pharmacovigilance, head over to our recent article: Augmenting Drug Safety and Pharmacovigilance Services with Artificial Intelligence (AI).

Benefits of the Technological Shift

As PV technology progresses with the application of AI/ML, the capacity to track long-term safety implications and the feasibility of specific treatments will become increasingly automated. By shifting this job from human data analysis and processing to AI-based insights, time will be saved in spotting trends from AEs. These patterns are critical for determining which population groups might benefit most from (or should avoid) a particular therapy. They can even pave the way for future discoveries that could lead to better understanding and aid in novel treatments and drug discoveries.

AI/ML will never replace human knowledge and skill; nevertheless, when used properly, these tools can assist in expediting the capacity to process and evaluate data. The resultant actionable insights can help bring pharmaceuticals to market faster and more safely than ever.


Big data, social media sources, medicines safety, pharmacovigilance and AI | Journal of Pharmaceutical Policy and Practice - based on Full Text

Introduction to Natural Language Processing in Pharmacovigilance

Pharmacovigilance seeks to detect, monitor, describe, and prevent adverse drug events (ADEs) associated with pharmaceutical products. The capacity to effectively capture both semantic and syntactic elements in clinical narratives is becoming more critical to detecting ADEs efficiently and reliably.


Various NLP tasks

Since 2000, significant progress has been achieved in algorithm innovation and resource creation. Statistical analysis and ML techniques have gained pace in the automation of ADE mining from EHR narratives. Current cutting-edge approaches for NLP-based ADE identification from EHRs show potential for inclusion in production pharmacovigilance systems.

Because adverse events and drugs are extensively documented in electronic health record narratives, applying natural language processing to clinical narrative mining for ADE identification is vital for pharmacovigilance. Significant progress has been achieved using NLP approaches for EHR-based pharmacovigilance, particularly in statistical analysis, ML-based method creation, and data integration across many heterogeneous sources. There are still challenges in developing NLP methodologies and implementing ADE identification from EHRs, particularly in identifying ADEs induced by off-label medication usage and polypharmacy.

Natural Language Processing as a Dominant Tool in Mining Adverse Drug Events

Natural language processing is a crucial technique in mining adverse drug events from scientific literature, spontaneous reports, and, in particular, EHRs, due to the quantity of information in these data sources. By various estimates, only a fraction of the adverse drug events documented in EHRs is reported to government databases, making EHRs an essential source of ADE-related data. The coded diagnoses in EHRs have limited sensitivity for adverse reactions. Because signs and symptoms, illness status, and severity are all commonly documented as narrative text in EHRs, including information from clinical reports can help with ADE identification.

Furthermore, compared to passive spontaneous reporting systems, clinical notes in EHRs might be a valuable source for proactive pharmacovigilance. Because of the current underuse of EHRs, most research attention has been devoted to mining unstructured clinical narratives for ADE identification. Mining clinical narratives in EHRs supplements spontaneous reporting and leads to a considerable improvement in ADE identification.

Recent works on pharmacovigilance NLP based on clinical narratives attempt to increase the approaches’ accuracy and scalability by limiting human intervention. Researchers have observed three distinct trends during the screening and evaluation process: increased use of statistical analysis and machine learning approaches, integration of assertion categorization and temporal resolution, and merging of numerous data sources for improved pharmacovigilance.

Named Entity Recognition (NER) and Named Entity Disambiguation (NED) of Drug Names and Medical Concepts

Identifying mentions of drugs and medical concepts in text is one of the most fundamental areas of NLP applications for pharmacovigilance. It is also an essential first step in understanding the content of documents concerning ADEs. The Drug Name Recognition (DNR) task involves recognizing drug mentions in a sentence and disambiguating them from other terms that might appear in the text. The task of word sense disambiguation is to determine the correct entities referred to in a given sentence. However, named entity recognition is not limited to drug names and can be applied to any named entity, such as medical conditions, symptoms, adverse drug events, and so forth.

NER and NED are essential for identifying mentions of drugs and medical concepts in text because they allow us to determine which text strings correspond to specific medications or medical concepts. This information can then be used as input for further downstream tasks such as ADE identification.

Most DNR methods are based on string matching or tokenization, followed by pattern-matching techniques. ML approaches used for DNR tasks include support vector machines (SVMs), Bayesian classifiers, decision trees, bagging, boosting, and random forests.
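
As a rough illustration of the string-matching family of DNR methods, the sketch below tokenizes a sentence and greedily matches the longest dictionary entry at each position, mapping each surface form to a canonical name (a simple form of disambiguation). The lexicon and canonical names are invented for the example; real systems draw on resources like RxNorm or DrugBank:

```python
# Maps tokenized surface forms -> canonical drug names (illustrative only).
LEXICON = {
    ("acetylsalicylic", "acid"): "aspirin",
    ("aspirin",): "aspirin",
    ("metformin",): "metformin",
}
MAX_LEN = max(len(key) for key in LEXICON)

def recognize_drugs(text):
    """Greedy longest-match dictionary lookup over a token stream."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    found, i = [], 0
    while i < len(tokens):
        for span in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            candidate = tuple(tokens[i:i + span])
            if candidate in LEXICON:
                found.append(LEXICON[candidate])  # normalize to canonical form
                i += span
                break
        else:
            i += 1  # no match starting here; advance one token
    return found

print(recognize_drugs("Patient takes acetylsalicylic acid and metformin daily."))
# ['aspirin', 'metformin']
```

Note how the multi-word entry wins over any shorter match, which is exactly why the greedy search tries the longest span first.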

In the past ten years, statistical analysis and machine learning-based technologies in NLP pharmacovigilance have received much interest. Both statistical analysis and machine learning techniques, albeit with differing focus, are concerned with the subject of how to learn from data. The advantage of NLP methodologies is that, in addition to making a definite conclusion on drug–adverse event links, they may be used to quantitatively assess the likelihood of a potential drug-disease or drug–symptom relation being a real adverse drug event.

Statistical analysis is roughly defined as an approach that tabulates a statistic by using an explicit expression of variables (an approach based on searching for exact usages of terms from a medical dictionary or the Unified Medical Language System). Statistical analysis of text is a process of going through several NLP tasks one by one to understand the text and extract the data of interest (e.g., tokenization, stemming, lemmatization, or part-of-speech tagging).
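
A hedged sketch of such a step-by-step pipeline is below; the stopword list and suffix-stripping stemmer are deliberately naive stand-ins for real components such as Porter/Snowball stemmers or dictionary-based lemmatizers:

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "was", "with"}

def preprocess(text):
    """Run the text through NLP tasks one by one: tokenize, filter, stem."""
    tokens = re.findall(r"[a-z]+", text.lower())        # tokenization
    tokens = [t for t in tokens if t not in STOPWORDS]  # stopword removal
    stemmed = []
    for t in tokens:
        # crude suffix stripping in place of a real stemmer/lemmatizer
        for suffix in ("ing", "ed", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

print(preprocess("The patient reported recurring headaches and vomiting."))
# ['patient', 'report', 'recurr', 'headache', 'vomit']
```

The point is the staged structure, not the components: each stage can be swapped for a stronger one without changing the overall flow.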

In contrast, deep learning techniques based on neural networks are defined as algorithms that explicitly minimize the difference (in the form of a loss function) between actual and predicted outcomes. These models can perform advanced analytics and operations on a given text and can provide reasonable predictions for unseen examples. A supervised learning algorithm uses training data to learn how to recognize patterns in another dataset.
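
As a toy illustration of that loss-minimization idea, the sketch below trains a tiny logistic-regression classifier from scratch to flag sentences as ADE-related. The two binary features (mentions a drug, mentions an AE term) and the training labels are invented for the example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=300):
    """Stochastic gradient descent on the log-loss: each step nudges the
    weights to shrink the gap between actual and predicted labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical features per sentence: [mentions a drug, mentions an AE term].
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = [1, 0, 0, 0]  # label 1 ("describes an ADE") only when both co-occur
w, b = train(X, y)

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

print(predict([1, 1]) > 0.5, predict([0, 0]) > 0.5)  # True False
```

Production ADE classifiers use far richer features (or learned embeddings) and neural architectures, but the training loop follows the same principle.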

The growth of deep learning-based ADE detection studies in 2015 demonstrates the scientific community’s interest in the topic. This may be no coincidence, considering that the rising availability of EHRs has resulted in significant growth in the amount of structured pharmacovigilance data, which boosts the performance of deep learning methods. On the other hand, random forests are a popular ML model due to their strong classification performance and ease of interpretation compared to other models. While a systematic comparison is challenging, both statistical analysis and ML-based approaches are of particular interest as they tend to improve with time. Despite their late start, deep learning methods seem to be swiftly catching up to the performance of state-of-the-art statistical analytic methods.

Differentiating Assertions and Relations Extraction

Thoroughly understanding the content of documents concerning ADEs is essential for any pharmacovigilance activity. However, this task can be pretty tricky, especially when dealing with large volumes of data. To facilitate extracting information from text, we need a way to classify different types of assertions and relations between entities.

An assertion is simply a statement that something is true. For example, in the sentence “John took aspirin,” John taking aspirin would be an assertion. Assertions are essential because they provide us with basic information about what is happening in a document. On the other hand, relations describe how entities are related to each other. Assertions in a text can be used to identify the key topics that the document is discussing. In addition, relations between entities can be used to discover how those entities are related. For example, if we wanted to know why John took aspirin, we could look for assertions that describe why he took it and then extract the corresponding causal relationship.

An excellent way to think of assertions and relations is as follows: Assertions are like nouns, while relations are like verbs. Just as we use nouns and verbs to build sentences, we can use assertions and relations to create knowledge graphs (a collection of nodes and edges) representing information extracted from text documents.
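
That analogy can be made concrete with a minimal triple store, where assertions become nodes and relations become labeled edges. The entities and relation labels below are illustrative, not the output of a real extractor:

```python
# Tiny knowledge graph: (subject, relation, object) triples extracted from text.
triples = [
    ("john", "TOOK", "aspirin"),
    ("aspirin", "TREATS", "headache"),
    ("aspirin", "CAUSED", "nausea"),
]

def neighbors(graph, node, relation=None):
    """Follow edges out of `node`, optionally filtered by relation label."""
    return [obj for subj, rel, obj in graph
            if subj == node and (relation is None or rel == relation)]

print(neighbors(triples, "aspirin"))            # ['headache', 'nausea']
print(neighbors(triples, "aspirin", "CAUSED"))  # ['nausea']
```

Even this toy graph supports the kind of question pharmacovigilance cares about: asking for `CAUSED` edges out of a drug node surfaces its candidate adverse effects.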

The context of medicine, illness, and symptom references is frequently crucial in evaluating whether a prospective drug-disease or drug-symptom relationship is a legitimate adverse effect. Such context might contain assertions about medical concepts and the chronological relationships between them. Assertions about a mentioned disease, for example, could include whether it is present or absent, probable or unlikely.

Temporal relationships could include whether an illness or symptom happened before the administration of a drug, indicating that the disease/symptom is probably an indication for the drug rather than an ADE, or after the administration of the drug, indicating that it may be a true ADE. To capture these contexts, assertion and temporal information must be extracted from natural language text, accompanied by temporal inference over the collected data.
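
A simplified sketch of that temporal reasoning, reduced to comparing two dates, is below. Real systems must first resolve vague expressions like "a few days later" into timestamps before any such comparison is possible:

```python
from datetime import date

def classify_temporal(drug_start, event_onset):
    """Temporal heuristic: an event that begins only after drug exposure is a
    candidate ADE; one already present beforehand is more likely the
    indication for which the drug was prescribed."""
    if event_onset > drug_start:
        return "candidate ADE"
    return "likely indication"

print(classify_temporal(date(2021, 3, 1), date(2021, 3, 10)))  # candidate ADE
print(classify_temporal(date(2021, 3, 1), date(2021, 2, 20)))  # likely indication
```

Even this crude ordering check filters out a large class of false positives where the "reaction" plainly predates the drug.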


Example of entity extraction and disambiguation and relation extraction outcomes for pharmacovigilance

The relative efficacy and scalability of built-in assertion classification vs. bespoke assertion classifiers (e.g., post-processing rules) is still being investigated, and it remains a challenging task to automate. There are also still issues with temporal resolution, particularly the implicit and underspecified nature of temporal representation in human language.

Integrating Data Sources

In addition to EHR data, NLP pharmacovigilance frequently incorporates many heterogeneous data sources, such as spontaneous reports, clinical trials, product labeling, search logs, social media, chemical and biological databases, and the biomedical literature. Combining these disparate data sources with EHR data can offer more information to NLP pharmacovigilance algorithms.

It should be noted that multi-source techniques have outperformed single-source approaches. Integrating EHR data (discharge summaries, admission notes, and outpatient office visits) with spontaneous reports to identify adverse effects resulted in a considerable performance increase, with an average improvement of more than 30%.
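
A minimal sketch of such multi-source screening is shown below: a drug-event pair survives only if it clears a threshold in both the EHR-derived scores and the spontaneous-report scores. The scores, pair names, and threshold are all invented for the example:

```python
# Hypothetical signal scores per (drug, event) pair from two sources.
ehr_scores = {
    ("drugX", "rash"): 0.8,
    ("drugX", "nausea"): 0.2,
    ("drugY", "bleeding"): 0.9,
}
report_scores = {
    ("drugX", "rash"): 0.7,
    ("drugY", "bleeding"): 0.1,
}

def joint_signals(src_a, src_b, threshold=0.5):
    """Keep only pairs that signal in BOTH sources (intersection screening)."""
    return sorted(pair for pair in src_a.keys() & src_b.keys()
                  if src_a[pair] >= threshold and src_b[pair] >= threshold)

print(joint_signals(ehr_scores, report_scores))  # [('drugX', 'rash')]
```

Requiring agreement across sources trades some recall for precision, which suits a workflow whose goal is a short, highly selective list of candidates for expert review.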

Targeted Detection of Drug-Drug Interactions

One approach prioritized drug-drug interactions (DDIs) detected in EHRs by utilizing four additional data sources: public databases (e.g., DrugBank), spontaneous reports, the literature, and the output of non-EHR DDI prediction algorithms, to increase the stability of the DDI ranking. It was shown that combining numerous independent and vetted data sources improved the precision of ADE identification.

Since 2010, researchers have been combining several data sources for ADE recognition. The increasing proportion of multi-source ADE identification reflects the efficacy of the multi-source technique for enhancing EHR data. The rising availability of big, diverse datasets for ADE identification, along with the growing use of machine learning models, will force academics and clinicians to reconsider pharmacovigilance. New statistical approaches may be required for analysts. Physicians’ clinical decision tools may need to move beyond basic scoring algorithms and into considerably more complicated ones that use machine learning models to filter through many heterogeneous datasets and provide evidence-based ADEs.

If you are curious about how natural language processing works in detail, make sure to read our Definitive Guide to NLP.

Signal Detection Algorithms as a Critical Component in Pharmacovigilance

A key component of pharmacovigilance is data-mining algorithms that can provide reliable signals of possibly unique adverse drug reactions (ADRs). One of the proposals is a signal-detection technique that integrates the Food and Drug Administration’s adverse event reporting system (AERS) with EHRs by requiring signaling in both sources. When the goal is to develop a highly selective ranked collection of candidate ADRs, it is argued that this strategy improves signal detection accuracy.

Computational approaches known as ‘signal-detection’ or ‘screening’ algorithms enable drug safety evaluators to sift through enormous amounts of data to find risk signals of probable ADRs, and they have proved to be an essential element of pharmacovigilance.

The FDA frequently employs a signal-detection engine to generate signal scores (statistical reporting relationships) for all of the millions of drug–event pairings in its adverse event reporting system. Nonetheless, these indicators do not establish a causal ADR link on their own; rather, they are regarded as preliminary warnings that require additional evaluation by domain specialists to prove causation. This further examination often consists of a complex procedure in which drug safety evaluators search for supportive evidence such as temporal linkages, published case reports, and so on.
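
One classic disproportionality statistic used for such reporting-relationship scores is the proportional reporting ratio (PRR); the FDA's own engine relies on related measures, so treat this as a representative example rather than the exact method. The sketch below computes a PRR from a standard 2x2 contingency table, with invented counts:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table:
    a = reports with drug AND event, b = drug without event,
    c = event without drug,          d = neither."""
    return (a / (a + b)) / (c / (c + d))

# Invented counts: 20 of 120 reports for the drug mention the event,
# vs. 100 of 10,000 reports for all other drugs.
score = prr(a=20, b=100, c=100, d=9_900)
print(round(score, 1))  # 16.7
```

A PRR well above 1 means the event is reported disproportionately often for this drug; in practice such a score is only a trigger for expert case review, not proof of causation.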


Signal detection in Pharmacovigilance

Post-marketing Spontaneous Reporting of ADRs

The FDA’s AERS, a database of over 4 million reports of suspected ADRs provided to the FDA by healthcare professionals, consumers, and pharmaceutical firms, is the current cornerstone in post-marketing drug safety surveillance. AERS reports are examined and analyzed by FDA drug safety evaluators using quantitative signal-detection algorithms to discover signals of possibly novel ADRs that require further study. AERS conveys genuine medical issues, covers vast populations, and is easily analyzed. AERS has helped regulatory decisions for almost 6000 marketed pharmaceuticals since its founding.

Nonetheless, AERS suffers from several drawbacks that may impede its practical usage, most notably under-reporting (only 10% of significant ADRs are recorded) and over-reporting (drugs with known and publicized ADRs are more likely to be reported than other medicines). Other constraints include incorrectly ascribed drug–event combinations, missing and incomplete data, duplicate reporting, unexplained causal linkages, and a lack of exposure information.

Mining Electronic Health Records

A history of drug safety incidents, such as the Vioxx case, which resulted in approximately 88,000 cases of myocardial infarction, has emphasized the need to find new data sources and enhanced processing methodologies to develop a more comprehensive pharmacovigilance system. These innovations are based on the increased secondary usage of electronic healthcare data, such as EHRs and administrative claims. Unlike spontaneous reports, electronic healthcare data represents routine medical care collected over lengthy periods. As a result, it offers a more comprehensive record of diseases, treatments, therapies, and potential risk factors.

Electronic healthcare data is also not limited to patients who have experienced ADRs. As a result, it has numerous advantages that may be utilized to supplement SRS-based (spontaneous reporting systems) monitoring in terms of both fair reporting and signal strengthening or confirmation. It also enables active surveillance, which is essential. Retrospective investigations have previously shown that, utilizing this sort of data, the Vioxx case might have been recognized sooner. However, research on the use of electronic healthcare data in drug safety is still in its early phases and has not yet been incorporated into routine pharmacovigilance. To date, electronic healthcare data has mainly been employed in pharmacovigilance to corroborate signals generated by SRSs on an ad hoc basis.

ADR signal-detection techniques have traditionally focused on data from a single source. A growing idea in pharmacovigilance research is that integrating data from many sources can lead to more effective and accurate ADR detection. Depending on the data sources used, how they are combined, and the scientific function they are programmed to deliver, it is generally assumed that the resulting system will either increase the evidence or statistical power of findings or will facilitate new findings that would not be possible with either source separately. Recent investigations have, for example, predicted novel ADRs by combining known ADRs with chemical and ontological information; integrated AERS and literature data to develop quantitative structure-activity relationship (QSAR) models for some significant ADRs; prioritized and enriched ADR signals derived from AERS based on chemical similarity; and linked AERS signals to ADR signals derived through literature mining.

Confrontation With Other EHR Methods

Most automated procedures were created for spontaneous reporting systems (SRSs); in these databases, the original reporter has already checked the data for confounders or other variables. Current initiatives to leverage EHR data, on the other hand, will need to overcome the confounding issue. Other techniques primarily use EHRs to amplify signals discovered in SRSs. In contrast, one strategy entails locating ADRs directly in the EHR. It is thought that this will help find ADRs that might otherwise go unreported to an SRS due to variables such as reporting bias. Others have studied the performance of statistical approaches using simulated EHR data. This technique, however, was developed and validated using real EHR data, including narrative records, rather than simulated data, because simulated data may lack the multilayer confounding that exists in real EHRs.

The suggested technique benefits from not relying on a preset hypothesis to identify ADRs. Potential medication candidates are not discovered and then examined; instead, they are isolated as the last attributable cause remaining. For example, we do not need to designate the medications to be researched in advance, such as “patient on medication X AND patient’s serum lab > Y,” which would require explicitly coding the specified variables for X into the automated process. A significant advantage of this strategy is that not only will ADRs of recognized medicines be identified, but there is also the possibility of discovering novel (e.g., those not found in clinical trials or via other approaches) and previously undiscovered ADRs.

Exploring and Understanding Adverse Drug Reactions

Another advantage of this strategy is that it is not reliant on an aberrant lab signal and may be applied to adverse effects documented elsewhere in the EHR, including textual comments. The EU-ADR (Exploring and Understanding Adverse Drug Reactions) initiative identified 23 significant AEs to monitor in EHRs, only 10 of which are detectable using abnormal laboratory signals (ALS). The remaining 13, which include disorders such as bullous skin eruptions, might be recognized using NLP or ICD-9 codes (although certain illnesses, such as heart valve fibrosis, lack ICD-9 codes). In this situation, the related disease identifier (RDI) system might be utilized to extract cases suspected of being ADRs.

One approach to pharmacovigilance is to investigate the use of disproportionality algorithms in structured health datasets like claims data. Using these data has inherent difficulties since they are relatively restricted and only some parts are tagged, for billing reasons. The NLP part of the solution can utilize the comprehensive data found in narrative notes. This automated solution can be effective in the second stage, which involves identifying ADRs in actual, unstructured EHR data. Finally, it is predicted that several technologies will be required to monitor and identify the extensive range of medications and ADRs that exist.


Recent trends, challenges and future prospects in literature surveillance of Pharmacovigilance - based on Pubrica Knowledge Works


Pharmacovigilance activities are increasingly reliant on automated data mining methods, and natural language processing offers a promising avenue for identifying adverse drug reactions in post-marketing data collected from electronic health records, social media, and traditional literature sources. NLP allows for the identification of ADRs in textual comments and structured data, and the technique described in this article has the potential to discover previously undiscovered ADRs.

In addition, this approach is not reliant on an aberrant lab signal, making it potentially useful for monitoring a wide range of medications and adverse events. Finally, pharmacovigilance activities are becoming increasingly dependent upon automated data mining methods, and natural language processing offers a versatile tool that can significantly affect these efforts.

If you’re looking for an NLP solution that will support your business, head over to our services page dedicated to natural language processing or contact us! Our team of ML experts will take care of your project.


The Definitive Guide to Natural Language Processing (NLP) - nexocode

Augmenting Drug Safety and Pharmacovigilance Services with Artificial Intelligence (AI) - nexocode

Leveraging Natural Language Processing (NLP) for Healthcare and Pharmaceutical Companies - nexocode

How AI and Machine Learning Can Revolutionize Drug Safety Monitoring, Dosanjh, Updesh. 2020, RTInsights

Detection of Pharmacovigilance-Related Adverse Events Using Electronic Health Records and Automated Methods - Haerian, K., D. Varn, S. Vaidya, L. Ena, H. S. Chase, and C. Friedmann. 2012

About the author

Dariusz Jacoszek


Life Sciences Expert


Dariusz is a qualified pharmacist who has completed an Executive MBA at the Polish-American School of Business. He has worked in top pharma companies, involved in activities focused on the operational aspects of drug development and digital health, mainly in the oncology space. Dariusz’s experience spans building central services in Pharmacovigilance (PV), Trial Master File (TMF), and Transformation Management Office (TMO) with an end-user focus, optimization, and a data-driven approach.
As an enthusiast of emerging technologies, he’s keen to find new ways of implementing AI solutions in pharma and to recommend digital solutions addressing customer needs at nexocode.

  10. Processing – any operations performed on personal data, such as collecting, recording, storing, developing, modifying, sharing, and deleting, especially when performed in IT systems.

2. Cookies

The Website is secured by the SSL protocol, which provides secure data transmission on the Internet. The Website, in accordance with art. 173 of the Telecommunications Act of 16 July 2004 of the Republic of Poland, uses Cookies, i.e. data, in particular text files, stored on the User's end device.
Cookies are used to:

  1. improve user experience and facilitate navigation on the site;
  2. help to identify returning Users who access the website using the device on which Cookies were saved;
  3. creating statistics which help to understand how the Users use websites, which allows to improve their structure and content;
  4. adjusting the content of the Website pages to specific User’s preferences and optimizing the websites website experience to the each User's individual needs.

Cookies usually contain the name of the website from which they originate, their storage time on the end device and a unique number. On our Website, we use the following types of Cookies:

  • "Session" – cookie files stored on the User's end device until the Uses logs out, leaves the website or turns off the web browser;
  • "Persistent" – cookie files stored on the User's end device for the time specified in the Cookie file parameters or until they are deleted by the User;
  • "Performance" – cookies used specifically for gathering data on how visitors use a website to measure the performance of a website;
  • "Strictly necessary" – essential for browsing the website and using its features, such as accessing secure areas of the site;
  • "Functional" – cookies enabling remembering the settings selected by the User and personalizing the User interface;
  • "First-party" – cookies stored by the Website;
  • "Third-party" – cookies derived from a website other than the Website;
  • "Facebook cookies" – You should read Facebook cookies policy:
  • "Other Google cookies" – Refer to Google cookie policy:

3. How System Logs work on the Website

User's activity on the Website, including the User’s Personal Data, is recorded in System Logs. The information collected in the Logs is processed primarily for purposes related to the provision of services, i.e. for the purposes of:

  • analytics – to improve the quality of services provided by us as part of the Website and adapt its functionalities to the needs of the Users. The legal basis for processing in this case is the legitimate interest of Nexocode consisting in analyzing Users' activities and their preferences;
  • fraud detection, identification and countering threats to stability and correct operation of the Website.

4. Cookie mechanism on the Website

Our site uses basic cookies that facilitate the use of its resources. Cookies contain useful information and are stored on the User's computer – our server can read them when connecting to this computer again. Most web browsers allow cookies to be stored on the User's end device by default. Each User can change their Cookie settings in the web browser settings menu: Google ChromeOpen the menu (click the three-dot icon in the upper right corner), Settings > Advanced. In the "Privacy and security" section, click the Content Settings button. In the "Cookies and site date" section you can change the following Cookie settings:

  • Deleting cookies,
  • Blocking cookies by default,
  • Default permission for cookies,
  • Saving Cookies and website data by default and clearing them when the browser is closed,
  • Specifying exceptions for Cookies for specific websites or domains

Internet Explorer 6.0 and 7.0
From the browser menu (upper right corner): Tools > Internet Options > Privacy, click the Sites button. Use the slider to set the desired level, confirm the change with the OK button.

Mozilla Firefox
browser menu: Tools > Options > Privacy and security. Activate the “Custom” field. From there, you can check a relevant field to decide whether or not to accept cookies.

Open the browser’s settings menu: Go to the Advanced section > Site Settings > Cookies and site data. From there, adjust the setting: Allow sites to save and read cookie data

In the Safari drop-down menu, select Preferences and click the Security icon.From there, select the desired security level in the "Accept cookies" area.

Disabling Cookies in your browser does not deprive you of access to the resources of the Website. Web browsers, by default, allow storing Cookies on the User's end device. Website Users can freely adjust cookie settings. The web browser allows you to delete cookies. It is also possible to automatically block cookies. Detailed information on this subject is provided in the help or documentation of the specific web browser used by the User. The User can decide not to receive Cookies by changing browser settings. However, disabling Cookies necessary for authentication, security or remembering User preferences may impact user experience, or even make the Website unusable.

5. Additional information

External links may be placed on the Website enabling Users to directly reach other website. Also, while using the Website, cookies may also be placed on the User’s device from other entities, in particular from third parties such as Google, in order to enable the use the functionalities of the Website integrated with these third parties. Each of such providers sets out the rules for the use of cookies in their privacy policy, so for security reasons we recommend that you read the privacy policy document before using these pages. We reserve the right to change this privacy policy at any time by publishing an updated version on our Website. After making the change, the privacy policy will be published on the page with a new date. For more information on the conditions of providing services, in particular the rules of using the Website, contracting, as well as the conditions of accessing content and using the Website, please refer to the the Website’s Terms and Conditions.

Nexocode Team