How Artificial Intelligence Solutions Can Keep Track of Network Traffic and Detect Anomalies

Konrad Fulawka - October 25, 2021 - updated on April 25, 2023

Today’s telecom companies are expected to handle, manage, optimize, monitor, and troubleshoot multi-technology and multi-vendor networks in a competitive and unforgiving market with minimal time to resolution and high costs for errors. With the ongoing growth in operational complexities, effectively managing radio networks, current and legacy core networks, services, and transport and IT operations is becoming a formidable challenge.

On the other hand, customers expect flawless service and availability and are quick to seek other options. Operators, therefore, need to reduce manual monitoring and increase network automation. That’s why leading Communications Service Providers (CSPs) are already relying on advanced anomaly detection solutions as the brain on top of the Operations Support Systems (OSS), giving them a holistic view across domains for real-time detection of service-impacting incidents.

Digital Transformation Agenda for Network Behavior Analysis

Recent global telecommunications research, performed by many organizations to monitor and evaluate evolving industry views, captured three clear messages:

1. Digital business models, customer experience, and cost control lead the strategic agenda for 2021 and beyond.

When survey participants were asked to name their top three strategic priorities, three focus areas stood out:

  • Digital Business Models (71%)
  • Customer Experience (61%)
  • Cost control and business efficiencies (53%)
Digital transformation in the telecom industry

These conclusions may appear obvious. Yet addressing the underlying challenges step by step produces a concrete list of optimization tasks on the transformation agenda that telecommunications companies must implement to remain competitive in the market, respond to customer needs, and ultimately secure their own future development.

While the focus on Customer Experience (CX) is well understood as a critical strategic goal of the telecommunications industry, Digital Business Models (DBMs) and services are meaningful in themselves.

As operators move beyond adjacent growth segments, business model adjustments are more critical than ever before if operators are to create winning customer experiences in both legacy and new service domains. At the same time, digital business models are unlocking a new wave of efficiencies.

2. Process automation leads the list of long-term enablers for proactive security.

Process automation, quoted by 71% of respondents, and analytics, cited by 51%, are expected to lead the way as enablers of Customer Experience improvements. With process automation, the focus is likely to be mainly on cost reduction, though some telcos are looking increasingly at the opportunities it opens up in driving monetization and growth.

Software-Defined Networking (SDN) and related IT technologies, such as Network Functions Virtualization (NFV), will also have a growing role to play. The benefits being targeted here center on increased agility and efficiency, with upsides in terms of improved network security and more flexible vendor relationships ranking further down the list of projected outcomes.

While the industry recognizes the benefits of SDN, the concept arguably deserves a higher place on the list of technology enablers. At the same time, respondents highlight newer concepts such as Machine Learning (ML) and Artificial Intelligence (AI) as drivers of long-term efficiency gains.

Automation journey in network monitoring: from traditional solutions through automation towards predictive network behavior analysis and cognitive IT security

3. Analytics and virtualization are the top innovation drivers, but legacy IT systems and a lack of skills are acting as brakes.

When asked about the most critical enablers of innovation, analytics-based insights and virtualized networks and IT were named the most desirable drivers of innovative capabilities.

The directions mentioned above cannot develop without talent. Only skilled employees who can push technological boundaries while keeping legacy systems running can meet the technical challenges of building solutions that utilize sophisticated algorithms.

Meanwhile, opening up Application Programming Interfaces (APIs) is underscored as an innovation driver. Going forward, APIs and engagement with developers may require more consideration as part of digital business model development.

A Holistic Approach to Advanced Security Intelligence and Network Behavior Analysis

For new digital and operational models to be effective, telecom operators must be supported by ubiquitous analytics and advanced security intelligence-based solutions. This will mean operators looking beyond the traditional analytics domain in developing customer insight and working out how to weave network analytics into digital service propositions.

The broad spectrum of analytics use cases open to telcos relies on:

  • Network Operations and Behavior - Capacity planning, Real-time optimization, Network monitoring, Protection and fault detection, Intrusion detection systems, Intrusion prevention systems.
  • Operations - Improve Customer Experience (CX), Service Level Agreement (SLA) Management, Billing, and revenue assurance.
  • Customer - Personalized services, Plan Optimization, Social analysis.
  • Marketing - Targeted content, Campaign Management, Upsell, and Cross-sell.
  • Digital services - Selling customer data to third parties, Partner Performance management, AI managed services.
A holistic approach to network behavior analytics and network operations. Analytics and virtualization are the top innovation drivers, but legacy and a lack of skills are acting as brakes

Each brings its own potential benefits and is at a different level of implementation maturity. To widen analytics use cases successfully, more sophisticated strategies are crucial. These require the following improvement and optimization steps:

1. Speed the execution of analytics initiatives

Leverage metadata as part of data integration and centralization to improve decision-making. Only detailed awareness of the data and a transparent IT security landscape make it possible to move forward.

2. Remove barriers to use case extension

Improve communication with customers and stakeholders and improve data collection and reuse.

3. Widen analytics competencies

Prioritize dynamic partnerships that can help add analytics and proactive detection to digital services capability.

Ever-Growing Mobile Data Network Traffic

According to the latest insights, total global mobile data traffic – excluding traffic generated by Fixed Wireless Access (FWA) – reached 49 EB per month at the end of 2020 and is projected to grow by a factor of close to 5, reaching 237 EB per month in 2026. Including FWA traffic, total mobile network traffic stood at 58 EB per month at the end of 2020.

The total mobile network traffic is forecast to exceed 300 EB per month in 2026. Traffic growth can be very volatile between years and vary significantly between countries, depending on local market dynamics.
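
These forecasts imply a steep but steady compound growth rate. A quick back-of-the-envelope check using the round figures above (a sketch for illustration, not part of the original forecast):

```python
# Implied growth from the figures quoted above:
# 49 EB/month at end of 2020 growing to 237 EB/month in 2026.
start_eb, end_eb, years = 49, 237, 6

factor = end_eb / start_eb            # ~4.84, i.e. "a factor of close to 5"
cagr = factor ** (1 / years) - 1      # ~0.30, roughly 30% growth per year
print(f"growth factor: {factor:.2f}, implied CAGR: {cagr:.0%}")
```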

Mobile data traffic growth (in EB per month)

By year-end 2026, 5G subscriptions alone are expected to reach approximately 3.5 billion, with roughly 60 percent population coverage.

Mobile and Computer Networks of the Future

While industry forecasts continue to highlight an exponential increase in data demand due to 5G network deployments, operators will also have to ensure that the network of the future is responsive to a range of use cases. Latency and bandwidth requirements will vary widely depending on the specific application involved, meaning that operators must provide a range of capabilities to meet the needs of consumers and enterprises of the future.

Network monitoring system (NITO solution)

Future technologies will enable a fully digitalized, automated, and programmable world of connected humans, machines, things, and places. All experiences and sensations will be transparent across the boundaries of physical and virtual realities. Traffic in future networks will be generated not only by human communication but also by connected, intelligent machines and bots embedded with Artificial Intelligence (AI). As time goes on, the percentage of traffic generated by humans will drop as that of traffic generated by machines and computer vision systems – including autonomous vehicles, drones, and surveillance systems – rises.

Artificial Intelligence to the Rescue

AI has become a vital enabler of the digital transformation journey for service providers in the telecoms industry, providing them with the insights and capabilities they need to be more agile and take a more software-centric approach to their role.

Telcos currently implementing AI tend to focus on:

  • Optimizing existing networks, streamlining network monitoring, improving operations, and building future networks.
  • Improving sales and marketing.
  • Enhancing the customer experience.

All of these are concerned with improving current business processes — an understandably attractive area of opportunity for telcos under increasing pressure to reduce operating costs (e.g., cost per transferred bit) and enhance performance (e.g., response times) across their businesses. Vendor solutions in the business process domains are also more mature than in other areas of AI, giving telcos the confidence that they can unlock benefits in the short term.

Artificial intelligence and machine learning for cognitive network infrastructure and smart network operations

Solutions

Telco infrastructure is a complex heterogeneous network. To optimize operations, ensure availability and reliability, and deliver more business value, telcos need to stay on top of hundreds of metrics. But with the ongoing growth in operational complexities, effectively managing and monitoring connections, devices, radio networks, current and legacy core networks, services, and transport and IT operations is becoming a considerable challenge.

Key elements of the Telecom infrastructure stack

The concept known as Self-Organizing Networks (SON) has been developed for modern radio networks, encompassing different aspects of network operation. In such highly complex and dynamic networks, changes to the Configuration Management (CM) parameters of network elements can unintentionally affect network performance and stability. To minimize accidental effects, it is crucial to coordinate configuration changes before they are carried out and to verify their impact promptly.

The SON framework includes three main functionalities that describe network characteristics:

1. Self-Configuration

This operation aims at the automation of the installation procedures. For instance, a newly deployed evolved Node B (eNodeB) can be automatically configured based on its embedded software and the data downloaded from the network without any human intervention.

2. Self-Optimization

Self-optimization is the process of the automatic tuning of the network based on performance measurements.

3. Self-Healing

Self-healing refers to detecting network issues and solving them in an automated and dynamic manner.

Self-Organizing Network framework

The requirements to develop the SON functionalities can be classified broadly into the following two categories: technical and business needs.

The purpose of specifying the technical requirements is to help develop novel algorithms and functionalities and highlight the relevant network characteristics for self-organization. The list of the technical requirements to be addressed comprises performance and complexity, stability, robustness, timing, interaction among SON functionalities, architecture and scalability, and required inputs.

Defining the business requirements helps consider factors related to the operational costs involved and incorporate them while developing the solutions. These requirements can be broadly classified into cost efficiency and deployment. As a solution for the future, SON, in addition to self-organization and management optimization, will make it easier for the networks of the future to handle any kind of malicious or unusual event.

Network Behavior Anomaly Detection

Many data sets (logs, configurations, mobile traffic, etc.) continuously stream from telecommunication networks. They are referred to as “Big Data,” a term that describes the large and distributed nature of these data sets.

There are many statistical definitions of Big Data. Most of them describe it as high-volume, high-velocity, and high-variety data sets that demand cost-effective, novel data analytics for decision-making and for inferring valuable insights. In recent years, the core challenges of Big Data have been widely established. These are contained within the five V’s of big data: Value, Veracity, Variety, Velocity, and Volume.

Value refers to the benefits associated with data analysis; veracity refers to the accuracy of the data; and variety refers to the many types of data, such as structured, semi-structured, or unstructured. Volume is the amount of data accumulated (i.e., the size of the data) - the larger the dimensionality of the data, the larger the volume.

Dimensionality refers to the number of features, attributes, or variables within the data available for analysis. By contrast, velocity refers to the “speed” at which the data are generated; data arriving at high velocity may also contain many dimensions. These elements of the current definition of Big Data address its fundamental challenges.

Anomaly detection aims to detect abnormal patterns deviating from the rest of the Big Data, called anomalies or outliers. High dimensionality creates difficulties for network anomaly detection. When the number of attributes or features increases, the amount of data needed to generalize accurately also grows, resulting in data sparsity, in which data points are more scattered and isolated.
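
To make the dimensionality problem concrete, one common mitigation is to project high-dimensional KPI vectors onto a few principal components and score points by reconstruction error. A minimal sketch, assuming synthetic, illustrative data:

```python
# Minimal sketch: reduce dimensionality with PCA fitted on anomaly-free
# history, then flag new points with an unusually large reconstruction
# error. Data, component count, and threshold are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X_train = rng.normal(size=(1000, 50))   # 1000 samples of 50 KPI-like features
X_new = rng.normal(size=(10, 50))
X_new[:3] += 4                          # inject three obvious outliers

pca = PCA(n_components=5).fit(X_train)

def recon_error(X):
    """Distance between each sample and its low-dimensional reconstruction."""
    return np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)

baseline = recon_error(X_train)
threshold = baseline.mean() + 3 * baseline.std()
print("flagged:", np.flatnonzero(recon_error(X_new) > threshold))  # -> [0 1 2]
```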

What is Anomaly Detection?

The problem has taken many names, such as “anomaly detection”, “outlier detection”, “novelty detection”, “intrusion detection”, and “peak detection”. There is no universally accepted definition of an anomaly or outlier. In general, Anomaly Detection (AD) identifies unexpected items or events in data sets. From a statistical view, assuming a distribution of events, anomalies are events considered unlikely with respect to a defined threshold. There is also a more event-oriented view of anomalies, in which anomalous events have unexpected and typically adverse effects on the processes we are interested in.
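
The statistical view translates directly into the simplest possible detector: estimate the distribution of a KPI from normal history and flag new samples beyond a chosen threshold. A minimal sketch, assuming synthetic, illustrative values:

```python
# Minimal sketch of the statistical view: learn mean/std from normal
# history, then treat samples beyond k standard deviations as "unlikely
# events". The synthetic history and test values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
history = rng.normal(loc=98.0, scale=0.5, size=500)  # e.g. call success rate (%)
mu, sigma = history.mean(), history.std()

def is_anomaly(value, k=3.0):
    """An 'unlikely event' relative to the learned distribution."""
    return abs(value - mu) / sigma > k

for value in (98.3, 97.6, 62.3):  # the last sample is a clear outlier
    print(value, "->", "ANOMALY" if is_anomaly(value) else "ok")
```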

Typical workflow for Network Anomaly Detection

Anomaly Detection can be used to achieve improvements in essential areas, such as:

  • Reducing turnaround time for application errors
  • Ensuring high solution performance
  • Identifying security threats before they happen

In the mobile networking industry, the definition of anomalies is tightly related to business considerations. An anomaly is a data sample pointing to an event or a dysfunctional (software/hardware) component resulting in a financial loss, either directly or indirectly.

This applies to many businesses, and it is especially significant in telecommunications. As the industry expands into 5G and Industry 4.0, there is a growing need to manage and monitor large amounts of data coming from increasingly complex systems. No operator can manually extract insights from the fluctuations of thousands of different performance indicators; with such interconnected processes, anomalies become impossible to spot with the naked eye, even for domain experts. A typical instance of an anomaly would be a sudden spike in base station load or an increase in call drops in a network.

Owing to the recent developments in Machine Learning and AI, there is a growing need to democratize Anomaly Detection frameworks for a wide range of users. Businesses will need their people to become more data-savvy in different aspects of their work, ranging from domain experts to business analysts to software engineers.

What Are Anomaly Detectors Used For?

Before deploying an Anomaly Detection system, it’s essential to set realistic expectations. The naive expectation is a framework that will:

  • Automatically detect unusual changes in network behavior;
  • Predict major failures with 100% accuracy;
  • Provide easy-to-understand root cause analysis and advanced security intelligence so that service providers know exactly how to fix the issues at hand.

In reality, network behavior analysis is not that easy:

  • No anomaly detector can provide 100% correct yes/no answers. False positives and false negatives will always exist, and there are trade-offs between the two. Previously unknown security threats that bypass traditional security may remain undetected;
  • No anomaly detector can provide 100% correct root cause analysis, owing to low signal-to-noise ratios and correlations between performance indicators. Service providers must often infer causality by combining the results of network traffic anomaly detection with their domain or institutional knowledge.

Additional challenges make this task difficult:

  • The amount of data for training and testing the model may be limited, and it may not be labeled (i.e., we don’t know which data points are anomalies). Machine Learning, deep learning, and other sophisticated algorithms typically require large amounts of data. Moreover, since network anomalies are by definition statistically unlikely during regular traffic, datasets are usually imbalanced (there are far more occurrences of normal behavior than of anomalous behavior or network attacks), which presents additional challenges in training and evaluating models that accurately identify or predict suspicious behavior; see the evaluation sketch after this list.
  • Anomaly detectors may be built on dynamic systems with rapidly growing user bases. As a result, they have to adapt their behavior over time, learning to detect previously unknown security threats as the underlying IT environment evolves.
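
Class imbalance is also why raw accuracy is a misleading metric here: a detector that never fires can still score above 99%. A minimal sketch of imbalance-aware evaluation with precision and recall, using synthetic labels for illustration:

```python
# With 10 anomalies in 1000 samples, a "detector" that never fires is
# 99% accurate yet useless; precision/recall expose the difference.
# Labels and predictions below are synthetic, for illustration only.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0] * 990 + [1] * 10                 # 1 marks a true anomaly
y_never_fires = [0] * 1000                    # degenerate detector
y_detector = [0] * 990 + [1] * 6 + [0] * 4    # catches 6 of 10 anomalies

print("accuracy (never fires):", accuracy_score(y_true, y_never_fires))  # 0.99
print("recall   (never fires):", recall_score(y_true, y_never_fires))    # 0.0
print("precision (detector):  ", precision_score(y_true, y_detector))    # 1.0
print("recall    (detector):  ", recall_score(y_true, y_detector))       # 0.6
```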

As anomalies in information systems most often suggest security breaches or violations, Anomaly Detection has been applied in various industries to advance the IT security landscape and detect malicious behavior, unknown malware, insider threats, privilege abuse, or network attacks, providing proactive security systems. One of the use cases where automatic detection is applied in this context is modern cyber threats and other attacks that bypass traditional security.

Cyber-security, in this case, is guaranteed with the help of network behavior anomaly detection (NBAD) technology. The system analyzes packet signatures to reveal suspicious behavior, detect yet unknown security threats, automatically identify network anomalies, and block incoming/outgoing data that compromises internal systems. NBAD also conducts continuous monitoring to detect malicious behavior or trends in significant infrastructure and critical networks.

Network Behavior Analysis with Big Data

Anomaly Detection is an essential technique for recognizing fraud activities, network anomalies, suspicious activities, malicious behavior, network intrusion, modern cyber threats, and other unusual events that may have great significance but are challenging to detect. The importance of Anomaly Detection is that the process translates data into critical actionable information and indicates proactive detection insights in a variety of application domains.

Anomaly Detection that addresses problems of high dimensionality can be applied in either online or offline mode. In offline mode, network anomalies are detected in historical data sets, known as “batch processing.” This relates to the “volume” feature of Big Data. By contrast, in online mode, new data points are continually introduced as a stream while anomalies are being detected. This relates to the “velocity” feature of Big Data. Several existing surveys and reviews highlight the problem of high dimensionality in fields such as Machine Learning and data mining.
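
Online mode means the detector cannot revisit the whole data set for every new point; it must maintain its statistics incrementally. A minimal sketch using Welford's online algorithm for running mean and variance (the stream, warm-up length, and threshold are illustrative assumptions):

```python
# Minimal sketch of online detection: maintain running mean/variance with
# Welford's algorithm and score each point as it arrives, instead of
# batch-processing historical data. For simplicity, every point (even an
# anomalous one) is folded back into the model.
import random

class StreamingDetector:
    def __init__(self, k=3.0, warmup=30):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.k, self.warmup = k, warmup

    def update(self, x):
        """Return True if x looks anomalous, then update the statistics."""
        anomalous = False
        if self.n > self.warmup:
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) / std > self.k
        self.n += 1                      # Welford's online update
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

random.seed(1)
stream = [10 + random.gauss(0, 0.5) for _ in range(100)] + [55.0]
detector = StreamingDetector()
for i, x in enumerate(stream):
    if detector.update(x):
        print(f"anomaly at position {i}: {x}")  # flags the 55.0 spike
```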

Detecting anomalies presents many challenges:

  • The notion of normal data is very domain-specific. There is no universal procedure to model normal data, and the concept of normality is subjective and difficult to validate. In addition, the data classes (normal/abnormal) are, in general, imbalanced. And since statistical tools do not work well with only a few samples, the rarity of anomalies makes them difficult to model.
  • The boundary between normal and abnormal data is fuzzy. There is no rule for deciding about points in this grey zone other than subjective judgment. Added to that, the data may contain noise, which makes the distinction between normal and abnormal data even more difficult.
  • The concept of normality evolves with time. What is expected in one time span can turn anomalous in the future. Novel anomalies that are not covered by the model can also appear.
  • It is not common to have labeled data to train the model in many domains.

Network Behavior Anomaly Detection Systems (ADS), Methods and Techniques

Methods

The choice of anomaly detection approach depends on the availability of labels in the training and test data. Currently, three types are distinguished:

  • Supervised detection is used with fully labeled training and test data sets. Because anomalies are well known and already labeled, the model can be trained to recognize them even with imbalanced classes. It is thus not applicable in cases where outliers are yet to be identified.
  • Semi-supervised detection uses training and test datasets, but the training data is devoid of anomalies. This approach presupposes that a system will identify anomalies once it has learned a normal dataset and can see deviations from it.
  • The unsupervised machine learning model for AD is the most flexible of the three: it presents no labels to the system and draws no distinction between the training and test datasets. The system scores data within the dataset based only on the intrinsic characteristics of its units, without any predetermined normalcy values. A sketch contrasting the last two modes follows this list.
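
As an illustration of the difference, here is a hedged scikit-learn sketch: a One-Class SVM trained only on anomaly-free data stands in for the semi-supervised mode, and an Isolation Forest with no labels at all stands in for the unsupervised mode (synthetic data; model choices are illustrative, not prescriptive):

```python
# Semi-supervised: train on anomaly-free data only (One-Class SVM).
# Unsupervised: no labels, no train/test distinction (Isolation Forest).
# Synthetic 2-D data for illustration; -1 marks a detected anomaly.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
normal = rng.normal(0, 1, size=(500, 2))            # anomaly-free training set
test = np.vstack([rng.normal(0, 1, size=(5, 2)),    # normal test points
                  rng.normal(8, 1, size=(3, 2))])   # obvious outliers

semi = OneClassSVM(nu=0.05).fit(normal)             # learns "normal" only
unsup = IsolationForest(random_state=0).fit(np.vstack([normal, test]))

print("one-class SVM:   ", semi.predict(test))      # last three -> -1
print("isolation forest:", unsup.predict(test))
```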

Advanced Security Intelligence-Based Systems

Although anomaly identification is subjective, many Anomaly Detection Systems (ADSs) have been developed. Some ADSs are very domain-specific and cannot be applied to other domains due to constraints (input/output schemes, resource utilization, etc.). Other ADSs are very flexible and can be used in many different disciplines. Unlike traditional solutions, the newly proposed solutions based on Artificial Intelligence (AI) are sufficiently generic to be integrated into different IT environments (i.e., critical networks).

Anomaly Detection Systems (ADS), methods and techniques

In general, an ADS collects a large amount of data. It identifies abnormal points based on prior knowledge (expert systems) or knowledge deduced from the data (such as Machine Learning solutions). Depending on the application field and the reliability of the ADS, the detected anomalies can either be reported to an expert to analyze them or fed to another system that runs mitigation actions automatically. Here are the most popular techniques used by the telco industry:

Machine Learning Techniques

Knowledge-Based

This approach is based on human knowledge of the domain. It assumes that the patterns of anomalies are known and that it is possible to encode them in a way that the machine can understand. Based on these assumptions, the creation and functioning of a knowledge-based system (also called an expert system) involves three steps (a sketch follows the list):

  • An expert has to analyze a large amount of data manually and identify anomaly patterns.
  • Anomaly patterns are implemented in an automated system.
  • The system runs automatically and detects anomalies in new data.
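
A minimal sketch of the second and third steps: a domain expert's anomaly patterns encoded as explicit rules, then applied automatically to new data. The KPI names and thresholds below are hypothetical, chosen for illustration:

```python
# Expert-system sketch: hand-encoded anomaly patterns applied to new data.
# KPI names and thresholds are hypothetical, chosen for illustration.
RULES = [
    ("call_drop_rate_spike", lambda kpi: kpi["call_drop_rate"] > 0.02),
    ("cell_overload",        lambda kpi: kpi["prb_utilization"] > 0.95),
    ("backhaul_latency",     lambda kpi: kpi["latency_ms"] > 50),
]

def detect(kpi):
    """Run the expert-encoded rules on a new sample of KPI readings."""
    return [name for name, rule in RULES if rule(kpi)]

sample = {"call_drop_rate": 0.035, "prb_utilization": 0.80, "latency_ms": 12}
print(detect(sample))  # -> ['call_drop_rate_spike']
```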

Regression

This approach is based on the principle of forecasting. An anomaly can be defined as a gap between reality and what was expected. An AD system based on regression operates in the following mode:

  • First, it estimates the coming data.
  • It quantifies the gap between the actual value and the predicted one. If the gap is sufficiently large (higher than a predefined threshold), the new data point is considered an anomaly.

Neural networks can be used to perform regression.
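
A minimal sketch of this forecast-and-compare loop, using a simple linear trend in place of a neural network (the series, window, and threshold are illustrative assumptions):

```python
# Regression-based sketch: forecast the next value from recent history,
# then flag it if the gap to the actual observation exceeds a threshold.
import numpy as np

def forecast_next(history):
    """Fit a linear trend to the recent window and extrapolate one step."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * len(history) + intercept

history = np.array([100, 102, 101, 103, 104, 103, 105, 106], dtype=float)
actual_next = 140.0                        # the newly observed data point
gap = abs(actual_next - forecast_next(history))
print("anomaly" if gap > 10.0 else "ok")   # threshold chosen for illustration
```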

Classification

This approach assumes that data points can be grouped into classes. There are two possible ways to view anomalies:

  • As scattered points far from a dense central group of data points (two-class classification), or
  • As dense groups of data distinct from the groups of standard data (multi-class classification).

The classification techniques suppose that we have a training data set with labeled data. In this approach, the anomaly detection process can be broken into the following steps:

  • We classify the training data and identify classification attributes.
  • We learn a model using the training data.
  • We classify new data using the learned model, as the sketch below shows.
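
A minimal sketch of these three steps with a random forest on synthetic labeled data (the data, features, and model choice are illustrative):

```python
# Classification sketch: train on labeled data (0 = normal, 1 = anomaly),
# then classify new samples with the learned model. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
# Steps 1-2: labeled training data and model learning
X_train = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(5, 1, (20, 3))])
y_train = np.array([0] * 200 + [1] * 20)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Step 3: classify new data using the learned model
X_new = np.array([[0.1, -0.2, 0.3], [5.2, 4.8, 5.1]])
print(model.predict(X_new))  # -> [0 1]
```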

Clustering

Clustering is an unsupervised technique for grouping similar data. This approach assumes that anomalies are either points not belonging to any cluster or points belonging to small clusters, as compared to the large and dense clusters of normal data. Clustering algorithms have two phases (a sketch follows the list):

  1. Training phase - During the training phase, the data points are grouped into clusters over a large number of iterations until convergence or attaining predefined criteria (maximum number of iterations, etc.)
  2. Test phase - In the test phase, a new data point is assigned to the closest clusters based on the distance/similarity measure used in training.
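
A minimal sketch of both phases with k-means, scoring new points by their distance to the nearest centroid (data, cluster count, and threshold are illustrative assumptions):

```python
# Clustering sketch: k-means learns clusters of normal traffic in the
# training phase; the test phase assigns new points to the closest
# centroid and flags those that are too far from any cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
X_train = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(10, 1, (300, 2))])

# Training phase: iterate until convergence (handled internally by KMeans)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_train)

# Test phase: distance to the nearest centroid, thresholded
X_new = np.array([[0.5, -0.3], [10.2, 9.7], [5.0, 5.0]])
dist = km.transform(X_new).min(axis=1)
print(dist > 3.0)  # -> [False False  True]; the midpoint fits no cluster
```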

Network Anomalies Detection Case Studies

1. Anomaly Detection Application on Google Cloud Platform (GCP)

Many applications developed by telco vendors and CSPs capture incidents in mobile networks, and one of the crucial ones addresses anomaly detection. Here is one example of a service built on Nokia Bell Labs technology and rolled out across Vodafone’s pan-European network.

The product quickly detects and troubleshoots irregularities impacting customer service quality, such as mobile site congestion, interference, and unexpected latency. Vodafone expects around 80 percent of its anomalous mobile network issues and capacity demands to be automatically detected and addressed using the Anomaly Detection Service.

Business Challenges and Key Drivers

Vodafone’s move from its on-premises Big Data Platform (BDP) to Google Cloud Platform (GCP) provided a new opportunity to rethink data management and focus on use cases and application development.

Vodafone’s BDP consisted of many different clusters across operating countries, creating multiple ‘data lakes’ and making data analytics inefficient. The disparity in data across the data lakes and the lack of robust algorithms led to undetected network issues and service degradations. Vodafone decided to move to GCP to solve these data-related problems by creating a single cloud-based ‘data ocean.’

Vodafone also wanted to use GCP to transform its network management and drive cost efficiencies across the business by increasing its levels of automation. To achieve this, they increased their focus on developing use cases and applications hosted on GCP because GCP allows for easy scaling across the network and supports region-specific customizations.

Anomaly Detection Application

The Anomaly Detection application is expected to provide Vodafone with significant benefits in the radio domain and forms part of Vodafone’s broader goal of delivering network and cost efficiencies across the business.

Vodafone uses the Anomaly Detection use case to support network planning and optimization with a plan to expand to network operations. It is the first step towards achieving full automation of network lifecycle management. The Neuron platform runs within GCP to render and store the RAN data, and the Nucleus algorithms are used for Machine Learning based pattern recognition, clustering, and classification.

Nokia developed the anomaly detection application to be consumed as-a-service. It shared its vision with Vodafone and exhibited flexibility and openness for ongoing collaboration (some of Vodafone’s essential requirements). Nokia worked on an equal partnership basis and provided the algorithm source code, which increased trust within the partnership.

Anomaly Detection system based on Google Cloud Platform

Key Benefits

Easy deployment of new applications across operating companies. GCP enables Vodafone to focus on use cases and business outcomes. It has allowed Vodafone to adopt a ‘develop once, deploy many times’ method, which has resulted in a 60–70% reduction in effort, thereby allowing Vodafone to deploy apps in different markets rapidly.

Joint innovation partnership and global access practices. The equal partnership with Nokia alongside a shared vision and collaboration with GCP also provides a strong foundation for future collaboration. There is a clear separation of responsibilities between Vodafone and Nokia: Vodafone provides the network data, and Nokia builds the models and applications.

Anomaly Detection is an enabler for efficient everyday network planning, optimization, and operations. Anomaly detection delivers the most significant benefits in terms of automating root-cause analysis (25–30% operational efficiency improvement). It uses Machine Learning to identify issues such as call set-up failures automatically.

2. Anomaly Detection Framework (ADF) For Quick and Easy Prototyping

One project from Ericsson’s Global Artificial Intelligence Accelerator (GAIA) aims to make AI tools more accessible for anomaly detection. It introduces the E-ADF framework.

The idea behind E-ADF developed from multiple anomaly detection projects coming into the GAIA pipeline, many of which would otherwise need to start from scratch. Collaboration between data scientists working on those projects resulted in leveraging reusable components from the code and building an ADF.

How Does It Work?

The project benefitted from contributions from the best anomaly detection practices in GAIA and Ericsson Research. The initial goal to create a reusable asset for data scientists evolved into a plan to create an easy-to-use AI platform for data scientists and non-data scientists alike.

E-ADF facilitates faster prototyping for anomaly detection use cases, offering its library of algorithms for anomaly detection and time series, with functionalities like visualizations, treatments, and diagnostics.

A set of E-ADF features allows users to prototype their anomaly detection use cases effectively, all within the same framework. The included algorithms handle both univariate and multivariate data, and features such as rolling windows, segmentation, pipelines, and a detector explainer help find the root cause of certain anomalies.
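
E-ADF itself is Ericsson-internal, so its API cannot be shown here; the following is only a generic sketch of the rolling-window idea such frameworks expose, with window size, threshold, and data as illustrative assumptions:

```python
# Generic rolling-window sketch (not E-ADF's actual API): score each point
# against the mean/std of its trailing window of observations.
import numpy as np

def rolling_window_anomalies(series, window=24, k=3.0):
    """Indices of points more than k std devs from their trailing window."""
    flagged = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > k:
            flagged.append(i)
    return flagged

rng = np.random.default_rng(11)
series = rng.normal(100, 2, 200)           # synthetic hourly KPI
series[150] = 160                          # inject a spike
print(rolling_window_anomalies(series))    # -> [150]
```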

Closing Remarks

Future networks are and will continue to be exposed to incidents. There are many reasons for this, as discussed above, but the range of answers is also growing as AI-based solutions enter the game. Although many mobile network operators still use traditional methods of finding and tracking incidents on the network, the time has come for AI to help solve critical problems.

However, before deploying any Anomaly Detection System with AI, it’s essential to set realistic expectations rather than assume a system will:

  • detect every unusual change, network attack, or insider threat,
  • predict major failures with 100% accuracy,
  • provide easy-to-understand root cause analysis so that CSPs know exactly what went wrong and how to fix the issues at hand.

Once Anomaly Detection models are developed, the next step would be to integrate them into a production system. This can present data engineering challenges in that anomalies should be detected in real-time, continuously, with a potentially high volume of streaming data.

About the author

Konrad Fulawka

Strategic Advisor and Telco Expert

Konrad Fulawka graduated from the University of Technology in Wroclaw and has almost 20 years of experience in the telecommunications industry.
For the last 11 years, he has worked for Nokia. Over that time, Konrad has been responsible for leading international and multicultural teams working on many complex telecommunication projects, delivering high-quality software worldwide. In recent years, he has headed the Nokia Garage innovation hub, which helps Nokia drive cutting-edge innovative projects.
At nexocode, Konrad acts as a strategic advisor and Telco expert with insight into global business trends and best practices across verticals. Besides political economy and financial services markets, he enjoys DIY (do-it-yourself) activities.
