In today’s era of massive digital transformation, organizations are becoming more data-driven than ever before. The ability to leverage vast amounts of data in real time is no longer a luxury but a necessity for businesses seeking to gain a competitive edge. However, managing, processing, and making sense of this “Big Data” is daunting. Traditional database systems, designed for batch processing, often fall short when it comes to handling real-time data processing. As a part of our “Becoming AI-Driven” series, this article explores a tool designed to tackle these challenges head-on: Apache Storm.
Apache Storm emerged to fill this real-time data processing void. Originally created at BackType and open-sourced after Twitter acquired the company, it is now a top-level Apache Software Foundation project: a free, open-source, distributed real-time computation system that efficiently processes vast data streams. Its robustness and scalability make it a popular choice for a wide range of industries and applications.
In this article, we will delve deep into the world of Apache Storm. We’ll demystify its architecture, key components such as Tuples, Streams, Spouts, and Bolts, and the crucial roles of the Nimbus and Supervisor in the Storm cluster. We will also outline its workflow, discuss key use cases, benefits, and limitations, and look at where Apache Storm fits in the broader Big Data infrastructure stack. You will get insights into who’s using Apache Storm and how it can be leveraged as a fully-managed service.
So, whether you’re an industry professional, a developer, or simply a tech enthusiast looking to enhance your understanding of real-time data processing, stay with us as we traverse the Storm.
TL;DR
• Apache Storm is a powerful open-source, distributed computation system maintained by the Apache Software Foundation, capable of processing vast amounts of data in real time.
• Real-time data processing is becoming essential in modern big data architectures. It complements traditional batch processing systems, addressing the need for real-time analytics and insights.
• Apache Storm's key components are spouts and bolts, which operate on a data model of tuples and streams and together form a Storm topology.
• The Storm cluster architecture comprises the Nimbus (master node) and Supervisors (worker nodes), which coordinate to execute tasks: the Nimbus distributes tasks and the Supervisors execute them.
• Apache Storm is beneficial for its robust scalability, low latency, fault tolerance, ease of use, and guaranteed data processing. It does, however, require considerable resources and expert setup.
• Key use cases for Storm include real-time analytics, online machine learning, continuous computation, distributed RPC, and ETL.
• Apache Storm plays a crucial role in the big data infrastructure stack, working alongside other technologies like Apache Kafka, Hadoop, and Hive for data ingestion, storage, and querying.
• Several major companies, including Twitter, Spotify, Yelp, and Alibaba, are utilizing Apache Storm for real-time data processing.
• Are you looking to harness the power of real-time data processing in your organization? Our team of data engineers at nexocode is ready to help. Contact us to explore how we can assist you in implementing a modern data infrastructure.
Modern Big Data Architecture
In the contemporary landscape, businesses across the globe generate a staggering amount of data every day. This data, stemming from a wide array of sources such as IoT devices, social media, transactional systems, and more, is multifaceted, comprising both structured and unstructured forms.
To effectively utilize this data, organizations have adopted what we call “modern big data architectures.” These are complex ecosystems consisting of various components, each serving a specific role in the overall data processing pipeline. From data ingestion and storage systems to data processing engines and analytic tools, every component works in harmony to convert raw data into actionable insights.
Diverse data sources coupled with a range of processing requirements mandate that these architectures be versatile. For instance, traditional batch processing models are suitable for tasks where latency is not a major concern, like generating daily reports. However, they are ill-equipped for scenarios where real-time insights are necessary, leading to the advent of stream processing models.
Real-time data processing, in the world of big data, is a game-changer. It involves processing data as soon as it arrives, enabling businesses to take immediate action based on the insights generated. This ability to act on the data instantly, or in ‘real-time,’ often makes the difference between gaining a competitive edge or falling behind.
For instance, in industries like finance or eCommerce, real-time fraud detection can save millions. In social media, real-time data processing allows instant personalization, leading to an improved user experience. Similarly, in sectors such as healthcare or logistics, real-time analytics can help optimize operations, making them more efficient and cost-effective.
Despite its myriad benefits, real-time data processing is not without its challenges. It requires highly scalable and resilient systems that can handle large volumes of incoming data at high velocities. Furthermore, these systems must ensure data accuracy and reliability while delivering results in near real-time. As traditional batch processing systems fail to meet these requirements, the quest for a solution that can handle real-time data processing efficiently and effectively has become a priority in the modern big data era.
Enter Apache Storm Project
In the quest to overcome the challenges associated with real-time data processing, an innovative solution emerged: the Apache Storm project. Now a top-level project of the Apache Software Foundation, this open-source computation engine has revolutionized the way we deal with big data streams.
Apache Storm has been designed from the ground up to process data in real-time. It can ingest and process vast volumes of high-velocity data with ease, maintaining the reliability and accuracy that businesses require. Storm doesn’t replace the traditional big data processing mechanisms; instead, it complements them, filling a gap in the data processing landscape that few other technologies can.
In the upcoming sections, we will unpack the inner workings of Apache Storm, explore its architecture, delve into its key components, and understand its role in the modern data ecosystem. The aim is to give a comprehensive view of this robust tool and explain why it has become a favored choice for real-time data processing across various industries.
Apache Storm Architecture and Key Components
Apache Storm operates through a distinct yet comprehensive architecture that facilitates the real-time processing of big data. Its design is centered around a few key components that work in conjunction to ensure seamless data processing. These components include Tuples, Streams, Spouts, and Bolts, each having a unique role in the Storm data model.
Storm Data Model
At the heart of the Apache Storm architecture is the Storm data model. This model is a hierarchical arrangement that encompasses several elements, starting with Tuples and Streams, and escalating to Spouts and Bolts, which collectively form the Storm Topology.
Tuple
In the Storm data model, the smallest unit of data is a Tuple. A Tuple can be seen as an ordered list of elements. The elements can be of different types, and the Tuple itself is dynamically typed. Tuples are the primary data units that flow through the streams in a Storm Topology, being processed by Spouts and Bolts.
Stream
A stream in Apache Storm is an unbounded sequence of Tuples. In a Storm topology, streams are the “pipelines” that transfer Tuples from one component to another. A single Storm topology can consist of multiple streams, with each stream carrying a particular type of data tuple.
Spouts
Spouts are the data sources in the Storm topology. They read data from an external source, convert it into Tuples, and emit them into the topology. Spouts can process data from various sources, such as a database, a distributed file system, or even real-time message queues.
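To make this concrete, here is a minimal spout sketch written against Storm's core Java API (assuming a Storm 2.x dependency). The SentenceSpout class name and its hard-coded sentences are illustrative stand-ins for reading from a real external source such as a message queue:

```java
import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class SentenceSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final String[] sentences = {
            "the cow jumped over the moon",
            "an apple a day keeps the doctor away"
    };
    private int index = 0;

    @Override
    public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        // Emit one sentence per call; a real spout would poll Kafka, a queue, a log, etc.
        collector.emit(new Values(sentences[index]));
        index = (index + 1) % sentences.length;
        Utils.sleep(100); // avoid busy-spinning in this toy example
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Each emitted tuple has a single field named "sentence".
        declarer.declare(new Fields("sentence"));
    }
}
```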
Bolts
Bolts are the logic units of the Storm topology. They consume Tuples from either Spouts or other Bolts, process them, and possibly emit new Tuples. Bolts can perform functions like filtering, aggregating, joining, interacting with databases, and more. Essentially, Bolts represent any processing step you wish to perform on your data.
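Continuing the sketch above, a simple bolt that splits incoming sentences into words might look like this. BaseBasicBolt acknowledges tuples automatically, which keeps the example short; again, the class and field names are illustrative:

```java
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class SplitSentenceBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // Read the "sentence" field emitted by the spout and emit one tuple per word.
        String sentence = input.getStringByField("sentence");
        for (String word : sentence.split("\\s+")) {
            collector.emit(new Values(word));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
```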
Apache Storm Topology
An Apache Storm Topology serves as a map of computation, consisting of stream transformations structured in a network pattern. Every node within this network is either a Spout (a source of input streams) or a Bolt (a processor that consumes streams and emits new ones). The basic data unit, the Tuple, flows from Spouts to Bolts, undergoing various transformations along the way.
Each transformation or processing step within this pipeline is represented by a task. A task can be thought of as an instance of a Bolt or Spout in the topology. These tasks are distributed across different worker processes running on various nodes in the Storm cluster, leading to parallelism in data processing.
Stream groupings determine how Tuples are routed between tasks. The grouping strategy can affect how data is partitioned and processed, impacting the overall performance and reliability of the topology.
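As a rough sketch of how stream groupings are declared, the wiring below connects the hypothetical SentenceSpout and SplitSentenceBolt from the earlier sketches to a word-counting bolt. A shuffle grouping spreads sentences randomly across the splitter tasks, while a fields grouping on "word" routes every occurrence of a given word to the same counter task:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class WordCountTopology {

    // Compact counting bolt; the fields grouping below guarantees that every
    // occurrence of a given word lands on the same task, so this local map is consistent.
    public static class WordCountBolt extends BaseBasicBolt {
        private final Map<String, Long> counts = new HashMap<>();

        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            String word = input.getStringByField("word");
            long count = counts.merge(word, 1L, Long::sum);
            collector.emit(new Values(word, count));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word", "count"));
        }
    }

    public static TopologyBuilder buildTopology() {
        TopologyBuilder builder = new TopologyBuilder();

        // Two executors for the spout sketched earlier.
        builder.setSpout("sentences", new SentenceSpout(), 2);

        // Shuffle grouping: sentences are spread randomly across the four splitter tasks.
        builder.setBolt("split", new SplitSentenceBolt(), 4)
               .shuffleGrouping("sentences");

        // Fields grouping on "word": the same word is always counted by the same task.
        builder.setBolt("count", new WordCountBolt(), 4)
               .fieldsGrouping("split", new Fields("word"));

        return builder;
    }
}
```

The parallelism hints (2 and 4) are arbitrary here; in practice they are tuned to the cluster size and the expected data rate.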
Beyond the basic Storm Topology, Apache Storm also supports a higher-level abstraction known as Trident Topology. Trident provides a set of primitives like joins, aggregations, grouping, functions, and more, allowing you to seamlessly piece together complex computations and stateful processing. This is particularly useful for ensuring exactly-once processing semantics, thus enhancing the reliability of data processing.
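For a flavour of the Trident API, the sketch below rebuilds the same word count as a Trident topology. persistentAggregate keeps the counts in Trident-managed state (an in-memory map here, purely for illustration), which is where the exactly-once update semantics come from. The spout is passed in as a parameter because any standard spout can feed a Trident stream:

```java
import org.apache.storm.generated.StormTopology;
import org.apache.storm.topology.IRichSpout;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.BaseFunction;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.operation.builtin.Count;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class TridentWordCount {

    // A Trident function that splits a sentence into individual words.
    public static class Split extends BaseFunction {
        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            for (String word : tuple.getString(0).split("\\s+")) {
                collector.emit(new Values(word));
            }
        }
    }

    public static StormTopology build(IRichSpout sentenceSpout) {
        TridentTopology topology = new TridentTopology();
        topology.newStream("sentences", sentenceSpout)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .groupBy(new Fields("word"))
                // Trident manages the state updates, giving exactly-once semantics
                // for the per-word counts stored in this (in-memory) state.
                .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));
        return topology.build();
    }
}
```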
Once an Apache Storm Topology is defined and deployed to the Storm cluster, it continuously processes data streams until it is manually terminated. This ability to handle infinite streams of data in real-time makes Apache Storm a powerful tool in the modern data ecosystem.
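Deploying a topology is a one-time submission; a hedged sketch, assuming the WordCountTopology builder from the earlier example and a Storm 2.x client (where LocalCluster is AutoCloseable), could look like this:

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SubmitWordCount {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = WordCountTopology.buildTopology(); // from the sketch above
        Config conf = new Config();
        conf.setNumWorkers(2); // worker processes to spread across the cluster

        if (args.length > 0 && "local".equals(args[0])) {
            // Local mode: runs the whole topology in-process, useful for development.
            try (LocalCluster cluster = new LocalCluster()) {
                cluster.submitTopology("word-count", conf, builder.createTopology());
                Thread.sleep(60_000); // let it run for a minute, then shut down
            }
        } else {
            // Cluster mode: the topology keeps running until it is killed explicitly,
            // e.g. with `storm kill word-count` on the command line.
            StormSubmitter.submitTopology("word-count", conf, builder.createTopology());
        }
    }
}
```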
Storm Cluster Architecture
The processing power of Apache Storm is harnessed through a distributed network of machines known as a Storm cluster. In a Storm cluster, the work of processing streams of data is divided among multiple nodes, thereby achieving parallel processing. This master-slave architecture, which consists of two types of nodes - Nimbus and Supervisor, allows Storm to process massive amounts of data in real-time.
Nimbus (Master Node)
The Nimbus node, often referred to as the master node, plays a role similar to Hadoop's JobTracker in MapReduce. It is responsible for distributing the code around the cluster, assigning tasks to machines, and monitoring their performance. If a worker node goes down, the Nimbus detects this and reassigns its tasks to other nodes in the cluster, ensuring uninterrupted processing. Essentially, the Nimbus is the brain of the cluster, orchestrating the entire operation.
Supervisors (Worker Nodes)
Supervisors are the worker nodes of the Storm cluster. They run the processes assigned to them by the Nimbus. Each Supervisor node hosts multiple worker processes, each of which executes a subset of a topology. These worker processes run the actual spouts and bolts that perform the data processing tasks. The Supervisor nodes are monitored by the Nimbus, which keeps track of their health and functioning. Together, the Nimbus and Supervisor nodes constitute the Storm cluster, effectively processing real-time data streams.
Apache Storm Workflow
Understanding the workflow of Apache Storm is essential to appreciate its efficiency and fault tolerance in real-time data processing. In a functional Storm cluster, there must be one Nimbus (master node), one or more Supervisors (worker nodes), and Apache ZooKeeper coordinating between them. Here is how the Storm workflow unfolds:
Topology Submission: The workflow commences with the submission of a “Storm Topology” to the Nimbus. At this stage, the Nimbus simply awaits incoming topologies.
Task Processing: Upon receiving a topology, the Nimbus processes it, identifying all the tasks involved and the sequence in which they must be executed.
Task Distribution: The Nimbus then evenly distributes these tasks across the available Supervisors. This distribution ensures a balanced workload and efficient resource utilization within the cluster.
Heartbeat Monitoring: At regular intervals, each Supervisor sends a heartbeat to the Nimbus. This signal informs the Nimbus that the Supervisor is operational and executing the assigned tasks.
Task Reassignment: If a Supervisor fails and stops sending heartbeats, the Nimbus detects this failure. The Nimbus then reallocates the tasks of the failed Supervisor to another operational one, ensuring no interruption in processing.
Nimbus Failure Handling: In case the Nimbus itself fails, the Supervisors continue processing their current tasks without disruption. This feature brings robustness into the system.
Task Completion: After all tasks are completed, the Supervisors stand by for new tasks.
Nimbus and Supervisor Restart: Service monitoring tools automatically restart any failed Nimbus or Supervisor. On restart, these nodes pick up from where they left off. This mechanism ensures that all tasks are processed at least once, thereby achieving at-least-once processing semantics.
Awaiting New Work: After processing all the topologies, the Nimbus and the Supervisors wait for new topologies and tasks, respectively. This cycle continues as long as there are new data streams to process.
This workflow underlines the resiliency and efficiency of Apache Storm. Its ability to self-heal and ensure continuity of processing makes it a robust tool for real-time stream processing.
Key Use Cases for Storm
Apache Storm, with its ability to process large volumes of data in real time, has a wide variety of use cases across multiple domains. Here are some of the key scenarios where Storm has proven its effectiveness:
Real-Time Analytics: Storm is commonly used to analyze data in real time. It can process live streams of data and produce analytics on the fly, making it an ideal tool for situations where quick decision-making based on real-time data is required. This is particularly useful in scenarios such as fraud detection in financial transactions, real-time monitoring of security logs, or live audience engagement analytics for digital media.
Online Machine Learning: Apache Storm can be used for implementing and serving online machine learning models, where the models learn and adapt in real-time. For example, it can be used to continuously update a recommendation system based on user behavior data or refine predictive models based on real-time sensor data.
Continuous Computation: Storm can be used to continuously query or compute data, offering updated results in real time. For instance, it can be used to maintain a continuously updated leaderboard or live dashboards reflecting the most current data.
ETL (Extract, Transform, Load): Apache Storm is useful for real-time ETL tasks where data needs to be collected from various sources, transformed to a desirable format, and loaded into a database or a data warehouse. Its ability to handle the transformation in real-time makes the latest data available for analysis much faster compared to traditional batch ETL processes.
Data Enrichment: Apache Storm can be used to enrich streams of data in real time. For example, it can add geographical data to incoming IP addresses or attach customer profile information to transaction records, all in real time (a minimal enrichment bolt sketch follows this list).
Internet of Things (IoT): With the growth of IoT, there’s a massive influx of real-time data from various sensors and devices. Apache Storm can process and analyze this data in real time, enabling real-time monitoring and decision making in IoT systems.
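As an illustration of the data enrichment use case above, the bolt below attaches a location to incoming transaction tuples based on their IP address. The field names and the in-memory lookup table are made-up placeholders for a real GeoIP service or cache:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class GeoEnrichmentBolt extends BaseBasicBolt {
    // Hypothetical in-memory lookup; a production bolt would query a GeoIP service or cache.
    private static final Map<String, String> GEO_BY_IP = new HashMap<>();
    static {
        GEO_BY_IP.put("203.0.113.7", "Krakow, PL");
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String txId = input.getStringByField("transaction_id");
        String ip = input.getStringByField("ip");
        String location = GEO_BY_IP.getOrDefault(ip, "unknown");
        // Emit the original fields plus the enrichment, keeping the stream real-time.
        collector.emit(new Values(txId, ip, location));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("transaction_id", "ip", "location"));
    }
}
```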
Advantages of Using Apache Storm
Apache Storm offers several compelling advantages, which have made it a popular choice for real-time data processing:
Real-Time Processing: Apache Storm enables the processing of large volumes of data in real time, making it possible to provide instant analytics and insights. Storm is incredibly fast, with the capacity to process over a million records per second per node on a mid-size cluster.
Fault Tolerance: Apache Storm is designed to be fault-tolerant. If a node fails, the tasks assigned to that node are automatically reassigned to other nodes, ensuring no data loss or interruption in processing.
Scalability: Apache Storm can seamlessly scale to handle larger workloads. You can add more nodes to a Storm cluster to increase its processing capacity, making it a highly scalable solution.
Ease of Use: Storm is easy to set up and operate. It provides a simple and flexible programming model, which allows developers to write topologies using any programming language.
Guaranteed Data Processing: Storm guarantees that each unit of data (tuple) will be processed at least once, and with Trident it offers exactly-once processing semantics (see the acking sketch after this list).
Integration: Storm integrates well with other systems in the big data ecosystem, such as Hadoop for long-term data storage or Kafka for real-time messaging.
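The at-least-once guarantee mentioned above rests on tuple anchoring and acknowledgement. The sketch below shows the pattern with Storm's lower-level BaseRichBolt (a hypothetical example, assuming Storm 2.x): emitted tuples are anchored to the input, successes are acked, and failures cause the spout to replay the tuple:

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class ReliableUppercaseBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            // Anchor the new tuple to the input so Storm can track it through the topology.
            collector.emit(input, new Values(input.getString(0).toUpperCase()));
            // Acking tells Storm this tuple has been fully handled at this step.
            collector.ack(input);
        } catch (Exception e) {
            // Failing the tuple makes the spout replay it, giving at-least-once semantics.
            collector.fail(input);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("uppercased"));
    }
}
```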
Limitations of Apache Storm
While Apache Storm provides numerous benefits, it also has some limitations:
State Management: Apache Storm does not inherently manage state. You must implement state management yourself, which can be complex in real-time systems.
Resource Management: Storm does not provide any built-in mechanism for managing resources, unlike other big data processing frameworks such as Apache Flink or Spark.
Lack of Machine Learning Libraries: Unlike some other data processing frameworks, Storm doesn’t come with built-in machine learning libraries.
Debugging and Testing: Debugging and testing Storm topologies can be challenging, due to the distributed and asynchronous nature of the system.
Batch Processing: While Apache Storm excels at real-time processing, it is not designed for batch processing. If you need both batch and real-time processing, you might need to pair Storm with another tool like Hadoop.
Apache Storm as Part of the Big Data Infrastructure Stack
In the landscape of big data, Apache Storm occupies a unique position. It is commonly used as a key component in the big data infrastructure stack due to its capabilities in real-time data processing. While traditional big data tools like Hadoop are excellent for batch processing, they fall short when it comes to processing data in real time. This is where Apache Storm comes in, complementing the existing big data infrastructure with its ability to handle streaming data.
Storm works in conjunction with other big data technologies to form a comprehensive data processing stack:
Data Ingestion: Tools like Apache Kafka or Flume can be used to ingest real-time data into the system. This data can then be processed by Apache Storm in real time (a Kafka spout sketch follows this list).
Real-Time Processing: Apache Storm performs the real-time processing, analyzing and making sense of data as soon as it enters the system.
Data Storage: Post-processing, the results can be stored in databases like Apache Cassandra or HBase, or in HDFS for long-term storage. Here, Hadoop’s batch processing capabilities can be used for further analysis or machine learning tasks.
Data Querying and Analysis: Finally, querying tools like Apache Hive or Drill, along with business intelligence tools, can be used to query the data and generate reports or visualizations.
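As a sketch of the ingestion step, the snippet below wires a Kafka spout from the storm-kafka-client module into a topology. The broker address, topic name, and consumer group are illustrative placeholders:

```java
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class KafkaIngestionTopology {

    public static TopologyBuilder build() {
        // Consume the (hypothetical) "transactions" topic from a broker at kafka:9092.
        KafkaSpoutConfig<String, String> spoutConfig =
                KafkaSpoutConfig.builder("kafka:9092", "transactions")
                        .setProp("group.id", "storm-realtime-pipeline")
                        .build();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-transactions", new KafkaSpout<>(spoutConfig), 2);

        // Downstream bolts (enrichment, aggregation, writes to Cassandra/HBase/HDFS)
        // would be attached here with setBolt(...).shuffleGrouping("kafka-transactions").
        return builder;
    }
}
```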
The ability to seamlessly integrate with other components in the big data ecosystem has made Apache Storm a preferred choice for real-time data processing in a big data infrastructure stack. By providing an immediate layer of insights on streaming data, Apache Storm bridges the gap between data ingestion and data storage, driving the real-time big data revolution.
Who is Using Apache Storm Project?
Apache Storm has found broad adoption across various industries due to its robust real-time data processing capabilities. Here are some notable examples:
Twitter: Apache Storm was originally created at BackType, which was later acquired by Twitter. Twitter uses Storm extensively for a variety of tasks within its messaging infrastructure, including real-time content personalization, spam detection, and real-time analytics.
Spotify: The music streaming giant uses Apache Storm to monitor its user activity and environment in real time. It helps Spotify in providing personalized music recommendations to its users.
Yelp: Yelp uses Apache Storm for various real-time and near-real-time data processing tasks, including updating business attributes and processing photos.
Apache Storm as a Fully-Managed Service
As businesses increasingly adopt big data technologies, the demand for managed services to reduce operational overhead has surged. Apache Storm, too, can be deployed as a fully-managed service, allowing users to focus on application development rather than managing the underlying infrastructure.
Opting for Apache Storm as a fully-managed service brings numerous advantages to businesses and developers. The ease of setup and management simplifies the often complex task of deploying and maintaining a distributed system, allowing teams to concentrate on developing robust applications. Scalability is another significant advantage; resources can be adjusted effortlessly to match processing needs, ensuring efficient handling of data, whether during peak loads or growth over time.
High availability and disaster recovery features ensure business continuity, while comprehensive monitoring and alert systems keep teams informed about their application’s performance. Lastly, the security measures, including encryption and access controls, and compliance readiness offered by managed services, alleviate the burden of meeting various data security and privacy regulations.
Several cloud providers offer managed services for Storm-style workloads. For instance, AWS offers Amazon Kinesis Data Analytics, a fully-managed stream processing service that can serve as an alternative to running your own Storm cluster (its Java applications run on Apache Flink rather than Storm). Similarly, Microsoft Azure provides Azure Stream Analytics, a real-time analytics service, while Azure HDInsight has offered Apache Storm as a managed cluster type. Read more about the top benefits of AWS cloud computing here.
Google Cloud Platform (GCP) offers similar solutions for real-time data processing. For instance, Google Cloud Dataflow is a fully-managed service for both stream and batch data processing. While it does not directly run Apache Storm applications, its powerful processing capabilities can serve as an alternative.
For a more direct usage of Apache Storm on GCP, you can leverage Google Cloud Dataproc, a managed service for running Apache open-source data tools. With Dataproc, you can create a managed Hadoop or Spark cluster, and then deploy Apache Storm on it.
In the ever-evolving landscape of big data, Apache Storm stands out as a powerful tool for real-time data processing. From its unique architecture and key components to its place within the big data infrastructure stack, Apache Storm’s capabilities offer robust solutions for businesses seeking to harness the power of real-time analytics.
While Apache Storm is a potent, fault-tolerant tool in its own right, effectively leveraging its power demands a certain level of expertise. Whether you’re considering the adoption of Apache Storm or you’re looking to optimize your existing data processing pipelines, our team of data engineers at nexocode is ready to assist. With our deep understanding of big data technologies and hands-on experience with Apache Storm, we can help you make the most of your data.
Don’t let your data’s potential go untapped.
Contact us at nexocode to explore how we can transform your data into actionable insights, driving your business forward. Remember, in today’s digital economy, real-time data processing isn’t just an advantage—it’s a necessity. So, reach out to our nexocode data engineers today and take the first step towards becoming a data-driven organization.
Frequently Asked Questions
What is Apache Storm?
Apache Storm is a free and open-source distributed real-time computation system. Maintained by the Apache Software Foundation, it's designed for processing large volumes of high-velocity data. It's capable of processing over a million tuples per second per node, making it highly suitable for real-time analytics.
Why is real-time data processing important in modern big data architectures?
Real-time data processing involves the continuous input, processing, and output of data, providing immediate insights. It is becoming increasingly important in modern big data architectures as it enables businesses to make data-driven decisions in real time.
What is a Storm topology?
A Storm topology is a graph of computation designed to process data as it flows from one point to another. Each node in the topology contains processing logic and is composed of either spouts, which are sources of data, or bolts, which transform the data.
What role does Apache Storm play in the big data infrastructure stack?
In the big data infrastructure stack, Apache Storm plays the role of a real-time data processor. It complements other technologies such as Apache Kafka for data ingestion, Hadoop for batch processing, and Cassandra for data storage, providing an end-to-end solution for big data applications.
Who is using Apache Storm?
Apache Storm is used in a variety of industries for real-time data processing. Examples include Twitter for real-time content personalization, Spotify for music recommendations, Yelp for near real-time data processing tasks, and Alibaba for real-time computation of various business metrics.
What are the advantages and limitations of Apache Storm?
Apache Storm offers robust scalability, fault tolerance, ease of use, and guaranteed data processing. However, it requires considerable resources and needs expert setup and tuning to work efficiently.
Can Apache Storm be deployed as a fully-managed service?
Yes, Apache Storm can be deployed as a fully-managed service. Several cloud providers like AWS and GCP offer services that can run Apache Storm applications in a fully-managed environment, which simplifies setup and management, ensures high availability, and provides scalability.
Wojciech is a seasoned engineer with experience in development and management. He has worked on many projects and in different industries, making him very knowledgeable about what it takes to succeed in the workplace by applying Agile methodologies. Wojciech has deep knowledge about DevOps principles and Machine Learning. His practices guarantee that you can reliably build and operate a scalable AI solution. You can find Wojciech working on open source projects or reading up on new technologies that he may want to explore more deeply.
Artificial Intelligence solutions are becoming the next competitive edge for many companies within various industries. How do you know if your company should invest time into emerging tech? How to discover and benefit from AI opportunities? How to run AI projects?
Follow our article series to learn how to get on a path towards AI adoption. Join us as we explore the benefits and challenges that come with AI implementation and guide business leaders in creating AI-based companies.