A Deep Dive into Apache Hive Architecture: From Data Storage to Data Analysis with SQL-like Hive Query Language

Wojciech Gębiś - April 19, 2023

Apache Hive is a popular data warehousing tool built on top of the Hadoop ecosystem, designed to make data querying and analysis easy and efficient. With its SQL-like query language, Hive allows users to perform ad-hoc querying, summarization, and data analysis on large datasets stored in Hadoop Distributed File System (HDFS) or other compatible file systems.

In this article, we will take a deep dive into the architecture of Apache Hive, from data storage to data analysis, and examine the key components that make Hive such a powerful tool for big data processing. By the end of this article, you will have a thorough understanding of how Hive manages and processes data, as well as its capabilities and limitations, allowing you to make informed decisions when designing your big data processing pipeline.

Modern Big Data Architecture

Big data is more than just a buzzword; it's a reality for businesses of all sizes. And to take advantage of big data, you need a modern big data ecosystem.

A modern big data ecosystem includes hardware, software, and services that work together to process and analyze large volumes of data. The goal is to enable businesses to make better decisions faster and improve their bottom line.

Several components are essential to a thriving big data ecosystem:

  • Data variety: Different data types from multiple sources (structured, semi-structured, and unstructured data) can be ingested and produced.
  • Velocity: Fast ingest and processing of data in real-time.
  • Volume: Scalable storage and processing of large amounts of data.
  • Cheap raw storage: Ability to store data affordably in its original form.
  • Flexible processing: Ability to run various processing engines on the same data.
  • Support for streaming analytics: The ability to process real-time data streams with low latency, in near real time.
  • Support for modern applications: Ability to power new types of applications that require fast, flexible data processing like BI tools, machine learning systems, log analysis, and more.

What is Batch Processing?

Batch processing is a type of computing process that collects data and runs it through a set of tasks in batches. Data is collected, sorted, and processed in multiple steps, and the result is typically stored, often in compressed form, for future use.

Batch processing has been used for decades to manage large volumes of data and still has many applications, but it isn't suitable for real-time scenarios where near-instantaneous results are required.

Batch processing

Enter Apache Hive Project

Apache Hive is an open-source data warehousing and analysis system built on top of the Apache Hadoop ecosystem. It provides a SQL-like interface for querying large data sets stored in Hadoop’s HDFS or other Hadoop-supported storage systems. Hive translates SQL-like queries into MapReduce jobs for execution on Hadoop clusters, allowing users to perform ad hoc analysis and aggregate queries on big data without having to write complex MapReduce code. Hive also provides tools for managing, storing, and retrieving large data sets, making it a popular choice for data warehousing and business intelligence workloads.

Apache Hive Architecture and Key Components

The architecture of Apache Hive is built around several key components that work together to enable efficient data querying and analysis. These components include the Hive driver, compiler, execution engine, and storage handler, among others.

Let’s take a closer look at each of these components and how they contribute to the overall functionality of Hive.

Apache Hive architecture

Hive Driver

The driver acts as the entry point for all Hive operations, managing the lifecycle of a HiveQL query. It is responsible for parsing, compiling, optimizing, and executing the query.

Compiler

The compiler takes the Hive queries and translates them into a series of MapReduce jobs or Tez tasks, depending on the configured execution engine. It also performs query optimization, such as predicate pushdown and join reordering, to improve performance.
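
One way to see what the compiler produces is the EXPLAIN statement, which prints the generated plan without running the query. A minimal sketch, assuming a hypothetical page_views table:

  -- Show the execution plan Hive's compiler produces for a query
  -- (the plan's stages depend on the configured execution engine).
  EXPLAIN
  SELECT country, COUNT(*) AS views
  FROM page_views
  WHERE view_date = '2023-04-01'
  GROUP BY country;

The output describes the stages Hive will run (MapReduce jobs or Tez tasks) and reflects optimizations such as predicate pushdown.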

Execution Engine

The execution engine is responsible for running the generated MapReduce jobs or Tez tasks on the Hadoop cluster. It manages data flow, resource allocation, and task scheduling.

Storage Handler

The storage handler is an interface between Hive and the underlying storage system, such as HDFS or Amazon S3. It is responsible for reading and writing data to and from the storage system in a format that Hive can understand.

Hive Clients

Hive clients provide a way for users to interact with the Hive ecosystem and submit queries. Different clients offer various interfaces and protocols to connect to Hive, ensuring compatibility with a wide range of applications and systems. Some of the most common Hive clients include:

  • Hive Thrift: The Hive Thrift client is based on the Apache Thrift framework, which is a scalable cross-language service development framework. Thrift allows Hive to communicate with clients using different programming languages, such as Java, Python, C++, and more. The Hive Thrift client provides a comprehensive API to interact with Hive and execute HiveQL queries, making it a popular choice for developers looking to integrate Hive with their applications.
  • Hive JDBC: The Hive JDBC client is a Java-based client that uses the Java Database Connectivity (JDBC) API to connect to Hive Server2. JDBC is a widely used standard for connecting Java applications to databases, and the Hive JDBC driver allows users to leverage existing JDBC-based tools and applications to interact with Hive. With the Hive JDBC client, users can submit queries, fetch results, and manage resources using familiar JDBC methods and interfaces.
  • Hive ODBC: The Hive ODBC client uses the Open Database Connectivity (ODBC) API to connect to Hive Server2. ODBC is a standard programming interface that enables applications to access data in relational databases, regardless of the specific database management system. The Hive ODBC driver allows users to connect Hive to a variety of ODBC-compliant tools, such as Microsoft Excel, Tableau, and other data analytics and visualization applications. With the Hive ODBC client, users can easily integrate Hive with their existing data analysis workflows and tools.

Hive also provides a simple user interface (UI) for viewing query results and monitoring processes.

Apache Hive Ecosystem

Key components of Apache Hive

Hive Metastore

The metastore is a centralized repository for storing metadata about the data stored in the Hadoop cluster, such as the location of data files, the schema of tables, and the structure of partitions. The metadata storage is used by Hive to keep track of the data being analyzed.
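
A quick way to see what the metastore tracks is to ask Hive to describe a table. The sketch below assumes a hypothetical partitioned page_views table already exists:

  -- Inspect the metadata the metastore keeps for a table:
  -- schema, storage location, file format, partition keys, and table properties.
  DESCRIBE FORMATTED page_views;

  -- List the partitions registered in the metastore for the same table.
  SHOW PARTITIONS page_views;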

Hive Server 2 (HS2)

Hive Server 2 is the service component of Hive that provides a Thrift API for accessing Hive from remote clients. HS2 supports multi-client concurrency and authentication. It is designed to provide better support for open API clients like JDBC and ODBC to submit HiveQL queries and retrieve results from the Hadoop cluster.

Hive CLI and Beeline

The Hive Command Line Interface (CLI) is a shell-based command line tool for interacting with Hive. It provides an easy way to execute HiveQL queries and perform other operations on the data stored in the Apache Hadoop cluster. The CLI is suitable for ad hoc queries and debugging, but it doesn't support concurrency and may not be ideal for production use cases.

Beeline Shell is an alternative to the Hive CLI that connects to Hive Server 2 using JDBC. It is a more robust and flexible option, as it supports concurrency and multi-user access. Beeline can execute HiveQL queries and manage resources just like the Hive CLI, but it is better suited for production environments due to its more advanced features and improved stability.

Hive UDFs (User-Defined Functions)

Hive UDFs are custom functions created by users to extend the capabilities of the SQL-like Hive Query Language. These functions can be used to perform complex calculations, data transformations, or other tasks that are not supported by the built-in Hive functions. UDFs can be written in Java, and once created, they can be registered with Hive and used in queries just like any other built-in function.

Hive supports three types of UDFs:

  • Scalar UDFs: These functions take one or more input values and return a single output value. Scalar UDFs can be used in SELECT, WHERE, and HAVING clauses of a query.
  • Aggregate UDFs: These functions perform calculations on a group of rows and return a single output value. Aggregate UDFs can be used in queries with GROUP BY clauses.
  • Table-Generating UDFs: These functions take one or more input values and return a table as output. Table-generating UDFs can be used in the FROM clause of a query.

By creating custom UDFs, users can extend Hive's functionality and tailor it to their specific data engineering needs, making it a highly versatile tool for data analysis and processing in the Hadoop ecosystem.
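
As a rough illustration of the workflow, the snippet below registers a custom scalar UDF packaged in a JAR and uses it in a query. The JAR path, function name, and Java class are hypothetical placeholders:

  -- Make the JAR containing the UDF implementation available to the session.
  ADD JAR /tmp/hive-udfs/masking-udf.jar;

  -- Register the Java class as a function usable in HiveQL.
  CREATE TEMPORARY FUNCTION mask_email AS 'com.example.hive.udf.MaskEmail';

  -- Use the custom scalar UDF like any built-in function.
  SELECT user_id, mask_email(email) AS masked_email
  FROM users;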

Apache Hive Data Storage System

Apache Hive is data warehouse software whose data storage system is built on top of the Hadoop Distributed File System or other compatible file systems, such as Amazon S3 or Azure Blob Storage. Hive does not have its own storage layer; instead, it leverages Hadoop's storage infrastructure to manage and store data.

When data is ingested into Hive, it is typically stored as tables, which are organized into databases. Each table consists of rows and columns, similar to a traditional relational database system. However, unlike traditional databases, Hive tables can be stored in various file formats, such as CSV, Avro, Parquet, ORC, and others. Users can choose the file format that best suits their data and query requirements, with each format offering different trade-offs in terms of storage efficiency, query performance, and compatibility.
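
The file format is chosen per table at creation time. A minimal sketch with hypothetical table names, contrasting a plain-text table with a columnar ORC table:

  -- Delimited text: easy to load and inspect, but larger and slower to scan.
  CREATE TABLE events_text (
    event_id   BIGINT,
    event_type STRING,
    payload    STRING
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  STORED AS TEXTFILE;

  -- ORC: columnar and compressed, generally much faster for analytical queries.
  CREATE TABLE events_orc (
    event_id   BIGINT,
    event_type STRING,
    payload    STRING
  )
  STORED AS ORC;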

Hive’s data storage system also supports partitioning, which is a technique used to divide a table into smaller, more manageable pieces based on the values of one or more columns. Partitioning can significantly improve query performance, as it allows Hive to read only the relevant partitions when processing a query, instead of scanning the entire table. This is particularly beneficial when dealing with large datasets.
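
As a sketch (table and column names are hypothetical), a table partitioned by date lets Hive prune irrelevant partitions whenever a query filters on the partition column:

  -- Partition the table by sale_date; each date becomes its own directory in storage.
  CREATE TABLE sales (
    order_id BIGINT,
    amount   DOUBLE
  )
  PARTITIONED BY (sale_date STRING)
  STORED AS ORC;

  -- Load one partition explicitly from a staging table.
  INSERT INTO TABLE sales PARTITION (sale_date = '2023-04-01')
  SELECT order_id, amount
  FROM staging_sales
  WHERE sale_date = '2023-04-01';

  -- Only the '2023-04-01' partition is read; the rest of the table is skipped.
  SELECT SUM(amount) FROM sales WHERE sale_date = '2023-04-01';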

Additionally, Hive allows users to create external tables, which are tables that reference data stored outside the Hive warehouse directory. External tables allow Hive to manage metadata and query data stored in other locations, such as HDFS directories or cloud storage, providing flexibility in how data is organized and stored within the Hadoop ecosystem.
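
An external table only registers metadata; dropping it leaves the underlying files untouched. A minimal sketch pointing Hive at a hypothetical HDFS directory of tab-separated log files:

  -- The data already lives at this (hypothetical) location; Hive only records metadata.
  CREATE EXTERNAL TABLE web_logs (
    ip       STRING,
    log_time STRING,
    url      STRING
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
  LOCATION '/data/raw/web_logs';

  -- DROP TABLE removes only the metastore entry; the files under /data/raw/web_logs remain.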

Apache Hive data storage system

Hive’s SQL-Like Query Language (HiveQL)

HiveQL (Hive Query Language) is a SQL-like data query language developed for Apache Hive to simplify querying and analyzing structured data stored in HDFS or other compatible storage systems. It provides users with a familiar syntax and semantics, making it easier for those with SQL experience to work with big data stored in Hadoop.
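
For example, an analyst who knows SQL can aggregate billions of rows with a query like the following sketch (table and column names are hypothetical):

  -- Join page views with user profiles and count views per country for one day.
  SELECT u.country, COUNT(*) AS views
  FROM page_views pv
  JOIN users u ON pv.user_id = u.user_id
  WHERE pv.view_date = '2023-04-01'
  GROUP BY u.country
  ORDER BY views DESC
  LIMIT 10;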

How Hive Works With Hadoop and Hadoop Distributed File System (HDFS)

Apache Hive works closely with Hadoop and the Hadoop Distributed File System to provide a powerful data warehousing and querying solution for large datasets. Let's walk through, step by step, how Hive interacts with Hadoop:

Job execution flow in Hive with Hadoop

  1. Data storage: As mentioned above, Hive does not have its own storage layer. When users create tables in Hive, the data is stored as files within HDFS directories. Users can choose from various file formats, based on their specific requirements and use cases.
  2. Metadata management: Hive maintains metadata about the tables, such as schema information, partitioning details, and data locations, in a component called the Hive Metastore. The metadata helps Hive track the structure and location of the data stored in HDFS, enabling it to process queries efficiently.
  3. Query execution: When users submit HiveQL queries, Hive translates these SQL-like queries into a series of MapReduce, Tez, or Apache Spark jobs that can be executed on the Hadoop cluster. The query execution engine processes the data stored in HDFS and returns the results to the user.
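
Which engine Hive compiles queries into is controlled by a session (or cluster-wide) setting; as a minimal sketch:

  -- Choose the execution engine for this session: mr (MapReduce), tez, or spark.
  -- Availability depends on how the cluster is set up.
  SET hive.execution.engine=tez;

  -- Subsequent queries are compiled into Tez tasks instead of MapReduce jobs.
  SELECT COUNT(*) FROM page_views;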

Key Use Cases for Hive

Apache Hive is a versatile data processing solution for a variety of big data use cases, ranging from data analysis and warehousing to ETL processing and machine learning. Its SQL-like query language and distributed processing capabilities make it an attractive choice for organizations looking to harness the power of big data. Here are some popular use cases of applying Hive in modern big data infrastructures:

  • Data analysis and reporting: Hive enables users to perform ad hoc querying, summarization, and data analysis on large datasets by applying structure to otherwise unstructured data. Its SQL-like query language makes it easy for analysts and data scientists to extract insights from big data using familiar SQL syntax, without the need to write complex MapReduce code, and it allows data science teams and non-programmers to process and analyze petabytes of data.
  • ETL (Extract, Transform, Load) processing: Hive can be used to perform ETL tasks on large datasets, such as data cleaning, transformation, aggregation, and loading into other systems or storage formats (see the sketch after this list). Its support for User-Defined Functions (UDFs) allows users to extend Hive's functionality and perform custom data processing tasks as needed.
  • Data integration: Hive’s support for external tables enables users to manage metadata and query data stored in other HDFS directories or cloud storage systems. This allows for seamless data integration and access to data stored in various locations within the Hadoop ecosystem.
  • Machine learning and data mining: Hive can be used as a preprocessing step for machine learning and data mining tasks, as it enables users to prepare and transform large datasets into suitable formats for further analysis using machine learning algorithms or data mining techniques.
  • Log and event data analysis: Hive is well-suited for analyzing log and event data generated by web servers, applications, or IoT devices. Users can store and process large volumes of log data in Hive, and perform various batch analysis tasks such as anomaly detection, trend analysis, and user behavior analysis.
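
To make the ETL use case concrete, the sketch below (with hypothetical tables) cleans and aggregates raw events into a reporting table in a single HiveQL statement:

  -- Typical ETL step: filter out bad records, normalize a field, aggregate,
  -- and overwrite the target reporting table.
  INSERT OVERWRITE TABLE daily_event_counts
  SELECT
    to_date(event_time) AS event_date,
    lower(event_type)   AS event_type,
    COUNT(*)            AS event_count
  FROM raw_events
  WHERE event_time IS NOT NULL
    AND event_type IS NOT NULL
  GROUP BY to_date(event_time), lower(event_type);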

Advantages of Using Apache Hive

Apache Hive offers several advantages for organizations dealing with large datasets and looking for an efficient data warehousing and analysis solution. Some key advantages of using Apache Hive include the following:

  • SQL-like Query Language: HiveQL enables users with SQL experience to easily query and analyze big data.
  • Scalability: Hive leverages Hadoop’s distributed processing capabilities to handle large datasets efficiently.
  • Flexibility: Hive supports various file formats, partitioning, and bucketing techniques for optimized storage and query performance (see the bucketing sketch after this list).
  • Data Integration: Hive’s external tables allow querying data stored in other HDFS directories or cloud storage systems.
  • Extensibility: Custom User-Defined Functions (UDFs) enable tailored data processing tasks to fit specific needs.
  • Ecosystem Compatibility: Hive integrates seamlessly with other big data tools and frameworks within the Hadoop ecosystem.
  • Open-source & Community Support: Apache Hive’s active community ensures continuous development, improvements, and support.

Limitations of Apache Hive

There are also some limitations and disadvantages to using Apache Hive, including the following:

  • Latency: As Hive translates queries into MapReduce, Tez, or Apache Spark jobs, it may not be suitable for real-time or low-latency data processing. Hive is more suited for batch processing and analytical workloads where query response times in the order of minutes or hours are acceptable.
  • Limited support for transactional operations: Hive's support for transactional operations like UPDATE and DELETE is limited compared to traditional relational databases. Additionally, its support for ACID (Atomicity, Consistency, Isolation, Durability) properties is relatively recent and may not be as mature as in other databases (see the sketch after this list).
  • No support for stored procedures and triggers: Unlike traditional relational databases, Hive does not support stored procedures and triggers, which can be a limitation for users who rely on these features for complex business logic and data processing tasks.
  • Resource management: Hive relies on the underlying Hadoop cluster for resource management, and it may require tuning and optimization to achieve the desired performance. Users may need to invest time and effort in configuring and managing Hadoop resources for optimal Hive performance.
  • Limited indexing: Hive does not offer extensive indexing options like traditional databases, which can lead to slower query performance in some cases. Users may need to rely on partitioning and bucketing strategies to optimize query performance.
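
Where UPDATE and DELETE are needed despite these limitations, Hive does support them on transactional (ACID) tables, provided the cluster is configured for transactions. A minimal sketch with hypothetical names:

  -- ACID operations require an ORC-backed table declared as transactional
  -- (and a metastore/cluster configured to support transactions).
  CREATE TABLE orders_acid (
    order_id BIGINT,
    status   STRING
  )
  CLUSTERED BY (order_id) INTO 8 BUCKETS
  STORED AS ORC
  TBLPROPERTIES ('transactional' = 'true');

  UPDATE orders_acid SET status = 'shipped' WHERE order_id = 1001;
  DELETE FROM orders_acid WHERE status = 'cancelled';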

Apache Hive as Part of the Big Data Architecture Stack

Apache Hive serves as an essential component in the big data architecture stack, providing data warehousing and analytics capabilities. Beyond the Hadoop and HDFS integration already covered, Hive integrates seamlessly with other Hadoop ecosystem tools such as Pig, HBase, and Spark, enabling organizations to build a comprehensive big data processing pipeline tailored to their specific needs.

Who is Using Apache Hive Project?

Apache Hive has been adopted by many organizations across various industries for their big data processing needs. Facebook, which originally developed Hive, uses it extensively for data warehousing and analytics to optimize its advertising platform and gain insights into user behavior. Other notable companies, including Netflix, Airbnb, Uber, and LinkedIn, also use Apache Hive to process and analyze large amounts of data related to user behavior, preferences, and platform optimization. These examples illustrate Hive's widespread adoption and its effectiveness in processing and analyzing big data across industries.

Apache Hive as a Fully-Managed Service

In addition to being an open-source project, Apache Hive is also available as a fully-managed service through various cloud providers, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These cloud services provide a fully-managed Hive environment, which means that users do not have to worry about configuring, provisioning, and managing the underlying infrastructure.

  • Amazon Web Services (AWS): Amazon EMR (Elastic MapReduce) is a fully-managed service that provides a Hadoop cluster environment, Apache Hive included, along with other big data processing tools. Users can quickly provision EMR clusters with the desired configuration and scale them up or down as needed.
  • Microsoft Azure: Azure HDInsight is a fully-managed cloud service that provides a Hadoop cluster environment, which includes Apache Hive, along with other big data processing tools. HDInsight also integrates with other Azure services such as Azure Data Lake Storage, Azure SQL Database, and Azure Machine Learning.
  • Google Cloud Platform (GCP): Google Cloud Dataproc is a fully-managed service that provides a Hadoop cluster environment, which includes Apache Hive, along with other big data processing tools. Dataproc also integrates with other GCP services such as BigQuery, Cloud Storage, and Cloud AI Platform.

Managed Hive services offer several advantages over running Hive on-premises or in a self-managed Hadoop cluster, including simplified deployment, reduced maintenance, and lower operational costs. Users can quickly provision Hive clusters with the desired configuration and scale them up or down as needed to handle changes in workload or data volume.

Conclusion

Apache Hive is a powerful tool for processing and analyzing big data, enabling users to execute complex data queries and run ad hoc SQL queries efficiently. With its support for compressed data storage, optimized row columnar format, and integration with other data mining tools, Hive has become a popular choice for organizations handling large datasets. Data analysts and data scientists can leverage Hive’s SQL-like query language and graphical user interfaces to extract insights and create interactive visualizations from huge datasets stored in Hadoop data nodes.

While Apache Hive is a data warehouse software well suited for analytics, it is not designed for online transaction processing (OLTP) or real-time data processing. Organizations requiring real-time data processing or OLTP should consider other tools or frameworks, such as Apache Flink or Apache Kafka.

In comparison to more traditional database and data warehouse systems, Apache Hive offers a cost-effective and scalable solution for processing big data. Its ability to work with various file formats and integrate with other Hadoop ecosystem tools makes it a versatile and powerful tool for big data processing.

To sum all this up, Apache Hive has revolutionized big data processing by enabling organizations to manage and analyze massive amounts of data efficiently and cost-effectively, making it an essential tool for any modern big data architecture. And if you need help getting started, don’t hesitate to contact our team of experts. We’d be happy to walk you through the basics and help get your Hive implementation up and running in no time!
