Welcome to the dawn of a new era in data management—an era marked by agility, domain-driven design, and an autonomous approach to data governance. Today’s digital-driven business landscape demands robust and scalable data architectures that can keep up with the fast-paced nature of modern enterprises. However, traditional data management models, marked by centralized data lakes and warehouses, often fall short, grappling with data silos, slow decision-making, and lack of ownership.
Enter Data Mesh—an innovative, decentralized data architecture that has sparked a paradigm shift in how organizations view and handle their data. A data mesh treats data as a product, empowering domain-focused teams to manage their data and leveraging a self-serve data platform to promote efficiency. In doing so, it fundamentally changes the way data consumers, data scientists, data engineers, and other stakeholders interact with data, fostering an ecosystem that is responsive, flexible, and tuned for high data quality.
In this comprehensive guide, we’ll delve deep into the world of data mesh, exploring its key principles, benefits, challenges, and real-world implementations. We’ll contrast it with other data architectures and shine a light on its impact on data teams. Whether you’re considering the leap towards a data mesh approach or are simply interested in the potential it holds for data management, this article will provide you with the insights you need to navigate the ever-evolving data landscape.
Are you ready to ride the wave of data mesh and transform your data strategy today? Let’s dive in!
TL;DR
• Traditional data architectures, such as data lakes and data warehouses, have been struggling to keep up with the demands of modern businesses due to scalability issues, data silos, and slow decision-making.
• Data Mesh emerges as a transformative approach, addressing these challenges by decentralizing data ownership and encouraging domain-driven data management. This groundbreaking approach treats data as a product and emphasizes autonomy and collaboration among data teams.
• The core principles of Data Mesh include decentralized domain ownership, data as a product, self-serve data platform, and federated computational governance. Implementing these principles allows organizations to leverage their data effectively, break data silos, and enable agile responses to business requirements.
• Unlike data lakes and data fabrics, Data Mesh is focused on decentralization, autonomy, and productization of data, offering a comprehensive solution for complex data integration scenarios.
• Implementing Data Mesh brings several benefits, such as improved data quality, increased agility, and alignment with business objectives. However, challenges remain: managing the complexity of the new architecture, selecting appropriate technologies and tools, and ensuring the necessary skills and expertise are in place.
• Companies like PayPal, Intuit, Delivery Hero, and Zalando have successfully implemented Data Mesh, showcasing the real-world benefits and potential challenges of this revolutionary approach.
• Feeling overwhelmed or unsure where to start? Don’t worry. At nexocode, our data engineers are seasoned in building data-intensive applications and are ready to guide you in your Data Mesh implementation journey.
Contact us today to start leveraging your data more effectively and driving your business towards innovation and success.
The Path Toward Mesh
The journey to data mesh begins with understanding the evolution of data architectures. Traditional data management separated the operational and analytical data planes, resulting in complex and fragile systems that hampered decision-making speed and quality. Centralized data warehouses and data lakes emerged as solutions, but they also brought challenges such as data silos, slow decision-making, and a lack of ownership in monolithic structures.
Operational Data and Analytical Data Planes
The dichotomy of operational and analytical data planes has long been a cornerstone in traditional data management. Operational data powers the enterprise through transactional activities, whereas analytical data offers valuable insights into the past and future, aiding crucial decision-making processes. However, this separation can lead to intricate and slow operations, putting a strain on efficient data management.
Data Warehouse Architecture
Data warehouse architecture emerged as a concept in the late 1980s and early 1990s when organizations recognized the need to store and analyze large amounts of data for business intelligence purposes. Bill Inmon, known as the “father of data warehousing”, defined the term as a collection of data that is subject-oriented, integrated, non-volatile, and time-variant for the purpose of decision support. Throughout the 1990s and early 2000s, data warehouses were the mainstay of many organizations’ data strategies.
Data warehouse architecture is a system specifically designed for data analysis and reporting. It works as a centralized repository where data coming from different sources is cleaned, transformed, and stored. While this model provides a consistent and unified view of data to support reporting and analytical requirements, it’s not without its limitations.
Over the years, as the amount of data has grown exponentially, data warehouse architectures have struggled with scalability. Scaling a data warehouse to accommodate more data often requires substantial investment in hardware and software resources. Moreover, it’s a time-consuming process that requires careful planning and execution.
In addition, traditional data warehouses are designed to handle structured data, leaving them less capable of dealing with the influx of unstructured data that modern businesses often have to process. This limitation means that they may not be able to fully leverage the potential insights that could be derived from diverse data sources like social media feeds, IoT devices, or multimedia content. The rigidity of data warehouse architectures can make them slow to adapt to new data types and sources, further limiting their usefulness.
Data Lake Architecture
Data lake architecture is a more flexible, yet still centralized, approach to storing and managing data that allows organizations to store all types of structured and unstructured data at any scale.
The term “Data Lake” was first coined by James Dixon in the late 2000s. Dixon compared a data mart (a subset of a data warehouse) to a bottle of water “cleansed, packaged and structured for easy consumption” while a data lake is more like a body of water in its natural state. Data lakes have gained popularity since the mid-2010s with the rise of big data and machine learning, as they could store massive amounts of raw data in its native format, including structured, semi-structured, and unstructured data.
Data lakes are designed to provide maximum data flexibility, adaptability, and scalability. They are capable of storing vast amounts of data and allow users to access the data in its raw form, providing the flexibility for diverse methods of data analysis.
However, data lakes are not without their challenges. The very flexibility that makes them appealing can lead to issues like “data swamps”. Without proper metadata management and data governance, data lakes can quickly become disorganized, making it challenging for users to find the data they need.
Furthermore, the onus is often on the end-user to clean and prepare the data for their specific use case. This process can be time-consuming and requires a certain level of expertise. Moreover, data lakes require robust data governance strategies to ensure data accuracy and compliance with data privacy regulations.
Finally, there is often a lack of synchronization between data producers and consumers, which can cause delays and miscommunications. As data volumes grow, organizations can struggle to keep up, causing backlogs and bottlenecks that hinder efficiency and productivity.
Challenges of Centralized Data Architectures
Centralized data architectures, like data warehouses and data lakes, brought many benefits to organizations, but they also introduced several challenges. As data volumes, velocity, and variety grew exponentially with the digital revolution, centralized systems struggled to keep up.
Firstly, data silos became a persistent issue. Although these architectures were meant to unify data, they often ended up separating data across different departments or business units, causing a lack of cohesion in data analysis and decision-making.
Secondly, these architectures typically required substantial time and resources to maintain. ETL (Extract, Transform, Load) processes needed to clean and format data were complex and time-consuming. As data increased in volume and complexity, these processes often became bottlenecks.
Thirdly, there was a clear lack of ownership. With data being centralized, it was often unclear who was responsible for the quality, accuracy, and timeliness of data. This issue often led to data governance problems and a lower quality of data overall.
Finally, these architectures often resulted in slow decision-making. By the time data was extracted, transformed, loaded, analyzed, and finally reported, the information was often outdated, reducing its value for strategic decision-making.
Overall, while centralized data architectures have played a critical role in business intelligence, their limitations have necessitated new ways of thinking about data management.
From Centralized to Decentralized: Enter Data Mesh
As the challenges of centralized data architectures became increasingly apparent, a new paradigm began to take shape—data mesh. Born from the need for a more flexible, responsive, and efficient data management system, data mesh is a revolutionary approach that rethinks how data is treated in organizations.
What is Data Mesh?
The data mesh moves away from centralized systems and instead advocates for a decentralized, domain-driven design. It empowers individual business units or domain teams to take control of their data as independent “data products”. This shift fundamentally alters the data landscape, eliminating data silos and fostering ownership and accountability.
Unlike traditional architectures, data mesh recognizes the diversity of data sources and treats data not just as a byproduct of operations but as a valuable asset in its own right. It also encourages a shift in mindset: from data as a project, which has a finite life cycle, to data as a product which is continuously managed and improved.
This distributed data architecture does not mean a return to data chaos, however. Data mesh provides principles and constructs to ensure that while data ownership is distributed, data governance is also federated, allowing for a consistent, compliant, and trusted approach to data management.
So, how does data mesh work? Let’s explore the core principles that underpin this transformative approach.
Data mesh is founded on four core principles that enable organizations to break free from the constraints of centralized data management and embrace a more agile, domain-driven approach to data management. These principles, decentralized domain ownership, data as a product, self-serve data platform, and federated computational governance, empower domain teams to take responsibility for their data, ensuring data quality, ownership, and accountability throughout the organization.
By understanding and applying these principles, organizations can harness the full potential of data mesh and transform their data strategy.
Decentralized Domain Ownership
Decentralized domain ownership is a core principle of data mesh that empowers domain teams to take responsibility for their data, breaking down data silos and improving data quality. By assigning ownership to domain teams, organizations can ensure that data is managed by those with the most domain knowledge and expertise, resulting in more accurate and reliable data products.
This decentralized approach fosters collaboration and autonomy among domain teams, allowing them to respond quickly to changing business requirements and drive innovation.
Data as a Product
Treating data as a product is another fundamental principle of data mesh, focusing on the discoverability, accessibility, and trustworthiness of data assets. By viewing data as a valuable asset with its own lifecycle and governance structure, organizations can ensure a delightful user experience for data consumers and drive value from their data.
Self-Serve Data Platform
The self-serve data platform is a critical component of data mesh, enabling domain teams to efficiently manage their data products and promote collaboration and autonomy within the organization. By providing a user-friendly interface and hiding technical complexity, the self-serve data platform allows domain teams to focus on their core business objectives while still maintaining control over their data.
This approach reduces the reliance on central data teams and fosters a more agile, responsive data management system that can quickly adapt to changing business needs.
Federated Computational Governance
Federated computational governance is the final principle of data mesh, balancing the autonomy of domain teams with the need for standardization and governance across the organization. By adopting a federated governance model, organizations can prevent duplication of effort and maintain data quality across domains while allowing teams to work independently and respond to changing business requirements.
This approach ensures that data is managed consistently and remains compliant, promoting trust and reliability in the organization’s data assets.
Data mesh is built on a set of essential components that work together to enable a decentralized, domain-driven data management system.
Data Contract
A data contract is a formal agreement between data producers and consumers that defines data product structure, format, and quality. By establishing clear expectations and guidelines for data exchange, data contracts promote stability, trust, and quality assurance within the data mesh ecosystem.
Data contracts also play a crucial role in facilitating data flow across data products, ensuring seamless integration and interoperability among different data sources and systems.
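To make this concrete, here is a minimal sketch of what a data contract check might look like in code. The product name, schema fields, and owner are invented for this example; real implementations typically rely on schema registries or contract-testing tools rather than hand-rolled validation.

```python
# An illustrative data contract for a hypothetical "orders" data product.
# Field names and structure are assumptions for this sketch, not a standard.
ORDERS_CONTRACT = {
    "product": "orders",
    "owner": "checkout-domain-team",
    "schema": {"order_id": str, "customer_id": str, "total_eur": float},
}

def contract_violations(record, contract):
    """Return a list of schema violations for one record; empty means valid."""
    violations = []
    for field, expected in contract["schema"].items():
        if field not in record:
            violations.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            violations.append(f"{field}: expected {expected.__name__}")
    return violations
```

A producer can run such checks before publishing, and a consumer can run them on arrival, so both sides share one explicit definition of what "valid" means.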
Data Product Catalog
One of the essential components of a data mesh is the Data Product Catalog. As data is now treated as a product, it becomes crucial to have a comprehensive catalog that provides a single source of truth about the data products available within an organization.
The data product catalog includes information such as the description of the data product, the data product owner, the source of the data, the format of the data, SLA, and the methods to access it. It aids data discoverability and allows users to find the data products they need easily, contributing to more efficient and effective use of data across the organization.
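As an illustration, a pair of catalog entries and a simple discovery lookup could look like the sketch below. The entries mirror the fields listed above (owner, source, format, SLA, access method), but the products, teams, and storage paths are all hypothetical.

```python
# Two invented catalog entries for illustration only.
CATALOG = [
    {
        "name": "orders",
        "description": "Completed customer orders, one row per order",
        "owner": "checkout-domain-team",
        "source": "checkout-service database",
        "format": "parquet",
        "sla": "refreshed hourly",
        "access": "s3://data-products/orders/",  # hypothetical path
    },
    {
        "name": "shipments",
        "description": "Parcel tracking events from carriers",
        "owner": "logistics-domain-team",
        "source": "carrier webhooks",
        "format": "json",
        "sla": "near real-time",
        "access": "s3://data-products/shipments/",  # hypothetical path
    },
]

def discover(catalog, keyword):
    """Return names of data products whose name or description match a keyword."""
    kw = keyword.lower()
    return [entry["name"] for entry in catalog
            if kw in entry["name"].lower() or kw in entry["description"].lower()]
```

In practice this role is played by dedicated data catalog tooling, but the principle is the same: one searchable source of truth about what data products exist and how to reach them.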
Change Data Capture
Change data capture (CDC) is a technique for tracking changes in source data and applying them to the data mesh, ensuring data consistency and freshness. By continuously monitoring and capturing modifications to data in a database, CDC enables real-time or near-real-time data integration, providing up-to-date information for data consumers and supporting timely decision-making.
Implementing CDC in a data mesh architecture ensures organizations can maintain an accurate, current view of their data, driving better insights and more informed decisions.
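The core mechanic of CDC can be sketched as a stream of change events replayed onto a downstream copy. The event shape used here (`op`, `id`, `row`) is an assumption for illustration; real CDC tools emit richer envelopes with before/after images and source metadata.

```python
def apply_change_events(view, events):
    """Replay CDC events onto a downstream materialized view (a dict keyed by id)."""
    for event in events:
        if event["op"] == "delete":
            view.pop(event["id"], None)
        else:  # "insert" and "update" both carry the full new row state
            view[event["id"]] = event["row"]
    return view
```

Because each event describes one change, consumers stay current by applying events as they arrive instead of re-copying the whole source table.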
Data Transformations
Data transformations involve converting raw data into a format suitable for analysis and consumption by downstream applications and users. Data meshes usually require domain teams to cover preprocessing of raw data into events and entities, representing business objects with changing states over time.
Data transformations are essential for ensuring that data is structured, organized, and accessible for analysis, enabling organizations to derive valuable insights and make informed decisions based on their data. Domain data engineers play a crucial role in this process.
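The events-to-entities step described above can be sketched as a fold: reduce a time-ordered event stream into each business object's current state. The event fields (`entity_id`, `ts`, `attributes`) are assumptions chosen for this example.

```python
def events_to_entities(events):
    """Fold raw events into each entity's latest state, ordered by timestamp."""
    entities = {}
    for event in sorted(events, key=lambda e: e["ts"]):
        state = entities.setdefault(event["entity_id"], {})
        state.update(event["attributes"])  # later events override earlier values
    return entities
```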
Data Cleansing
Data cleansing ensures that only high-quality data enters the mesh by identifying and correcting errors, inconsistencies, and inaccuracies in data products. In a data mesh architecture, just as with data transformations, domain teams are responsible for carrying out data cleansing, ensuring that their data products meet the organization’s quality standards.
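A cleansing step might look like the sketch below: normalize values, reject invalid records, and drop duplicates. The email-based rules here are an invented example of one domain team's quality standard, not a general recipe.

```python
def cleanse_customers(records):
    """Normalize emails, drop records with invalid emails, and de-duplicate."""
    seen, clean = set(), []
    for record in records:
        email = (record.get("email") or "").strip().lower()
        if "@" not in email or email in seen:
            continue  # reject invalid or duplicate records
        seen.add(email)
        clean.append({**record, "email": email})
    return clean
```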
Data Ingestion
Data ingestion is the process of collecting, importing, and processing data from various sources into the data mesh. Data mesh provides a flexible, scalable framework for data integration and analysis by enabling domain teams to ingest and manage their data.
The data mesh platform team supports domain teams in building data ingestion pipelines by providing dedicated connectors, best practices, and streamlined tools and utilities. These components aid in transforming raw data into a structured and useful format, allowing it to be easily queried and analyzed. Furthermore, the platform team ensures reliable, secure, and efficient data ingestion, helping domain teams maintain high data quality, meet compliance requirements, and reduce time-to-insight. This collaborative approach to data ingestion within a data mesh fosters a culture of data ownership and empowers domain teams to create high-value data products tailored to their specific needs.
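In code, the division of labor between platform and domain teams might reduce to a sketch like this: the platform provides a generic ingestion loop, while the domain supplies a parser that knows its own data. The function names and the CSV record format are assumptions for illustration.

```python
def ingest(source_lines, parse, sink):
    """Platform-provided loop: parse each raw record and load valid rows into sink."""
    loaded = 0
    for line in source_lines:
        row = parse(line)
        if row is not None:  # domain-supplied parsers may reject bad records
            sink.append(row)
            loaded += 1
    return loaded

def parse_order_csv(line):
    """Domain-specific parser: 'order_id,total' -> dict, or None if malformed."""
    parts = line.strip().split(",")
    if len(parts) != 2:
        return None
    try:
        return {"order_id": parts[0], "total": float(parts[1])}
    except ValueError:
        return None
```

The platform owns the plumbing and its guarantees; the domain owns the meaning of the data passing through it.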
Data Mesh Platform Backbone
Underpinning the distributed architecture of a data mesh, the data mesh platform backbone serves as the foundational infrastructure component that facilitates efficient data exchange and communication between disparate domain teams. It is an essential part of a data mesh, given the diversity and distributed nature of data sources that it encompasses.
Often implemented using event-streaming technologies like Apache Kafka, the data mesh platform backbone is designed to capture, disseminate, and manage data changes across the mesh in real time. This real-time data exchange enables organizations to react promptly to business events, make faster, data-informed decisions, and glean timely insights—vital attributes in today’s fast-paced, data-centric business environments.
By leveraging a data mesh platform backbone, organizations can ensure that their data mesh transcends the notion of a mere collection of disjointed data products. Instead, it evolves into a dynamic, interconnected ecosystem where data flows smoothly and timely, delivering maximum value. This backbone lends robustness and agility to the data mesh, making it highly responsive and empowering it to support innovative, data-driven initiatives effectively.
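The publish/subscribe pattern at the heart of such a backbone can be sketched in a few lines. This in-memory toy is a stand-in for real event-streaming infrastructure like Kafka, where topics are durable, partitioned, replicated logs rather than Python lists; the topic name and event shape are invented for the example.

```python
from collections import defaultdict

class EventBackbone:
    """Toy in-memory stand-in for an event-streaming backbone (e.g. Kafka)."""

    def __init__(self):
        self._log = defaultdict(list)          # append-only log per topic
        self._subscribers = defaultdict(list)  # handlers per topic

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        self._log[topic].append(event)         # retained for later replay
        for handler in self._subscribers[topic]:
            handler(event)

# One domain publishes; a consuming domain reacts to the event.
backbone = EventBackbone()
received = []
backbone.subscribe("orders.events", received.append)
backbone.publish("orders.events", {"order_id": "o1", "status": "placed"})
```

The key property this models is decoupling: producers publish to a topic without knowing who consumes it, so new data products can subscribe without changing the producer.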
Data mesh’s transformative effect on data architecture doesn’t just bring about technological changes. It also has significant implications for the roles and responsibilities of data teams within an organization. By creating a structure where every team has clear accountability and autonomy over their domain, data mesh allows for a more streamlined and efficient workflow. Here’s how the responsibilities are generally divided among different teams in a data mesh architecture:
Domain Teams
Domain teams are the lifeblood of the data mesh architecture as they support multiple data products limited to the boundaries of their domain. They are cross-functional teams that combine domain-specific knowledge with data product management expertise. Their main responsibilities include:
Data Product Ownership: Domain teams are accountable for the design, development, operation, and quality of their data products. Domain-oriented data ownership means that they are autonomous data owners who create data products within a particular domain.
Data Management: They are responsible for the data life cycle, including collection, storage, cleansing, and updates.
Data Security: The team ensures compliance with data security and privacy regulations.
User Support: Domain teams are also responsible for providing support to the consumers of their data products, responding to inquiries, feedback, and issues as they arise.
Self-Serve Data Platform Team
The self-serve data platform team provides the infrastructure that allows domain teams to operate autonomously. They don’t manage the data themselves but offer services and tools that support domain teams in their tasks. Their responsibilities include:
Platform Development: Building and maintaining a self-serve platform that domain teams can use to manage their data products.
Tool Provision: Providing necessary tools and technologies that domain teams need for data ingestion, processing, storage, and analysis.
Technical Support: Offering technical support and expertise to domain teams as they develop and manage their data products.
Access Management: The team is responsible for managing access to the platform and its resources, ensuring the right users have the right permissions and that sensitive data is appropriately protected.
Policy Automation: Creating and implementing automated policies for data usage and access within the platform, facilitating efficient operations and enhanced compliance that comes from the data governance team.
Data Product Monitoring/Alerting/Logging: Implementing and maintaining a system for monitoring the performance of data products, alerting relevant parties to issues, and logging events for future analysis and troubleshooting. This system helps maintain the health of the data mesh and its products, ensuring that potential issues can be detected and addressed promptly.
Governance Team
The governance team oversees the use of data across the entire organization, ensuring data quality and compliance. Their primary responsibilities include:
Data Quality Assurance: Implementing and enforcing data quality standards across all domain teams.
Regulatory Compliance: Ensuring the organization meets all data-related regulatory requirements.
Policy Development: Developing and enforcing policies around data usage, security, compliance, and privacy.
Enabling Team
The enabling team serves as a bridge between the domain teams, the platform team, and the governance team. They’re focused on facilitating the successful implementation of the data mesh architecture. Their main responsibilities include:
Training and Education: Offering training and educational resources to ensure all teams have the necessary skills to work within the data mesh architecture.
Best Practices Guidance: Providing guidance on best practices for data management, based on industry trends and the specific needs of the organization.
Cross-Functional Collaboration: Facilitating collaboration between different teams to ensure they’re working towards a common goal.
Data Literacy Promotion: Helping to build a data-driven culture by promoting data literacy across the organization.
At the same time, data mesh allows data teams to respond quickly to changing business requirements, ensuring that their data-driven insights and decision-making are always up-to-date and relevant.
To fully understand the value of data mesh, it’s important to contrast it with other data architectures, such as data lakes and data fabrics. While data lakes are centralized repositories for storing raw data, data fabrics provide a combination of technological capabilities that work together to offer an interface for data consumers.
Data Mesh vs. Data Lake
Data mesh and data lake architectures differ in several key aspects. While data lakes centralize data storage and management, data mesh emphasizes decentralization and domain-driven design, providing greater flexibility and scalability.
Moreover, data mesh enables domain teams to take ownership of their data, fostering collaboration and autonomy and reducing the need for central data teams. By adopting data mesh, organizations can break free from the constraints of data lake architectures and better address their data needs and challenges.
Data Mesh vs. Data Fabric
Data mesh and data fabric architectures also have distinct differences. While data fabric concentrates on technology and provides data access to domains, data mesh is focused on decentralization, autonomy, and productization of data.
Data fabric may be sufficient for simple data integration cases. Still, for more complex scenarios or when business knowledge needs to be integrated into the data, data mesh offers a more comprehensive solution, enabling organizations to achieve greater agility, responsiveness, and innovation in their data management efforts.
Implementing Data Mesh: Benefits and Challenges
The decision to implement data mesh comes with its own benefits and challenges, which organizations must carefully consider before embarking on this journey.
Benefits of Data Mesh
By adopting data mesh, organizations can reap a range of benefits: enhanced data quality, increased agility, improved alignment with business objectives, and a decentralized approach to data management. Business users and data scientists (data consumers) can access and analyze data from any source without requiring the involvement of expert data teams.
The data mesh approach can also help organizations overcome the limitations of traditional data architectures, enabling them to better manage and leverage their data assets for improved decision-making and innovation.
Challenges in Adopting Data Mesh
While the benefits of data mesh are compelling, organizations must also be prepared to navigate the potential challenges associated with its implementation. These challenges include managing the complexity of the data mesh architecture, selecting the appropriate technologies and tools, and ensuring the necessary skills and expertise are in place to manage the new data landscape.
Additionally, organizations must strike a balance between decentralization and governance, ensuring that data remains secure, compliant, and of high quality while promoting autonomy and collaboration among data teams.
Successful Data Mesh Implementations: Case Studies
Real-world examples of successful data mesh implementations can offer valuable insights and guidance for organizations considering the adoption of this revolutionary approach to data management. Companies such as PayPal, Intuit, Delivery Hero, and Zalando have successfully implemented data mesh, demonstrating the potential benefits and challenges of this decentralized, domain-driven approach.
PayPal Data Mesh Case Study
PayPal’s data mesh implementation showcases the power of this decentralized approach in improving data quality and enabling faster decision-making. By adopting data mesh, PayPal overcame the limitations of their centralized data management system and provided a more agile, responsive, and efficient data platform for their users.
The success of PayPal’s data mesh implementation demonstrates the value of this innovative approach to data management and highlights the potential benefits it can bring to organizations seeking to drive value from their data.
Intuit Data Mesh Case Study
Intuit’s journey to data mesh began with realizing that their existing centralized data architecture was insufficient to meet their growing data needs. After conducting research and testing, they decided to implement a data mesh architecture, empowering their data workers and enabling more precise and timely insights for their customers.
Intuit’s data mesh implementation highlights this approach’s potential advantages and challenges and offers valuable lessons for organizations considering a similar transition.
Delivery Hero Data Mesh Case Study
Faced with the challenges of managing complex data sources in a rapidly growing and globally distributed environment, Delivery Hero, a leading online food delivery platform, adopted the data mesh approach. The implementation decentralized data ownership, with domain teams responsible for their specific data products. The self-serve data platform team provided the necessary tooling and infrastructure for data ingestion, processing, storage, and analysis, while the governance team ensured compliance with data-related regulations.
This transition to a data mesh architecture allowed Delivery Hero to break down data silos, enhance data accessibility and quality, and enable real-time analytics. The transformation improved overall data efficiency and supported the company’s continuous growth and complex data needs, showcasing the potential benefits and applicability of this paradigm shift in data architecture.
Zalando Data Mesh Case Study
Zalando, a leading European fashion platform, successfully implemented data mesh to address their data challenges and improve data-driven decision-making across the organization. By adopting data mesh, Zalando was able to decentralize data ownership, improve data quality, and empower domain teams to manage their data resources more effectively.
This case study highlights the benefits of data mesh implementation and offers valuable insights for organizations considering a similar transition to this innovative data management approach.
The Enterprise Data Strategy and the Role of Data Mesh
The landscape of data architecture is shifting. Traditional, centralized data management structures, while powerful, have often created a maze of inefficiencies and bottlenecks that slow down decision-making and innovation. It’s clear that a new approach is needed, one that scales with the growing demands and complexity of modern business. This is where data mesh comes into play.
The data mesh paradigm is a revolutionary step in the evolution of data architecture, promising to overcome the challenges of centralized models. By decentralizing data ownership and operations, data mesh creates an environment where domain teams are empowered to manage their data as independent data products. This paves the way for more efficient, effective data usage across the organization.
However, implementing a data mesh isn’t just a matter of restructuring teams or redefining roles. It’s a cultural shift, a new way of thinking about data as a product, and it requires thorough planning and technical expertise to execute effectively.
At nexocode, we understand the power and potential of data mesh. Our data engineers have extensive experience in building data-intensive applications and can guide you through every step of the data mesh implementation process. From planning your transition to executing the technical intricacies of your new data architecture, our team is here to ensure you harness the full benefits of data mesh.
A data mesh is a decentralized, domain-oriented approach to data architecture and organizational design. It involves distributing responsibilities for data across multiple teams in an organization, breaking down data silos and enabling more efficient, scalable data management.
Data mesh revolves around four key principles: domain-oriented decentralized data ownership and architecture, data as a product, self-serve data infrastructure as a platform, and federated computational governance. These principles aim to address the limitations of centralized data management and promote a more efficient, scalable approach to data architecture.
Benefits of a data mesh include improved scalability, the elimination of data silos, enhanced data discovery and quality, and the empowerment of domain teams to act autonomously while maintaining a consistent approach to data management across the organization.
In a data mesh, a data product is a cohesive, full lifecycle management of data that is treated as a product with a product owner. These products are owned and operated by domain teams, and their primary responsibility is to ensure the data product's discoverability, trustworthiness, ease of use, and secure access.
The self-serve data platform team in a data mesh provides a platform that enables domain teams to manage their data products effectively. The team manages data infrastructure, storage, processing capabilities, data cataloging, access management, policy automation, and monitoring, alerting, and logging tools.
Companies like Delivery Hero and Zalando have successfully implemented data mesh architectures. For instance, Delivery Hero used the data mesh approach to break down data silos, enhance data accessibility and quality, and enable real-time analytics across their globally distributed environment.
Traditional data architectures like data warehouses and data lakes centralize data into a single system managed by a centralized team. In contrast, a data mesh distributes data ownership across domain-specific teams, promoting more efficient data use and management, and helping organizations better leverage their data in a scalable and resilient manner.
Implementing a data mesh in your organization involves a significant shift in both technical architecture and organizational structure. It requires careful planning and a deep understanding of the data mesh principles. The complexity of this process can often benefit from expert guidance. nexocode's team of data architects has extensive experience in building modern data architectures at scale and can help you navigate the process of data mesh implementation. [Contact us to learn more about how we can support your data mesh journey](https://nexocode.com/contact/ "Contact Us").
With over ten years of professional experience in designing and developing software, Dorota is quick to recognize the best ways to serve users and stakeholders, shaping product strategies and ensuring their execution in close collaboration with engineering and design teams.
She acts as a Product Leader, covering the ongoing AI agile development processes and operationalizing AI throughout the business.