Don't try to boil the ocean on day one: Anand Medepalli from Shippeo on the Complexities of Data in Supply Chain

Jarek Jarzębowski - November 14, 2024

The past few years have taught us that it’s hard to predict anything: a period of relative stability can be abruptly interrupted by a single event. And then, it often turns out that, despite appearances, we aren’t really prepared for every possibility. In the supply chain, even a small change can disrupt the delicate balance and trigger a cascade of economic and social consequences. Every delay, every stop, every slight route change matters.

Anand Medepalli helps companies predict these events and their impacts, drawing key business insights from data across the entire supply chain. In a conversation with Jarek Jarzębowski, he highlights how companies in this sector have changed their approach to data in recent years, partly due to the development of widely accessible tools. As he emphasizes, companies can now focus on the data itself rather than on managing it. This shift brings them a step closer to significant process optimization. Dive into Anand’s data-driven world and benefit from his expertise.

Key Takeaways from the Conversation

More Accessible Models, More Focus on Data Itself: The AI landscape has transformed significantly, with foundational models created by tech giants like Google and Microsoft becoming widely accessible. This shift has reduced the need for individual businesses to develop and maintain their own AI models, allowing them to focus on leveraging these advanced tools rather than building them from scratch.

Data Quality Over Data Quantity: The emphasis in data management has moved from merely handling large volumes of data to ensuring the quality and relevance of that data. Companies are now prioritizing the accuracy, consistency, and applicability of data to meet specific business needs, which is critical for deriving meaningful insights and making informed decisions.

AI is Not an Ultimate Answer: Not all business problems require AI solutions, and in some cases, AI can be overly complex or unnecessary. Businesses need to carefully assess when and where AI can add value, avoiding the temptation to apply it indiscriminately. Strategic deployment of AI ensures that resources are used efficiently and that the technology serves its intended purpose.

The Importance of Learning: AI systems need to continuously learn and adapt, much like human learning processes. Just as a child learns not to repeat mistakes, AI systems must evolve by learning from data and experiences over time. This ongoing learning capability is essential for AI to effectively handle disruptions and improve performance.

Statistics and Machine Learning Go Well Together: Machine learning models are paired with statistical models that rely on historical data: the statistical model provides a baseline forecast, and machine learning updates it as real-time data arrives. This way, companies can predict shipment ETAs by adjusting for variables like vehicle type, journey length, and unexpected events such as border delays. These models adapt continuously in real time, providing flexible forecasting even in unpredictable conditions.

Supply Matters as Much as Demand: The pandemic exposed critical weaknesses in global supply chains, particularly on the supply side, which had been previously overlooked. This has led to a new understanding that supply chains must be resilient to frequent and varied risks. Supply chain officers now need to be agile, reacting quickly to unexpected disruptions to maintain operational stability.

Conversation with Anand Medepalli

Jarek Jarzębowski: Let’s start with a little bit of background. Can you share more about yourself? What are you doing professionally, and what are you working on right now?

Anand Medepalli: I’m Anand Medepalli, a Chief Product Officer at Shippeo. I’ve been in supply chain for about 15 to 20 years. I’m a mathematician by training. At Shippeo, I focus on product strategy, which informs our global market strategy, blending real-time supply chain visibility, big data, machine learning, and AI to meet our customers’ needs.

We are in the supply chain business, specifically in real-time transportation visibility. A lot of our customers operate close to just-in-time, especially in automotive, where a plant could stay idle if a shipment part doesn’t arrive on time. We inform them in advance so they can make alternative arrangements. 

We track where the shipment is and predict when it will reach its next milestone, whether that’s a port or a warehouse. The data we collect comes from GPS devices, IoT devices, low-orbit satellites, and transportation management systems (TMSs). We contextualize it all and use machine learning to predict the shipment’s journey and its arrival time.

Jarek Jarzębowski: I’ve seen that you’ve been involved in data science, AI, and machine learning for some time now. Can you also tell us briefly what you did in AI and machine learning before that?

Anand Medepalli: It’s actually a funny story. My gray beard will tell you I’ve been around for a long time. During my PhD in mathematics, my focus was on combinatorial optimization, and my thesis advisor was a professor in computer science. He encouraged me to take computer science courses, and I ended up doing enough to qualify for a master’s degree in computer science, though I never formally completed it. 

My advisor suggested I work with another professor who was researching neural networks. This was in 1991 or 1992, long before “machine learning” became a buzzword. Back then, I wasn’t thrilled because it required a lot of data, which went against everything I learned in mathematics about working with small sample sizes. Also, in computer science, the focus was on designing efficient algorithms, but neural networks seemed to require a lot of computation for a single outcome. So, I moved on.

Fast-forward to today, and I’m deep into machine learning and neural networks. My real immersion into machine learning happened around 2015 at JDA (now Blue Yonder), when I was asked to lead an effort in retail planning to see how machine learning could be applied. Later, I worked at a company focused on big data, and then I joined Element AI in Montreal, founded by some of my old colleagues and the Turing Award winner Yoshua Bengio. There, I was the head of product and helped bring five AI products to market.

During my time at Element AI, I realized it was less about algorithms and more about data. We struggled because we didn’t have enough data. Now, algorithms are widely available, but data is the key differentiator. That’s what attracted me to Shippeo, a company that collects and contextualizes data, making it useful with the help of machine learning and AI.

While AI is fascinating, its practical applications are often about automating manual processes, summarizing documents, or finding patterns in data—tasks that have always existed but are now more accessible.

Jarek Jarzębowski: Apart from this shift in importance from algorithms to data, what else has changed in recent years? When you first looked at this field, you didn’t see real-world applications, but now you do. What has changed over the last 10 years, and what is the current landscape? Apart from big things like ChatGPT and OpenAI, what’s really going on in the data science field that you find most interesting now?

Anand Medepalli: Well, going back to AI, these foundational algorithmic advances are super important. What has fundamentally changed is that back in the day, nobody was building scalable models that could be used across multiple applications. 

At Element AI, we were applying our models in various industries like pharmaceuticals, manufacturing, and retail. Now, companies like Google, Microsoft, and OpenAI have built these foundational models that are widely accessible. B2B software companies like us used to have to build our own AI models, but the biggest challenge was keeping them up to date in a rapidly evolving field. The driving factor for AI algorithms has always been compute power, but with cloud computing, that’s no longer a limitation. Companies like Microsoft, Amazon, and Google have made access to compute power at a reasonable cost very easy.

The second major change is in data management. Tools like Snowflake have simplified data management, making it easier to feed algorithms. Previously, you had to write your own ETL scripts and manage data pipelines manually. Now, automated data pipelines are available, and you can build once and reuse many times. So, the focus for B2B companies has shifted to the actual data itself—its quality and how it’s used. Managing and maintaining a quality data set to meet business needs is now the most critical task.
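
To make the “build once, reuse many times” idea concrete, here is a minimal sketch of composable pipeline steps (purely illustrative; a production stack would lean on managed tooling such as Snowflake or a workflow orchestrator rather than hand-rolled code):

```python
from typing import Callable, Iterable

def pipeline(*steps: Callable[[Iterable[dict]], Iterable[dict]]):
    """Compose reusable transformation steps into a single callable."""
    def run(records: Iterable[dict]) -> list[dict]:
        for step in steps:
            records = step(records)
        return list(records)
    return run

# Two reusable steps, written once and shared by every feed.
drop_nulls = lambda rows: (r for r in rows if all(v is not None for v in r.values()))
normalize_carrier = lambda rows: ({**r, "carrier": r["carrier"].strip().lower()} for r in rows)

ingest_shipments = pipeline(drop_nulls, normalize_carrier)
print(ingest_shipments([{"carrier": " DHL ", "eta_h": 42}, {"carrier": None, "eta_h": 7}]))
# -> [{'carrier': 'dhl', 'eta_h': 42}]
```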

This shift has freed up resources. As a Chief Product Officer, I don’t have to worry about foundational aspects like algorithms and compute power; I can use those platforms and focus on data quality. About 20% of my team’s capacity is dedicated to data quality management because, despite our familiarity with the data, there are always surprises. We spend a lot of time ensuring data quality because it’s essential for understanding what the data is trying to tell us. For example, if a particular data value is missing, we need to determine whether it’s because there were no sales that day (which would be correct) or if it’s a data entry error. That kind of attention to detail is crucial.
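
As a toy illustration of that kind of check (the field names and the store-opening rule below are invented, not Shippeo’s actual logic), a quality rule might separate a feed gap from a legitimate zero:

```python
from datetime import date

# Hypothetical daily sales records; None marks a value that never arrived,
# while 0 is a legitimate "no sales that day".
daily_sales = {
    date(2024, 11, 1): 120,
    date(2024, 11, 2): 0,      # store open, nothing sold: a valid zero
    date(2024, 11, 3): None,   # feed gap: needs investigation
}

def classify_value(day, value, store_was_open):
    """Flag feed gaps separately from legitimate zeros."""
    if value is None:
        return "missing: possible data-entry or feed error"
    if value == 0 and store_was_open(day):
        return "valid zero: store open, no sales"
    return "ok"

store_open = lambda day: day.weekday() < 6  # assume stores close on Sundays

for day, value in daily_sales.items():
    print(day, "->", classify_value(day, value, store_open))
```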

In data science, therefore, we spend a lot of time understanding the business context and the business value. Don’t get me wrong. We bring our data science PhD knowledge or master’s knowledge to bear because we still need to look at how to tweak the algorithm, what the configuration of the algorithm should be. But that used to consume most of our time; now it consumes much less time. Now, we focus on understanding the data to make our data science useful.

You’ve heard the famous saying, “garbage in, garbage out.” I was guilty of that. I would tell my bosses, “Guys, my algorithms are great. I’m sorry, you’re not bringing me the right data.” As a Chief Product Officer, I don’t say that anymore. Instead, I say data is my top priority. I need to make sure it comes in properly so that I get proper outcomes.

Jarek Jarzębowski: You’ve mentioned that Shippeo is a data company, and that you’re using multiple ways of gathering data like GPS, IoT devices, and so on. What are you doing when you get the data? What AI technologies are you using to show the value of the data? And how are you using the current state of technology to bring the best results?

Anand Medepalli: Yeah, so, I mean, as I said, data is the foundational aspect of what we do. However, we’re also a prediction company. For that prediction, we use machine learning models to try and see what the data is saying. We also use some data cleansing techniques from machine learning. In other words, machine learning is very good at saying, “Hey, this data stream used to have these data points, but they’re not here now,” and then it can recommend a value based on what it saw if something was missing. There are these classic cleansing rules you can apply for a dataset to be good. However, I also believe very strongly that not everything needs AI. It can be too heavy.

A lot of our data cleansing is done by business rules. Why? I collect data and give it to you as my customer, and I give it to your neighbor as my second customer. You both absorb the same data, but you use it slightly differently. Therefore, what is garbage for you is not garbage for the other person. What you see as a problem, that person doesn’t see as a problem, and vice versa. So what am I to do? I’m going to say, “OK, so there is a Jarek rule that we need to learn.” But initially, let’s implement that rule. What do you want, Jarek? How do you see this? We’ll debate with you, we’ll argue, and we’ll codify that for you. Then, after a certain period of time, the machine learns that this is the rule, and the rules become inherent. We start by teaching the system certain basic things. The system learns, and then we don’t have to keep updating those rules in the long run.
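
A minimal sketch of what such per-customer cleansing rules could look like in code (the customers, fields, and thresholds are invented for illustration):

```python
from typing import Callable

Rule = Callable[[dict], bool]  # returns True if the record should be kept

# The same shared feed, but each customer defines what counts as "garbage".
customer_rules: dict[str, list[Rule]] = {
    "customer_a": [
        lambda r: r.get("gps_accuracy_m", 0) < 50,   # A rejects coarse pings
    ],
    "customer_b": [
        lambda r: r.get("speed_kmh", 0) <= 130,      # B rejects implausible speeds
    ],
}

def clean(records: list[dict], customer: str) -> list[dict]:
    rules = customer_rules.get(customer, [])
    return [r for r in records if all(rule(r) for rule in rules)]

pings = [
    {"gps_accuracy_m": 10, "speed_kmh": 80},
    {"gps_accuracy_m": 120, "speed_kmh": 85},
    {"gps_accuracy_m": 15, "speed_kmh": 160},
]
print(clean(pings, "customer_a"))  # drops the coarse ping
print(clean(pings, "customer_b"))  # drops the implausible speed
```

Once enough of these hand-written rules have been confirmed in practice, the consistent ones can be promoted into the learned, automated layer he describes.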

I don’t want to sit here and tell you that I press a button and play golf every day. I don’t do that. I have to sit and look at the data and see, “Oh, right, this is a new condition, never saw this before. What does it mean? Is it relevant? If so, how do I deal with this?” Then we put a manual business rule in place to manage it. After collecting enough information, if the rule is consistent, we automate it. So there’s a manual process that we automate. We use machine learning to learn from the system.

The second aspect is prediction. We have machine learning models with roughly 200-plus features to predict things like the ETA of a shipment at a warehouse or port. In Europe, for example, a truck transporting a shipment might have different breaks along the way, like the driver taking a break or making an intermediate stop. During that journey, we predict what’s going to happen next and what will happen eventually.

For instance, if it’s a long-distance journey, the average speed will be different compared to a short journey, which might involve a van instead of a big truck. The machine learning model manages this by recognizing the distance and adjusting predictions accordingly. We don’t get it right all the time, especially if it’s between a short and long-distance journey, but we adjust as needed. Border crossing is another example. It’s unpredictable; one day, a border might be open and easy to cross, but the next, a truck might get stuck, leading to delays.

For instance, one of our customers had a driver stuck at a border, and by the time they were released, the driver was out of hours and couldn’t legally drive anymore. We also model border congestion, which can be unpredictable. A similar scenario happens with containers on ships arriving at ports. The schedule might say the vessel is due to arrive at a certain time, but it doesn’t mean the container is ready to be picked up. We predict when the ship will dock, when the container will clear customs, and when it will be ready for pickup.

Our machine learning models predict all these things. We also rely on statistical models because we have a lot of historical data. Given a journey from A to B, whether it’s a direct train or involves multiple modes of transport, we can estimate the lead time before knowing anything. That’s a statistical model, not machine learning. It’s based on historical data. As real-time data starts coming in, we update the prediction, and that’s where machine learning kicks in.
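
A stripped-down illustration of this two-stage idea (the numbers are invented, and the single-ratio correction below stands in for a real model with 200-plus features):

```python
import statistics

# Stage 1: a statistical prior from historical lead times on lane A -> B.
historical_hours = [52, 49, 55, 60, 51, 53, 58]
baseline_eta_hours = statistics.median(historical_hours)

# Stage 2: once real-time pings arrive, blend in the observed pace.
def updated_eta(baseline_hours, hours_elapsed, fraction_completed):
    """Naive real-time correction: extrapolate total time from progress
    so far, trusting the live estimate more as the journey advances."""
    if fraction_completed <= 0:
        return baseline_hours
    implied_total = hours_elapsed / fraction_completed
    w = fraction_completed  # weight shifts from prior to live data
    return (1 - w) * baseline_hours + w * implied_total

print(f"prior ETA: {baseline_eta_hours:.1f} h")
print(f"after 20 h, 30% complete: {updated_eta(baseline_eta_hours, 20, 0.3):.1f} h")
```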

Jarek Jarzębowski: What you’ve described is quite advanced and involves different layers of decision-making and technology. Can you share an example of a challenge you faced in implementing or developing your solution? Perhaps something unexpected that you had to overcome.

Anand Medepalli: Of course, it’s a daily process and daily problem. It’s like driving at 20 miles an hour versus driving at 100 miles an hour. The same driver, the same car, but the consequences are very different, right? At 100 miles an hour, you have to be much more alert, much more conscious. That’s what’s happening with real-time visibility. For example, Renault, one of our big customers, operates their plants almost just in time. Their plant managers plan everything—shifts, people, equipment—waiting for parts to arrive when they’re supposed to. If they don’t, there’s a risk that the plant will stay idle, and they’ll lose thousands of euros.

So we have to drive at 100 miles an hour, but the risks are magnified. What if the GPS ping was wrong? A GPS signal can bounce off a window and show the truck on the other side of the street when it’s right outside. How do you clean this? The truth is, we don’t always succeed. For instance, we had a speed profile for trucks, but it turns out some customers use vans, so we needed to adjust the model. These challenges arise during implementation, and we realize that 70-80% of what we do is accurate, but we struggle with the remaining cases, which might be important for the customer. That’s when we dig into it and make adjustments.
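
One common way to catch such bounced GPS pings, sketched here purely for illustration (the threshold and coordinates are invented; the actual cleansing logic is not described in detail), is to reject points that would imply an impossible speed:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def drop_implausible_pings(pings, max_speed_kmh=150):
    """Reject a ping if reaching it from the last accepted ping would
    require an impossible speed (e.g. a signal bounced off a building)."""
    kept = [pings[0]]
    for p in pings[1:]:
        prev = kept[-1]
        dt_h = (p["t"] - prev["t"]) / 3600
        if dt_h <= 0:
            continue
        speed = haversine_km(prev["lat"], prev["lon"], p["lat"], p["lon"]) / dt_h
        if speed <= max_speed_kmh:
            kept.append(p)
    return kept

pings = [
    {"t": 0,    "lat": 48.8566, "lon": 2.3522},
    {"t": 60,   "lat": 48.9566, "lon": 2.3522},  # ~11 km in one minute: rejected
    {"t": 3600, "lat": 48.9000, "lon": 2.4000},  # plausible: kept
]
print(drop_implausible_pings(pings))
```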

If I step back, my point of view, or rather Shippeo’s point of view, is that in today’s world, you need high-quality information, data, insights, and predictions that you can trust. Especially in supply chain, there are so many disruptions every day that it’s physically impossible for any team to handle them all. You need to focus on the top 10-20% that require manual input, while trusting the system to handle the rest accurately. You need a high-quality system of information, but you also need to engage with it, particularly for the top issues where the system doesn’t know what to do. That’s where automation, workflows, and orchestration come in—a system of engagement.

And this system needs to learn continuously. Just like teaching a child not to touch something, the system needs to learn not to repeat mistakes. The system of engagement allows you to engage with the system of learning underneath it. Over time, the system learns to handle disruptions on its own.

Let me give you another example. If I’m engaging with the system, I want it to be intuitive. I want to ask a simple question, like, “Show me the shipments that are likely to be delayed.” Why should I have to click ten times to get to my records? Can’t I just ask the system? This is where AI, particularly generative AI and natural language processing, comes in. Increasingly, companies are thinking along these lines. For instance, Salesforce’s Tableau released a Tableau co-pilot that allows you to create dashboards with natural language. The technology isn’t ready for primetime yet, but it’s only a matter of time before it scales.
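
For flavor, this is the kind of structured filter such a question might compile down to behind the scenes (hypothetical fields; the natural-language layer itself is omitted):

```python
# Toy illustration: "show me the shipments that are likely to be delayed"
# reduced to a structured query over predicted vs. promised ETAs.
shipments = [
    {"id": "S1", "promised_h": 50, "predicted_h": 48},
    {"id": "S2", "promised_h": 50, "predicted_h": 57},
    {"id": "S3", "promised_h": 30, "predicted_h": 31},
]

def likely_delayed(rows, tolerance_h=1.0):
    """Keep shipments whose predicted ETA exceeds the promise by more
    than the tolerance."""
    return [r for r in rows if r["predicted_h"] - r["promised_h"] > tolerance_h]

print(likely_delayed(shipments))  # -> only S2
```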

Jarek Jarzębowski: Okay. You’ve mentioned the role of data a couple of times, emphasizing its crucial importance. Can you share some tips or recommendations for companies that want to better manage their data? How should they approach data management so they can use it effectively in data science?

Anand Medepalli: First, start with what you want out of it. Begin with the outcome. In our case, we knew that customers weren’t going to give us a Nobel Prize for telling them where their shipment is. They want to know when it’s arriving at their warehouse or leaving the port. Every time a shipment is stuck somewhere, costs increase. They want to know the choke points in their supply chain and how to avoid them.

So, start with your business outcome, and then map your data accordingly. This approach will help you prioritize. We’ve used internal and external data for our purposes. Often, the internal data is customer data, such as order information, order history, and expected delivery dates. We complement that with external data from GPS, IoT devices, and other sensors. Then we ask, what is useful? For instance, the temperature might be important for some customers, but it’s not critical for others. We tailor our data collection and analysis based on what is important for our business outcome.

Why do you want to collect data? Because you want to solve some business problems. Whether you’re a data scientist, a business person, an IT person, or just a curious journalist, it doesn’t matter. Always ask the hard question: why? Why do you need that data? What are you trying to solve? Be super clear about the use cases you’re trying to solve.

Work backward from the desired outcome. If that’s the goal, what steps are needed to achieve it? The first step is always identifying where your data is. Once you know, you can confirm that it’s the data necessary for that specific use case. Then, move on to the next use case and gather similar data requirements. Eventually, you’ll have a clear set of data requirements. Next, consider: what will my data-as-a-service and data-sharing strategies be? Sometimes, people only want the data, like “Show me sales in the northeastern region.” They don’t need further analysis—just the raw data. As your business grows, you may later ask, “Can you tell me what to do with this data?” but that will come with time.

Don’t try to boil the ocean on day one. Don’t think of AI and machine learning as these magic tools that will give you so much. They won’t. They will only give you what you train them to do.

Start simple. Start with the outcomes. Be super clear why you’re asking. Be very realistic. Figure out what will help you get that data. Be simple in your approach. Be humble in your approach. Solve one problem, then solve the second one. Soon, these low-hanging fruits will add up to a massive tree of benefits you’re providing your organization. 

I have immense respect for data scientists—they always excel and do the right thing. However, they also have a responsibility to ensure we have the right data before analyzing patterns and determining what can be delivered. They must also assess if the results align with the original expectations and use cases. If there’s a gap, they should identify it and figure out how to bridge it. Initially, this might involve having a human in the loop.

Sometimes, someone needs to step in to bridge the gap until the system learns. As you seek more advanced outcomes, remember two key lessons: keep it simple, and understand that this is a never-ending story.

Jarek Jarzębowski: What are you most excited about for the future? What technologies, use cases, or trends are you most excited about?

Anand Medepalli: I’ll stick to supply chain, which I focus on these days. The pandemic exposed a significant weakness in supply chains. Prior to that, everybody thought demand-side shocks were what we needed to manage. What were the demand-side shocks? Christmas time, Black Friday that Amazon would do, or Thanksgiving, back to school, or Chinese New Year. These were the kinds of events where demand became so huge that supply chains were struggling to cope with that demand. How do I get all the iPhones into the stores at the right time? Because I launched it, and everybody needs to buy it. Once the first wave is bought, how are we going to replenish them? It was all demand-driven.

The pandemic showed there were weaknesses on the supply side as well. When shipping lines put the ships away and tried to bring them back after COVID, you know what happened. All of us, because of the stimulus that the governments gave, had a lot of cash. We weren’t spending that money because we weren’t able to travel anywhere or do anything. So what did we all do? We all started upgrading our homes. The suppliers were struggling to deliver the items that were suddenly in demand, and there was not enough capacity to carry them. The vulnerability on the supply side became very critical.

So the supply chain became volatile on both the demand and supply sides. It became very important that supply chain risks be understood much better than before. The trend in supply chain has gone towards understanding likely risks, even when those risks are unpredictable. Nobody thought the Ukraine war would happen, and there it was. After the pandemic, everybody thought, okay, now we will go back to normal, but no. Ukraine happened. The cost of living crisis happened. Then the Middle East happened. The Baltimore Bridge collapse happened. The Panama Canal drought happened. It was like one bad thing after another, and not one risk was like the previous one.

So I can’t build a model to manage pandemic-related disruptions and then say, “Oh, Baltimore Bridge collapse, let me apply this.” It won’t work. Every risk is now frequent, and every risk is now different, which means that as a supply chain officer, the only thing I can do is react really quickly to anything that happens. I need information—quality information—at my fingertips. 

But can I go into my planning with a different mindset? I don’t have a static historical data set with which I can forecast what is likely to happen and take plus or minus of that. Those days are gone. I need a very dynamic and rich historical data set with all kinds of things that could go wrong in it. So I can simulate a series of plans, simulate a series of scenarios that could happen. When something goes wrong, I won’t be sitting there saying, “What the hell is this?” I’ll be able to say, “Oh yeah, guys, we practiced this. This is scenario 52. What did we say we would do?”
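
A toy Monte Carlo version of that rehearsal idea (the events, probabilities, and delay magnitudes are all invented) shows how a library of disruption scenarios can be simulated ahead of time:

```python
import random

random.seed(42)

# Hypothetical disruption catalogue: (name, probability, extra hours).
DISRUPTIONS = [
    ("none",            0.70,  0),
    ("border delay",    0.15, 12),
    ("port congestion", 0.10, 36),
    ("route closure",   0.05, 72),
]

def simulate_lead_times(baseline_h=96, runs=10_000):
    """Sample lead times under random disruptions plus day-to-day noise."""
    names, probs, delays = zip(*DISRUPTIONS)
    outcomes = []
    for _ in range(runs):
        event = random.choices(range(len(names)), weights=probs)[0]
        noise = random.gauss(0, 4)  # ordinary operational variation
        outcomes.append(baseline_h + delays[event] + noise)
    return sorted(outcomes)

results = simulate_lead_times()
p50 = results[len(results) // 2]
p95 = results[int(len(results) * 0.95)]
print(f"median lead time: {p50:.0f} h, 95th percentile: {p95:.0f} h")
```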

This digital twin technology, while not new, has historically struggled due to a lack of sufficient data. Today, data has become central to the supply chain, not just for managing current disruptions but for improving future planning and enabling simulators to model various scenarios. The exciting trend is the availability of high-quality, real-time data, which supports both immediate issue resolution and optimizes upstream and downstream processes. For example, during invoicing, supply chains often rely on external partners, which can complicate cash flow management.

A lot of times you have disputes over that, like: “You didn’t deliver at the time you said.” “Oh, I did.” “No, you didn’t.” And that resolution takes a long time. Now with real-time data, they use our data, actually. A lot of companies do. They have the timestamp of exactly when that shipment was delivered. That eliminates those kinds of discussions.

Another trend that I’m excited to support is the whole sustainability and carbon footprint management space. It is not just a moral requirement but now a legislative requirement in Europe, as you know. How can companies say exactly what their supply chain carbon footprint was? You guessed it: the data that we collect can give that information. Why? Because we have tracked every shipment. We know what route it took, how long it took, where the breaks were, everything. So we can help you calculate your carbon footprint at a shipment level. You can report it to authorities, to your board, to whoever. So honestly, the trend that I’m excited about is the multiple use cases of the datasets to coherently help not just the here and now, but the entire end-to-end supply chain planning.
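
As a back-of-the-envelope sketch of that shipment-level accounting (the emission factors below are rough illustrative values; the interview does not specify Shippeo’s methodology), the calculation reduces to mode, distance, and payload:

```python
# Illustrative grams of CO2e per tonne-kilometre by transport mode.
EMISSION_FACTORS_G_PER_TKM = {
    "road":  62,
    "rail":  22,
    "ocean": 8,
}

def shipment_co2e_kg(legs):
    """legs: list of (mode, distance_km, payload_tonnes) tuples."""
    total_g = sum(
        EMISSION_FACTORS_G_PER_TKM[mode] * km * tonnes
        for mode, km, tonnes in legs
    )
    return total_g / 1000

# A tracked journey: pre-carriage by road, an ocean leg, final road leg.
journey = [("road", 150, 12), ("ocean", 8000, 12), ("road", 80, 12)]
print(f"{shipment_co2e_kg(journey):.1f} kg CO2e")  # -> 939.1 kg CO2e
```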

That’s the real trend—there’s no doubt about the rise of AI, whether it’s generative AI, vision algorithms, or document scanning. But the key shift is the democratization of data, where high-quality information is instantly accessible to everyone, from execution to planning to strategy teams, all pulling from the same data source. The biggest challenge in organizations is data silos—finance, transportation, and supply chain teams often work with isolated data sets on the same shipments, which leads to misaligned plans and poor execution. These silos prevent data from maturing and instead cause it to mutate, resulting in incorrect inputs. The exciting part is that these silos can now be broken, and we can finally rely on unified data.

Jarek Jarzębowski: I think very interesting days are still ahead of us in terms of the potential use of data and its impact on different businesses, departments, and areas within a business.

Anand Medepalli: You made a very good point about different departments and businesses. What does that mean? Now, suddenly, I’m used to my own data, and you’re telling me there’s organizational change management around data. So, the mindset has to change. Trust in the system and the information it provides has to change. You have to accept that this is how it works now. Why am I optimistic about it? Because of the new generation entering the workforce. The Gen Zers coming in trust the system implicitly. They were practically born with a phone in hand, so they are comfortable sharing information and trusting what they see. While this trust can sometimes be misplaced, if the data is of high quality, their inherent trust in the system to do the right thing creates the perfect conditions for this transformation to happen.

I’ll leave you with this statistic: Around 2021, during the pandemic, most supply chain organizations planned and promised to have systems in place by 2026 to make their supply chains resilient—meaning they wouldn’t break if something went wrong. But based on the 2023 Gartner report, 95% of those companies now admit they won’t be ready by 2026. 

By the way, there was another study that showed digitalization was seen as the solution. In fact, Shippeo grew during the pandemic because digitization was the only way to track shipments when no one was answering the phones. We’ve benefited from that, but we also recognize the pitfalls. Sixty-eight percent of supply chain organizations that invested in digitization technologies say they’re not seeing a return on investment. I firmly believe this is because they’ve neglected, misunderstood, or underestimated the need for high-quality systems of information. These systems are crucial for learning, resilience, and going beyond mere survival.

So, to answer your question about my advice to organizations: Go into this with your eyes wide open. You have no choice but to do it, but it’s not an easy journey. There’s no magic pill. Don’t be seduced by flashy AI terminologies. Yes, you’ll use those—they’re powerful and necessary—but you must understand your data, break down silos, and trust your system of information.

Jarek Jarzębowski: I guess, at least at the moment, we still need people, humans, to understand the data and make decisions.

Anand Medepalli: Not just today—we will always need them. There will always be new conditions in the data. I’ll give you a simple example. I’m comfortable with my current data, but now I want to expand my business to Poland. It’s a great market, so let’s go there. I’ll talk to Jarek, but then I realize we’ve never done business there. I don’t have any data from Poland. Suddenly, there’s new data, and I have to make sense of it. I could have the system give me nonsense answers, but I need accurate insights.

What these systems with proper data flow do is remove the mundane and predictable tasks from your workload. Instead of checking every shipment, you focus on the five shipments that really need your attention, like when a GPS device on a truck fails. That’s why you’re not getting data, and now you’re in the dark. So, I alert Jarek, saying, “I have no idea what’s happening—you’ve got to handle this.” Jarek will pick up the phone and call the trucking company, asking, “Hey, what happened? Why am I not getting the data?” The dispatcher might apologize and explain the issue. Then Jarek might say, “Call me every time there’s a milestone. If the driver stops for a meal, I want to know because I have calculators here that tell me what to inform my warehouse team.”

That human touch—human in the loop—will never go away, no matter how advanced AI becomes, because data has a funny way of being wrong at the worst times. That’s where the human in the loop comes in.

That’s why I talked about the system of engagement. Make it easy for Jarek to interact with a lot of information, even when it’s bad information. The system can learn and maybe directly email the dispatcher next time without needing to ask Jarek. That’s where Gen AI comes in. Human beings and business acumen will always be needed. What you need to do with data and AI is to make their jobs continuously easier so they can keep pushing boundaries, especially in a world that’s constantly in turmoil.

Anand Medepalli’s Background

Almost two decades of experience in supply chain management, 8 years spent delving into mathematics, and numerous AI product strategies in his portfolio—Anand Medepalli has all it takes to cultivate innovation within the shipping sector and beyond it. As the Chief Product Officer at Shippeo, Anand champions DevOps culture, platform strategies, and UX-focused software development, leveraging AI for better prediction accuracy. With strong expertise that bridges product strategy, machine learning, business processes, and execution, he has consistently delivered cutting-edge technology solutions across diverse industries.

About Shippeo

shippeo.com

Shippeo is a supply chain visibility platform that enables leading companies to harness transportation insights for superior customer service and operational efficiency. By providing shippers, carriers, and end customers with real-time tracking and predictive ETAs for every delivery, Shippeo allows businesses to anticipate issues early and manage exceptions effectively.

Closing Thoughts

In today’s world, AI is often presented as a cure-all, a powerful force capable of handling any data-related task. However, Anand shows that the reality is much more complex, especially when it comes to supply chains. First, the power of machine learning alone is not enough; statistics and skillful use of historical data are also essential. Second, sometimes using AI simply isn’t cost-effective. Third, different needs require different approaches to data, and some insights can’t be drawn without the “human touch.”

Anand’s vision is optimistic—thanks to the development of data management tools, we can focus more on data and its quality. As a result, we can respond better and faster to unexpected market changes and situations affecting the smooth flow of goods. According to Anand, humans will always be a necessary part of the equation, able to react to new conditions and creatively solve data-related problems.

About the author

Jarek Jarzębowski

People & Culture Lead

Jarek is an experienced People & Culture professional and tech enthusiast. He is a speaker at HR and tech conferences and a podcaster who shares a lot on LinkedIn. He loves working at the crossroads of humans, technology, and business, bringing the best of all worlds and combining them in novel ways. At nexocode, he is responsible for leading People & Culture initiatives.
