What does it take to revolutionize with AI? How are companies leveraging artificial intelligence to drive innovation and efficiency? What challenges do they face, and what breakthroughs are shaping their journey? In our new series, AI Revolution Diaries, we explore real stories of innovation from the front lines of different industries. This time, we’re looking at the logistics sector, an area full of opportunities for AI to make big changes.
We’re starting our series with a talk with Christian Merkwirth, an expert who has spent over twenty years using AI and data science in many fields, including logistics. Christian’s background in physics and his long career have given him a unique view on how AI can solve complex problems in moving goods and services more efficiently.
In this article, Christian, who is leading AI research at FLYR Labs, shares his thoughts on the future of logistics with AI. His insights offer a unique perspective on the challenges and opportunities that lie ahead in harnessing AI to revolutionize logistics and supply chain management.
AI Revolution Diaries are all about bringing to light the real-life uses of AI and data science, directly from the people making it happen. With these interviews we aim to inspire, inform, and ignite a conversation among our clients and partners about the boundless possibilities of AI. Welcome to a series where technology meets real life, bringing forth tales of transformation, innovation, and the future of various industries.
Key Takeaways from the Conversation
Technological evolution’s impact on the supply chain: The rapid advancements in computing and data analysis technologies have been pivotal in enabling more sophisticated AI applications in general, but in our guest’s opinion we have not yet seen the full use of AI in supply chain management.
Logistics industry’s unique AI challenges: Christian shares his perspective on the specific hurdles the logistics sector faces in applying AI. Despite significant technological advancements, these include the complexity of forecasting demand, the scarcity of comprehensive data sets, and the specific nature of logistical operations, which are less amenable to current AI solutions than other sectors.
Crucial role of data management in logistics: For AI to be effectively leveraged in supply chain management, logistics companies must prioritize high-quality, well-structured data collection and management practices from the start. The lack of large-scale, high-quality public datasets in logistics underscores the importance of meticulous data collection and management by individual companies to inform future AI-driven decision-making processes.
Tailored solutions vs. off-the-shelf products: The logistics industry’s diversity means that while some areas may benefit from standardized AI solutions, many aspects will likely continue to require customized approaches to address specific challenges.
The emerging role of LLMs in logistics: Although the full capabilities of LLMs in enhancing logistics operations are still being explored, Christian is optimistic about their potential to address some of the industry’s pressing challenges through predictive analytics and optimization.
Importance of human-AI collaboration in supply chains: Despite the advancements in AI, the need for human insight and intervention remains critical, especially when unforeseen real-world events impact logistics operations, says Christian. This highlights a future where AI and human expertise synergize to enhance supply chain management.
Conversation with Christian Merkwirth
Jarek Jarzębowski: Hello Christian, can you start by telling me a little bit more about you, your role, and your experience in data science and logistics?
Christian Merkwirth: Yeah, awesome! Okay, that’s a long story… I’ve been working with data science for a really long time, twenty-plus years now. That sounds almost impossible, but it’s true. I studied physics, and at the master’s thesis stage physics often involves analyzing experimental data, sometimes a lot of it, and it’s also typical for physicists to be good at coding. If you look at that, these are the ingredients you need for data analysis. Actually, you need several things: you need to be proficient with the computer, you should have some affinity for math, and you should have some affinity for experimental data, meaning taking the experiment as an expression of nature. So as a physicist, you are a bit predestined for data science. As a physicist, you perceive the world as a causal machine. For example, I always believe that whatever I find in the data, there must be an underlying mechanism that produces this data; it doesn’t come from nothing.
There is an underlying hidden determinism or truth in the data that you have to detect, to basically uncover. It’s not that you invent it. You look at the data and try to understand its nature. That comes from physics, so physics is a good start, but it doesn’t mean that every good data scientist needs to be a physicist. I started as a physicist and came into the data business, and very early on I was at a startup in Germany that worked with lots of experimental data for discovering drugs, pharmaceuticals, which is a huge field. Now the technology is mature in the sense that we can finally do what this company wanted to do twenty years ago. I was hired there in 2000, and I think now the promise can finally be fulfilled that you can cheaply generate new drug candidates. It took a lot of time; you need a lot of experimental data and a lot of compute power. What has changed in these twenty years is Moore’s law, and the internet. Much more data is collected now, and it is much easier to get large datasets. The breakthrough in deep learning was, I think, around 2012, when the first deep learning models outperformed the previous generation of models by a large margin.
Then every year after 2012, deep learning for computer vision got better and better, to the extent that now you would consider it solved. The errors are at the level of the errors humans make when they label these images. The key difference is the amount of data you have to train your models: the amount of data and compute grew over the last twenty years, and I think it’s growing even faster now. We are deep inside this. Another thing that changed over the past twenty years is that back then you had to program most things for the CPU, which is very slow. You also used C++, which by itself is a good language, but it has a slow iteration cycle.
It takes a lot of time to run the full loop of coding, compiling the code, and running the experiment. Later on, I had the opportunity to use MATLAB for tensor-type computing. MATLAB was the go-to tool at that time; after that, NumPy arose and became more popular. And then came the deep learning frameworks: TensorFlow and Theano, some earlier ones like Caffe, and then Keras and PyTorch. They made working with deep learning so much easier. I saw all these developments. I worked at large American companies like Google, and later at startups, for example FLYR, where I built up a lot of the data science stack that we use now. It’s based on deep learning, on TensorFlow and Keras coming from Google.
Jarek Jarzębowski: You have briefly told me about the recent advancements in data science in general. I am wondering, what are the key technologies and advancements currently shaping the AI landscape in logistics, since this is the field you’re working in?
Christian Merkwirth: That’s a very good question. The logistics field is very, very special. It’s very specific. It doesn’t lend itself easily to the advancements that were made recently. It does not benefit too much from computer vision. There are some applications: you could automatically count items loaded in a truck, or automatically detect, for example, whether the plane left the gate or arrived at the gate, or passengers disembarking from a plane. I’m not sure we’d call that logistics, but let me put it like that: I think the logistics world is still not fully touched by AI. It is maybe ripe for that, but there hasn’t been such an immense breakthrough yet. There are two methods of data science that are extremely important here. The first is forecasting demand for a given transport, be it cargo or passengers; ships, trains, rail, trucks. You typically need to forecast how much of which type of good you need to get from A to B, and when. You can use neural networks for that, but they don’t necessarily excel. It’s not as clear as it is for computer vision or for NLP that neural networks are the go-to tool here.
Forecasting is one thing that is extremely important, and what makes it difficult here is really the data. The data is much harder to handle than the text data used for NLP. There are not a lot of huge public datasets. Also, we call this data censored (which does not mean some bad human sits and blacks out part of the data). Censoring means that you often cannot observe the full demand that is out there: you had to set a price for your service, and some people are not willing to pay at that price level, so part of the full demand is censored. The second method is optimization. Even if you know perfectly how many people would like to go from A to B, from B to C, from E to F on a given day, at a given hour, at a given price point, what do you do with this information? Typically you use a so-called optimizer to solve the allocation problem: how many and which trucks you send to transport the goods from A to B, which drivers, which route. But what comes next? When a truck has delivered the goods at point B, it should get a new contract, right? Like transporting another order of goods from B to D. You want to do an optimization where you make sure you put the right shipment on the right truck, but you would also like to ensure that at the next step your trucks are in a good place. They don’t all start the next day from point A; they are now scattered across a hundred different places over your country, and you want to give them the next orders.
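The censoring Christian describes can be illustrated with a minimal sketch. All numbers here are made up for illustration: the point is only that recorded bookings are capped by capacity (and filtered by price), so a naive average of what you observed systematically understates true demand.

```python
import random

random.seed(0)
CAPACITY = 100  # hypothetical capacity; bookings can never exceed this

# True demand is never observed directly; we draw it here only to compare.
true_demand = [random.randint(60, 140) for _ in range(10_000)]

# What the company actually records is censored at capacity:
# the lost sales above the cap are invisible in the data.
observed = [min(d, CAPACITY) for d in true_demand]

naive_mean = sum(observed) / len(observed)
actual_mean = sum(true_demand) / len(true_demand)
print(f"true mean demand:    {actual_mean:.1f}")
print(f"naive observed mean: {naive_mean:.1f}")  # biased low
```

A forecaster trained on the observed column alone would learn the biased number, which is why censored-demand models treat capped observations as lower bounds rather than exact values.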
Jarek Jarzębowski: So we’ve got a problem of forecasting demand. But we also need second-order thinking about what will happen next, approaching the problem not only for today but also for a few days ahead, where each step affects all the following steps. How do we actually approach it? You have mentioned that it is not solved yet, and it’s definitely not easy to solve. So how can you approach it? How would you approach it?
Christian Merkwirth: This is a good question, and I can’t go into full detail, but there is a range of possible approaches. Here I want to recommend a book, or even the whole body of work, of Warren Powell, who coined the term sequential decision analytics. It is a very complex problem, again because of its iterative nature. Think about it like this: you are the manager of a truck fleet, a few days into operating. You want to say: which orders do I accept today, and how do I assign them to my fleet in a way that, five days from now, I again make a lot of money? I want to make good money today, but I also want to make good money tomorrow.
When I look at the weather forecast, I plan for three days, maybe a week, but I know it’s an indication and can change. I think this is always the case, right? It’s not only for the weather. There is a so-called prediction horizon, and beyond it your forecast carries little information; the probabilities go down a lot. Maybe in the long term you just predict the typical averages, right?
Jarek Jarzębowski: Yeah, so basically you are aiming for good positioning instead of maximizing gains in the short term, because over the long term the better positioning might pay off more.
Christian Merkwirth: Exactly. You try to find an assignment of orders to trucks that (a) gets you a decently high profit today, or in the next twenty-four hours, but (b) also brings your trucks into a constellation across the country that is not too shabby. We want to put our fleet into a valuable position. By value we typically mean something like a historical average of what a truck can make when it is at a given location. For example, when trucks are in Seattle, they could take some plane parts and bring them back closer to where your outbound journey started.
On the other hand, you might not want to send a truck into the wilderness in Canada. There are surely places where the chance of a follow-up order is really low, while at other places it’s more or less certain you will get one. In the end, what you optimize your decisions for is basically: I want good value today, plus the fleet needs to be in a decent position for the next day, and you make a tradeoff between the two. You could be very short-sighted and go for maximum value today, or you could say, “I don’t care what my short-term profit is, I just want to put my trucks in the right place.”
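The tradeoff Christian describes can be sketched in a few lines. This is only an illustration, not how any real fleet optimizer works: the city names, the location values, and the tradeoff weight `GAMMA` are all invented for the example.

```python
# Hypothetical location values: a historical average of the follow-up
# profit a truck can expect once it sits in a given city.
location_value = {"Seattle": 900.0, "Remote outpost": 50.0}

GAMMA = 0.8  # invented weight between today's profit and tomorrow's positioning

def score(order_profit: float, destination: str) -> float:
    """Immediate profit plus discounted value of where the truck ends up."""
    return order_profit + GAMMA * location_value.get(destination, 0.0)

# Two candidate orders competing for one free truck:
lucrative_dead_end = score(1200.0, "Remote outpost")  # 1200 + 0.8 * 50  = 1240.0
modest_good_spot = score(800.0, "Seattle")            # 800  + 0.8 * 900 = 1520.0

best = max([("Remote outpost", lucrative_dead_end), ("Seattle", modest_good_spot)],
           key=lambda pair: pair[1])
print(best)  # the modest order wins because it positions the truck well
```

Setting `GAMMA = 0` recovers the short-sighted policy he mentions (maximum value today), while a large `GAMMA` approaches the “I just want my trucks in the right place” extreme.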
Jarek Jarzębowski: A few minutes ago you mentioned that one of the problems in logistics is that there is not much data available. Do you think that, for individual companies, this might be of utmost importance? Since there is not much public data and we want to make future decisions based on data, we need to gather our own data, structure it well, and manage it well. And since there is no good solution at the moment and we don’t really know what we will need, we have to focus really well on managing the data?
Christian Merkwirth: Absolutely. It would generally be very helpful for progress in this area. It’s super important to collect data in a proper way, meaning as precisely as you can. That sounds simple, but it’s not. Think about a flight: people book, but then at the last moment there can be changes. People don’t show up at the gate, planes have technical problems and then they don’t fly. It’s extremely hard to collect data in a way that, two or three years later, you really understand what happened that day. Logistics is a lot about the real world; it’s a very down-to-earth kind of business. Image data was relatively easy by comparison. There is a crazy number of photographs taken every year. Cameras are everywhere, so it was clear that we would get vast amounts of such data. We have a lot of it, it’s of relatively good quality, and you can annotate it relatively easily, e.g. using human annotators. Logistics data is different. There is no public dataset with gigabytes of good-quality data. There are thousands of logistics companies, but everyone has their in-house data, and it has most probably undergone many changes of the data schema. I would say there is a lack of publicly available data that could be used for research.
Jarek Jarzębowski: Let’s say we have a decent-size company that might be able to gather enough data but has not done so far, and they want to use the data approach in the future. How should they approach this problem? Should they start by forming an internal data science department so that they can structure the data properly? What can they do to make sure that in two or three years they will actually be able to do something?
Christian Merkwirth: It needs a bit of love and care from the data scientists towards the data. Data handling is eighty or ninety percent of your job and of the time spent on the job. It’s extremely important that you have the data science approach embedded in your company, so that you know how the data was generated, what was collected, what each field really means, why you sometimes see a value of zero or a missing value, and so on.
Jarek Jarzębowski: So if a company wants to use data science, they should focus on gathering good data, as good as they can, from the very beginning, and also understand what they are gathering. The job of a data scientist or a consultant starts before the actual work on a model, because you first need to work on the data structure and make it as useful as possible.
You have also mentioned that this big trend of LLMs is not currently utilized in logistics. Do you think that it will somehow also affect logistics?
Christian Merkwirth: Actually, I’m waiting for the moment when someone comes with the right prompt and the right context and simply makes it work. It might be that LLMs could do that, but I haven’t seen it yet. I’m waiting every day for a paper to come out showing that LLMs can do this relatively well. Maybe they won’t produce a perfect solution, but they might produce a good-enough solution for most companies.
Jarek Jarzębowski: Often 80% is good enough, and on many occasions you don’t need to optimize for 100%. Do you think there is a place in logistics for ready-made solutions, or should each solution be tailor-made, or at least customized, for the problem?
Christian Merkwirth: That’s a really good question. It decides a lot about the business strategy. I think it depends on the area. Think about trucking logistics: it might be possible to come up with a standard solution that needs very little customization now. Actually, I haven’t worked with that, and maybe I completely underestimate how specific and how different each logistics company may be, but I would still be positive that it should be possible to come up with a good off-the-shelf product that needs relatively little customization. I have worked with airlines a lot over the last five years, and we had the same idea for airlines. But the unfortunate reality is that airlines typically have an immensely complicated and specific individual tech stack that they acquired over the years. As with other logistics, but especially for the transport of people, they operate according to a single rule: the show must go on, which means you cannot stop. You cannot close down your airline for seven days and implement a perfect solution. The planes must fly. You have customers who depend on that; they are waiting at the destination and want to return, so you must keep flying. It means that in the case of the highest emergency, the airline switches back to manual mode. It doesn’t happen every day, of course, but when some situation arises, they might have to set up a war room and try to manually get the crews and the machines assigned. What I want to say is that, unfortunately, it’s incredibly hard to develop a standard product right now. I don’t say it is totally impossible. But even if you had it, airlines often will not switch, because of the high cost of switching. Even with a perfect solution, it’s very hard to sell. I think this is total customer focus, which is a good thing. On the other hand, it makes the airline world very difficult on the IT side.
Jarek Jarzębowski: I’m also wondering about your, let’s say, predictions about the future. Maybe there’s something else you’re excited about that might shape the future of the industry in the field of big data and data science in general?
Christian Merkwirth: That’s a tricky question, to be honest. Like I said, I believe the problem is not fully solved. It could be that LLMs turn out to be a very universal approach, but there could also be very specific approaches. I could envision a future where you basically use LLMs to write simulators, which are used a lot. We didn’t discuss it much, but you can try to simulate where the trucks would be under different scenarios and what would happen. You could run millions of simulations of what might happen in the next week and try to pick the optimal path. But again, I’m just speculating.
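The simulate-and-pick idea can be sketched with a toy Monte Carlo loop. Everything here is invented for illustration: the two candidate plans, their profit distributions, and the simple Gaussian model stand in for what would be a detailed simulator of trucks, routes, and follow-up orders.

```python
import random

random.seed(42)

# Hypothetical candidate plans: (mean weekly profit, volatility), both made up.
plans = {"aggressive": (1500.0, 400.0), "balanced": (1200.0, 150.0)}

def simulate_week(mean: float, spread: float) -> float:
    """One random scenario of next week's profit under a plan (toy model)."""
    return random.gauss(mean, spread)

# Run many scenarios per plan and compare average outcomes.
N = 10_000
expected = {name: sum(simulate_week(m, s) for _ in range(N)) / N
            for name, (m, s) in plans.items()}
best_plan = max(expected, key=expected.get)
print(best_plan, round(expected[best_plan]))
```

A real system would compare plans on more than the average, e.g. penalizing the volatile plan for its downside risk, but the loop structure of simulate many futures, score each, and pick the best is the same.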
Jarek Jarzębowski: It has been said that for some time now, humans are not the best chess or Go players. The AI models are better, and often better still are hybrid teams of a human and a machine.
Somehow I see a connection between these games of chess and Go and logistics: sending trucks to different cities on a constantly changing board. Do you think the future might be that these hybrids of humans and AI models give the best results?
Christian Merkwirth: That’s a very good question, and it’s a very good point. When I spoke before about the value function for your future state, that’s exactly what DeepMind’s AlphaZero is doing. In the best of all worlds, you would need to trigger the manual override only once in ten years, or in the best case never, but when the mess hits the fan, you need to be able to turn the system off and go fully manual. What I want to say is: maybe the number of people needed to supervise the AI will be smaller than what you need now, but you will still need well-educated people to look at what the AI is doing and intervene in such cases. I think this will hold true for a good couple of years, if not longer.
Jarek Jarzębowski: Basically, we need better models, but also people who understand how to work with them, who understand what the model is saying and how to interpret it. And for that, you need to understand both the technology and the field itself.
Christian Merkwirth: Absolutely, and like you said, maybe it’s not only supervising. You let the computer do a lot of the basic work, but you surely want to look at summaries and at the extreme cases, and decide. Then again, the computer only sees the data you provide it. The computer has a very hard time reaching out and calling someone to better understand what’s going on. Say there’s a strike at the border: you would need to know whether the strike will dissolve in a day or whether it’s a huge political thing that could last a month. The model doesn’t have a concept of such events; you don’t have this written in your database, so it’s good to have a human fallback. It’s extremely important.
Jarek Jarzębowski: It definitely seems that logistics is a field that is very complex and interesting from both perspectives, business-wise and technology-wise, and probably a very good place to use data science in general, as it might affect the field immensely.
Christian Merkwirth: Logistics doesn’t happen inside a perfectly controlled clean room. It happens outside in the open world, and the amount of surprises that reality can generate is endless: issues you would never be able to think of beforehand.
Jarek Jarzębowski: Christian, thank you very much for sharing your experience and approach. I really think that it will be an interesting read for many people.
Christian Merkwirth’s Background
Christian Merkwirth is an expert in artificial intelligence and machine learning, currently making significant contributions to the advancement of AI in logistics and supply chain management. With a foundation in nonlinear physics, holding a PhD from Georg-August-Universität Göttingen, Merkwirth has developed a unique perspective on solving complex problems in the logistics industry through the application of predictive analytics and deep learning technologies.
His career spans over two decades, starting from his time as a research scientist applying machine learning to drug design at Novel Science International. He has also worked at Google as a Research Software Engineer. Christian has also served as an Adjunct Professor at Jagiellonian University, focusing on applied data analysis, ensemble methods, and the application of convolutional neural networks to various domains including cheminformatics and quantitative finance.
In recent years, Christian has led AI and ML initiatives at FLYR Labs, where as Head of AI Research, he has spearheaded the development of a novel Network Revenue Management System powered by proprietary deep learning models. His work focuses on improving pricing and forecasting models to enhance revenue management systems for airlines, positioning him as a leader in integrating AI within the logistics and supply chain sectors.
FLYR Labs stands at the forefront of innovation with The Revenue Operating System, a platform dedicated to harnessing the power of artificial intelligence to empower leaders in the transportation sector. By integrating cutting-edge AI technologies, FLYR Labs offers a comprehensive solution designed to optimize revenue performance and enhance digital customer satisfaction. Its platform provides a unified view, a “single pane of glass,” encompassing data analysis, forecasting, pricing strategies, reporting, and simulation tools. This integration facilitates and automates wide-ranging commercial functions and decisions. From its Internet Booking Engine (IBE) to offer management, customer communication, and content management, FLYR’s platform ensures a seamless e-commerce experience. With its headquarters in California and additional offices in Los Angeles, San Francisco, Dallas, Kraków, and Amsterdam, FLYR Labs is redefining the future of transportation through innovation and excellence.
As we conclude our exploration into the transformative impact of AI in logistics through the lens of Christian Merkwirth’s experience at FLYR Labs, a few pivotal thoughts emerge. This conversation not only sheds light on the current landscape and challenges of integrating AI into logistics but also underscores the untapped potential waiting to be harnessed. Christian’s journey and insights serve as a beacon for the future, highlighting the importance of data-driven decision-making, the adaptability of AI technologies, and the critical role of human insight in navigating the complexities of logistics and beyond.
The evolving narrative of AI in logistics, as exemplified by FLYR Labs, suggests a future where the boundaries of efficiency, prediction, and customer satisfaction are continually expanded. The journey of AI is far from linear; it’s a path marked by innovation, learning, and the relentless pursuit of excellence.
This interview marks the beginning of our journey through the AI Revolution Diaries, offering a glimpse into how AI is reshaping logistics and supply chain management. As we continue to explore these transformations with other industry experts, we will uncover more about how businesses can navigate the challenges and seize the opportunities presented by AI.
Jarek is an experienced People & Culture professional and tech enthusiast. He speaks at HR and tech conferences, hosts a podcast, and shares a lot on LinkedIn. He loves working at the crossroads of humans, technology, and business, bringing the best of all worlds together in novel ways. At nexocode, he is responsible for leading People & Culture initiatives.
Are you curious if AI is something that can change your company?
Step into the narrative of change with our AI Revolution Diaries, where each interview captures a moment in the ongoing revolution of artificial intelligence across industries. These diaries detail the firsthand experiences of businesses at the forefront of integrating AI, highlighting the transformative impact and the lessons learned throughout their journey of innovation.
Engage with our series to discover the strategies that drive successful AI integration, and grasp the benefits and hurdles encountered by pioneers in the field. Let us be your guide in navigating the transformative journey of AI, empowering your business to harness the full potential of data and shape the future of your industry.
In the interests of your safety and to implement the principle of lawful, reliable and transparent
processing of your personal data when using our services, we developed this document called the
Privacy Policy. This document regulates the processing and protection of Users’ personal data in
connection with their use of the Website and has been prepared by Nexocode.
To ensure the protection of Users' personal data, Nexocode applies appropriate organizational and
technical solutions to prevent privacy breaches. Nexocode implements measures to ensure security at
the level which ensures compliance with applicable Polish and European laws such as:
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on
the protection of natural persons with regard to the processing of personal data and on the free
movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
(published in the Official Journal of the European Union L 119, p 1);
Act of 10 May 2018 on personal data protection (published in the Journal of Laws of 2018, item
1000);
Act of 18 July 2002 on providing services by electronic means;
Telecommunications Law of 16 July 2004.
The Website is secured by the SSL protocol, which provides secure data transmission on the Internet.
1. Definitions
User – a person that uses the Website, i.e. a natural person with full legal capacity, a legal
person, or an organizational unit which is not a legal person to which specific provisions grant
legal capacity.
Nexocode – NEXOCODE sp. z o.o. with its registered office in Kraków, ul. Wadowicka 7, 30-347 Kraków, entered into the Register of Entrepreneurs of the National Court
Register kept by the District Court for Kraków-Śródmieście in Kraków, 11th Commercial Department
of the National Court Register, under the KRS number: 0000686992, NIP: 6762533324.
Website – website run by Nexocode, at the URL: nexocode.com whose content is available to
authorized persons.
Cookies – small files saved by the server on the User's computer, which the server can read when
when the website is accessed from the computer.
SSL protocol – a special standard for transmitting data on the Internet which unlike ordinary
methods of data transmission encrypts data transmission.
System log – the information that the User's computer transmits to the server which may contain
various data (e.g. the user’s IP number), allowing to determine the approximate location where
the connection came from.
IP address – individual number which is usually assigned to every computer connected to the
Internet. The IP number can be permanently associated with the computer (static) or assigned to
a given connection (dynamic).
GDPR – Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of individuals regarding the processing of personal data and onthe free transmission
of such data, repealing Directive 95/46 / EC (General Data Protection Regulation).
Personal data – information about an identified or identifiable natural person ("data subject").
An identifiable natural person is a person who can be directly or indirectly identified, in
particular on the basis of identifiers such as name, identification number, location data,
online identifiers or one or more specific factors determining the physical, physiological,
genetic, mental, economic, cultural or social identity of a natural person.
Processing – any operations performed on personal data, such as collecting, recording, storing,
developing, modifying, sharing, and deleting, especially when performed in IT systems.
2. Cookies
The Website is secured by the SSL protocol, which provides secure data transmission on the Internet.
The Website, in accordance with art. 173 of the Telecommunications Act of 16 July 2004 of the
Republic of Poland, uses Cookies, i.e. data, in particular text files, stored on the User's end
device. Cookies are used to:
improve the user experience and facilitate navigation on the site;
identify returning Users who access the Website from the device on which Cookies were saved;
create statistics that help us understand how Users use the Website, which allows us to improve
its structure and content;
adjust the content of the Website pages to a specific User's preferences and optimize the
website experience to each User's individual needs.
Cookies usually contain the name of the website from which they originate, their storage time on the
end device and a unique number. On our Website, we use the following types of Cookies:
"Session" – cookie files stored on the User's end device until the Uses logs out, leaves the
website or turns off the web browser;
"Persistent" – cookie files stored on the User's end device for the time specified in the Cookie
file parameters or until they are deleted by the User;
"Performance" – cookies used specifically for gathering data on how visitors use a website to
measure the performance of a website;
"Strictly necessary" – essential for browsing the website and using its features, such as
accessing secure areas of the site;
"Functional" – cookies enabling remembering the settings selected by the User and personalizing
the User interface;
"First-party" – cookies stored by the Website;
"Third-party" – cookies derived from a website other than the Website;
"Facebook cookies" – You should read Facebook cookies policy: www.facebook.com
"Other Google cookies" – Refer to Google cookie policy: google.com
3. How System Logs work on the Website
User's activity on the Website, including the User’s Personal Data, is recorded in System Logs. The
information collected in the Logs is processed primarily for purposes related to the provision of
services, i.e. for the purposes of:
analytics – to improve the quality of services provided by us as part of the Website and adapt
its functionalities to the needs of the Users. The legal basis for processing in this case is
the legitimate interest of Nexocode consisting in analyzing Users' activities and their
preferences;
fraud detection, and identifying and countering threats to the stability and correct operation
of the Website.
4. Cookie mechanism on the Website
Our site uses basic cookies that facilitate the use of its resources. Cookies contain useful
information and are stored on the User's computer – our server can read them when connecting to
this computer again.
Most web browsers allow Cookies to be stored on the User's end device by default. Each User can
change their Cookie settings in the web browser settings menu:
Google Chrome
Open the menu (click the three-dot icon in the upper right corner), then Settings > Advanced.
In the "Privacy and security" section, click the Content Settings button. In the "Cookies and
site data" section you can change the following Cookie settings:
Deleting cookies,
Blocking cookies by default,
Default permission for cookies,
Saving Cookies and website data by default and clearing them when the browser is closed,
Specifying exceptions for Cookies for specific websites or domains
Internet Explorer 6.0 and 7.0
From the browser menu (upper right corner): Tools > Internet Options > Privacy, click the Sites
button. Use the slider to set the desired level, and confirm the change with the OK button.
Mozilla Firefox
From the browser menu: Tools > Options > Privacy and security. Activate the "Custom" field.
From there, you can check the relevant fields to decide whether or not to accept cookies.
Opera
Open the browser's settings menu: go to the Advanced section > Site Settings > Cookies and site
data. From there, adjust the "Allow sites to save and read cookie data" setting.
Safari
In the Safari drop-down menu, select Preferences and click the Security icon. From there,
select the desired security level in the "Accept cookies" area.
Disabling Cookies in your browser does not deprive you of access to the resources of the Website.
Web browsers allow storing Cookies on the User's end device by default, and Website Users can
freely adjust their cookie settings. The web browser also allows you to delete cookies or to
block them automatically. Detailed information on this subject is provided in the help or
documentation of the specific web browser used by the User. The User can decide not to receive
Cookies by changing browser settings. However, disabling Cookies necessary for authentication,
security, or remembering User preferences may impact the user experience, or even make the
Website unusable.
5. Additional information
External links may be placed on the Website, enabling Users to reach other websites directly.
Also, while using the Website, cookies may be placed on the User's device by other entities, in
particular third parties such as Google, in order to enable the use of Website functionalities
integrated with these third parties. Each of these providers sets out the rules for the use of
cookies in its privacy policy, so for security reasons we recommend that you read the relevant
privacy policy before using these websites.
We reserve the right to change this privacy policy at any time by publishing an updated version
on our Website. After the change is made, the updated privacy policy will be published on this
page with a new date. For more information on the conditions of providing services, in
particular the rules of using the Website, concluding contracts, and the conditions of accessing
content and using the Website, please refer to the Website's Terms and Conditions.