With processes that involve numerous variables and demand advanced forecasting, logistics is a perfect ground for AI use cases to prove themselves. With an extensive background in business intelligence and years of streamlining supply chain processes under his belt, Varun Tyagi is the perfect guest to discuss the industry’s challenges with.
Based on his experience across different companies, he shows how to navigate AI innovation in a rapidly evolving technological landscape and avoid common traps in data-based implementations. Find out why AI is not always the answer and what to prioritize when optimizing supply chains. Join our host, Jarek Jarzębowski, and make the most of Varun’s expertise.
Key Takeaways From the Conversation
Leveraging AI tools in the supply chain: while generative AI offers immense potential, it is not always the answer to logistics-related issues - and that applies to any other industry. Choose technology based on the issue to solve and its complexity, not its novelty. Many supply chain challenges can be addressed using enhanced natural language processing techniques and various other AI tools, such as clustering algorithms, recommendation systems, and route optimization models.
Prioritizing data privacy and security: vast amounts of customer and internal data flow through logistics data pipelines daily, and safeguarding them is key. How? By establishing robust data privacy measures, including encryption algorithms, data quality gates, and privacy score tracking. Setting up a dedicated data privacy center also ensures ongoing oversight and compliance.
Maintaining control amid technological advances: with technology evolving rapidly, maintaining version control and ensuring access to the latest reliable models and libraries are paramount. Monitoring model KPIs, conducting thorough agent training, and developing scripts for consistent customer interactions are essential strategies to mitigate potential issues.
Addressing customer concerns effectively: properly training agents to handle unexpected model outputs and framing questions professionally is essential for maintaining customer satisfaction and a professional image.
Foundational steps in supply chain optimization: before delving into sophisticated AI algorithms, companies should prioritize establishing real-time visibility in the supply chain and addressing sustainability concerns. This includes optimizing routes, using alternative fuels, and focusing on environmental sustainability alongside data-driven optimizations.
Conversation with Varun Tyagi
Jarek Jarzębowski: Hello Varun. Can you tell me a little bit more about yourself, your journey in AI and logistics, and your backstory?
Varun Tyagi: My name is Varun, and I’ve been working in the
business intelligence and data industry for the last fifteen years. I started my career as a software engineer at Infosys, then became a business analyst. Having gained knowledge and understanding of the business, as well as technical domain expertise, I started my own business back in India. It was in trading, and that was my first foray into logistics and supply chain.
After spending four years there, having scaled the business from 0 to around 1.5 million euros per year, it was time for me to move on to a different venture. The next one, which I founded myself, was in low-cost housing societies. Using data, I identified a niche with enough demand but little supply. The business was not profitable back then, but it was a market to capture. However, data in the construction industry was very hard to get, and I ended up selling my company.
After that, I started my MBA, where I had a chance to broaden my horizons, studying with people of seventy nationalities. Once I finished my degree, I joined Movinga as a Senior Business Intelligence Operations Manager. There, I analyzed data and issues and created dashboards and reports for the investors.
Movinga was a complex aggregated platform where you could find the best-suited partners to relocate you from city to city through a simple process. However, the data was not captured properly in servers and databases; it all landed in Google Sheets. All the data from Salesforce was going into Google Sheets, and we carried out our analyses on top of that. Due to this, the data quality was very low. We did not have reliable information in time, and accuracy, timeliness, consistency, completeness, and the other attributes of data quality were all lacking.
Six months into the company, my head of BI resigned, my data engineer resigned, and I was the only person maintaining the entire data pipeline. That also meant I had more exposure to how the data pipeline was working. I used this opportunity to establish the entire data pipeline from scratch and maintain the data quality gates to gain some visibility and be able to identify the issues.
More or less at the same time, I also started studying AI and machine learning. I call myself a self-taught AI professional because I learned everything by myself and implemented those things in production as well. I would say I increased effectiveness in the companies across the board by 45%, having understood the business aspect of things and how to apply technology to them. After that, I was also part of a pricing team. In a nutshell, I established the data science team, business intelligence team, and pricing team in Movinga.
After four years, I moved on to Foodspring, an eCommerce company with a supply chain department. There, my aim was to optimize the supply chain efficiency. We implemented a lot of data products on operational efficiency and worked on warehouse optimization, customer segmentation, and so on. Also, we migrated the single source of truth from our shop system to our ERP-based system. Then, I think, after 2 years again, it was time for me to get back to logistics, and I moved to Forto, a company within the B2B segment. I only had one experience with B2B before. And back then, in B2B logistics, the ticket sizes were huge, which also meant a high risk of losing customers. That requires constant innovation and creating value for the customer. We focused on being client-first and implemented AI solutions to unresolved problems, improving data quality. Overall, I’ve spent over nine years working with AI, data quality, and scalable infrastructure.
Jarek Jarzębowski: You’ve been in the field for a while, working in logistics and construction with various companies. What are the main AI technologies driving the current transformation in logistics? Can you highlight some specific technologies and use cases driving these changes?
Varun Tyagi: Let’s start with the fact that artificial intelligence is a broad term, an umbrella term including machine learning, generative AI, deep learning, and other fields. Working in logistics, I’ve used the SciPy library to solve linear problems with its linear algebra functions, which help solve basic equations. I have also worked on defect detection and image captioning solutions. Identifying a defect in a product or in a container before it’s loaded is an important issue in logistics.
Generative AI is a rapidly improving technology with huge potential. However, many issues can be resolved without it, by using enhanced natural language processing techniques like transformers. We have various models and Python libraries for tasks like route optimization and carbon emission reduction. Recommendation systems use T5, a text-to-text transformer, as well as BERT and random forests. What I’m trying to say is that numerous AI tools besides GenAI, such as libraries, clustering algorithms, and both supervised and unsupervised learning, can be leveraged in the supply chain.
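To make the kind of "linear problem" Varun refers to more tangible, here is a minimal, illustrative sketch of a tiny shipment allocation task solved with SciPy's linear programming. The warehouses, costs, supplies, and demands are invented for the example; this is not code from Movinga or Forto, just a toy instance of the optimization such libraries handle.

```python
# Illustrative only: a toy transportation problem solved with SciPy's
# linear programming, the kind of linear problem mentioned above.
import numpy as np
from scipy.optimize import linprog

# Hypothetical shipping costs (EUR per unit) from 2 warehouses to 3 customers.
costs = np.array([
    [4.0, 6.0, 9.0],   # warehouse A -> customers 1, 2, 3
    [5.0, 3.0, 7.0],   # warehouse B -> customers 1, 2, 3
])
supply = [60, 50]      # units available per warehouse
demand = [30, 40, 40]  # units required per customer

c = costs.flatten()    # decision variables x[w, j], flattened row-major

# Supply constraints: total shipped out of each warehouse <= its supply.
A_ub, b_ub = [], []
for w in range(2):
    row = np.zeros(6)
    row[w * 3:(w + 1) * 3] = 1
    A_ub.append(row)
    b_ub.append(supply[w])

# Demand constraints: total shipped to each customer == its demand.
A_eq, b_eq = [], []
for j in range(3):
    row = np.zeros(6)
    row[j::3] = 1
    A_eq.append(row)
    b_eq.append(demand[j])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(2, 3))  # optimal shipment plan
print(res.fun)              # total transport cost
```

The same formulation scales to many more warehouses, carriers, and lanes, which is where this kind of route and allocation optimization starts paying off.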
The biggest issues in the supply chain include tackling varying customer demands and fluctuating market dynamics. Operational costs are unpredictable and can shift in either direction. Additionally, sustainability concerns are growing, with customers increasingly vocal about wanting ethically produced goods and sustainable materials. How do we track that, and how do we cater to those needs? All of these dynamics keep changing. Another challenge is the supply chain visibility I was also talking about: there is a lack of visibility and transparency. So, you see, when we are touching upon so many challenges, we can leverage just as many AI applications to address them.
Jarek Jarzębowski: Can you delve into a specific project you’ve worked on, showcasing a real-life application of AI? What challenges or problems were you trying to solve, and how did you approach them? What were the outcomes?
Varun Tyagi: Of course! In Movinga, one of the challenges I saw was sales agents spending most of their productive time (almost 80%) collecting the inventory of items (IOI) from the customers. The inventory of items is a list of all the furniture in the apartment with approximate measurements, so that the agents can calculate the volume and feed the manually calculated volume into our pricing model. This required a lot of manual effort from our agents, leading to inefficiency and potential errors. Another challenge was the large number of untapped leads: spending too much time on one customer left no time for other potential leads.
The issue worsens when agents must spend time onboarding new agents and training them on approaching potential customers, which takes around 2 to 3 weeks. This is particularly challenging when introducing novel technology that most agents are unfamiliar with. And if you add one more technology on top of it, there would be some resistance as well.
Given these challenges, I saw a huge potential in using machine learning to reduce the time spent on each lead, increasing efficiency and revenue. For example, if we were to implement an image recognition model, we could ask customers to upload pictures of their apartments. Our proprietary ML algorithm could then identify the area and furniture, generating an IOI list for our sales agents. This would ensure accurate calculation of the two critical factors for pricing: the apartment area and the volume to be relocated. This was a problem that we wanted to mitigate, thereby supporting our sales agents.
In response, the team implemented a combination of object detection and image recognition. A CNN and linear algebra were used to identify the furniture and calculate the area of the apartment and the volume to be moved. This model was trained on a large set of labeled data using our proprietary pipeline. It analyzed images of apartments, furniture, and furniture brands to determine area and volume. The UI displayed the apartment’s total area and a list of furniture items with their volumes, calculating the total volume. Customers could verify this information in the UI. Once approved, the data, including volume and area, was sent to our pricing model to calculate the final price, displayed on a pricing calendar.
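Movinga's actual pipeline is proprietary, so the sketch below only illustrates the general idea with an off-the-shelf, pre-trained object detector from torchvision and a made-up volume lookup table. The labels, volumes, and file path are assumptions for the example, not the production system.

```python
# Hypothetical sketch: detect furniture in an apartment photo with a
# pre-trained detector and map each item to an approximate volume (IOI).
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Made-up volumes in cubic metres; a real system would learn these from
# labeled data and brand/size estimation.
APPROX_VOLUME_M3 = {"chair": 0.4, "couch": 1.5, "bed": 2.0, "dining table": 1.0}
# Standard COCO class ids for the furniture categories used above.
COCO_LABELS = {62: "chair", 63: "couch", 65: "bed", 67: "dining table"}

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def inventory_of_items(image_path: str, score_threshold: float = 0.7):
    """Return a rough IOI list and total volume for one apartment photo."""
    img = convert_image_dtype(read_image(image_path), torch.float)
    with torch.no_grad():
        detections = model([img])[0]
    ioi, total_volume = [], 0.0
    for label, score in zip(detections["labels"], detections["scores"]):
        name = COCO_LABELS.get(int(label))
        if name and float(score) >= score_threshold:
            ioi.append({"item": name, "volume_m3": APPROX_VOLUME_M3[name]})
            total_volume += APPROX_VOLUME_M3[name]
    return ioi, total_volume

# Example (hypothetical path): items, volume = inventory_of_items("apartment.jpg")
```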
Managing data governance was crucial to prevent any potential leaks of personal information - but more on that later. After implementing our robust ML model, lead time decreased from 45-60 minutes to just 10-15 minutes for calculating the IOI. Our algorithm accurately identified furniture from pictures, providing apartment area and volume with 85-90% accuracy in 2-3 minutes. The remaining time was dedicated to customer confirmation. This was a game-changer for us. Not only did we improve the efficiency of the sales agents, but we also increased the revenue, net promoter score, and the bottom line of the company by using just one technology.
Jarek Jarzębowski: Can you also touch upon how you approach the challenge of verifying the output of the model and also the question of privacy?
Varun Tyagi: Basically, we conducted an A/B experiment within our company. We created an API through which internal customers (selected employees) could upload pictures of their apartment and activate the model, mimicking the entire process as we would do in production. We checked the model’s output quality and made changes wherever needed to create a well-functioning ML model. This is how we verified the output of the model.
Coming to the privacy question: we had set up the pipeline in such a way that no one apart from the customers would be able to see the pictures of the apartment. It was extremely important for us to protect both the customers’ data privacy (especially their Personally Identifiable Information, or PII) and our own internal data. Therefore, we set up sturdy encryption algorithms and also hashed all the important information, such as the customers’ addresses, names, professions, etc. We also created data quality gates tracking 4 out of the 7 data quality attributes, as well as a KPI showing the privacy score for each lead.
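As a rough illustration of the hashing step Varun describes (not Movinga's actual code, and the field names are assumed), PII fields can be replaced with keyed hashes before records enter the analytics pipeline, so downstream users can still join on them without seeing the raw values:

```python
# Simplified, assumed example: pseudonymize PII fields with a keyed hash
# before the record lands in the data pipeline.
import hashlib
import hmac
import os

# In production the key would come from a secrets manager, not a default value.
PII_HASH_KEY = os.environ.get("PII_HASH_KEY", "change-me").encode()
PII_FIELDS = {"name", "address", "profession", "email"}

def pseudonymize(record: dict) -> dict:
    """Replace PII values with HMAC-SHA256 digests; keep business fields as-is."""
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS and value is not None:
            digest = hmac.new(PII_HASH_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()
        else:
            out[field] = value
    return out

lead = {"lead_id": 4711, "name": "Jane Doe", "address": "Example Str. 1, Berlin",
        "volume_m3": 23.5}
print(pseudonymize(lead))  # PII hashed, lead_id and volume untouched
```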
To oversee data privacy, we established a dedicated data privacy center involving data engineers, data scientists, and marketers. We were stringent about enforcing these data quality gates and tracked model drift using KPIs. Model drift refers to the decline in model performance due to changes in underlying data distribution. Continuously monitoring these KPIs ensured our model remained up-to-date and used the correct versions of Python libraries.
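One simple way to implement the model drift KPI described above (an assumed approach, not necessarily the one used at Movinga) is to compare the distribution of a model input or output between the training baseline and recent production data:

```python
# Illustrative drift check: flag when recent data no longer matches the
# distribution the model was trained on.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, recent: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Return True if the two samples differ significantly (two-sample KS test)."""
    _, p_value = ks_2samp(baseline, recent)
    return p_value < p_threshold

# Hypothetical example: predicted relocation volumes drifting upward.
rng = np.random.default_rng(0)
baseline_volumes = rng.normal(25, 5, size=5_000)  # m3 at training time
recent_volumes = rng.normal(29, 5, size=1_000)    # m3 from last week
print(drift_alert(baseline_volumes, recent_volumes))  # True -> investigate / retrain
```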
Once model verification, a robust data pipeline, and quality gates were established, we started introducing the new feature to the sales agents. That sums up how we set up the entire data pipeline, ensuring data privacy and security and preventing any information leaks.
Jarek Jarzębowski: Were there any additional challenges that you had to overcome to implement that or any other recent solutions?
Varun Tyagi: The technology is changing almost every six months. We were talking about GPT-2 in 2022, and now we’re already waiting for the launch of GPT-5. It has changed a lot in the last two years. So I think version control and access to the latest, reliable models and libraries are the key. If you recall, two or three months ago there was an issue in which GPT-3.5 or GPT-4 was producing very bizarre results. To prevent such scenarios with GenAI models, we need robust controls. We achieve this by consistently tracking model KPIs like accuracy, model drift, and F1-score. We conducted thorough agent training and developed scripts to guide tool usage, prioritizing relevant questions. Our advantage was having a fully customized API tool trained on our data, shielding us from external ML world changes. We focused solely on version controlling libraries. In-house containment and stringent quality gates were essential to maintain confidentiality.
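The KPI tracking Varun mentions can be as simple as recomputing accuracy and F1-score on a fixed holdout set whenever a model or library version changes. The sketch below is an assumed, minimal version of that idea; the version string and labels are invented.

```python
# Minimal, assumed sketch of per-version KPI tracking for a classification model.
from sklearn.metrics import accuracy_score, f1_score

def evaluate_model_version(version: str, y_true, y_pred) -> dict:
    """Compute the tracked KPIs for one model version on a fixed holdout set."""
    kpis = {
        "model_version": version,
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred, average="weighted"),
    }
    print(kpis)  # in production this would go to a metrics store / dashboard
    return kpis

# Hypothetical holdout: 1 = furniture item identified correctly, 0 = missed.
evaluate_model_version("ioi-model-1.4.2",
                       y_true=[1, 0, 1, 1, 0],
                       y_pred=[1, 0, 1, 0, 0])
```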
The second issue was what kind of questions to ask the customers in case the model does not generate the expected output. This could happen due to poor picture quality, unfamiliar items, or model drift. As I mentioned previously, we also needed to train the agents on how to frame the questions properly. The agents cannot be baffled and say, “Oh! I am not sure why the model is churning out bad results”, “I do not know what the problem is”, or “I have never seen such output before”. Such responses or dilemmas create a very unprofessional image in the minds of customers.
Therefore, we needed to conduct A/B testing. We also employed a marketing team to chart out the script for the agents. Having deliberately created some flaws in the model (of course, without deploying it to production), we wanted to check how a sales agent would respond to the rare failure of the model, and we noted down the responses. Once this work was done, almost 5-6 weeks later, we created the script for all the agents. The Head of Sales was also engaged to ensure that all the agents were going by the planned script. It was clear that even with our most thorough brainstorming, we might miss certain scenarios, so we had a proper answer in case an agent did not find one in the script. The answer was:
“Please do not worry. We understand the importance of achieving the desired results with our machine learning model. Rest assured, our team is highly trained and dedicated to resolving any issues promptly. We value your business and are committed to ensuring your satisfaction. If the issue is not resolved within the next 2 minutes, we will automatically send the output to your email address and provide you with an additional discount on your relocation as a gesture of our commitment to your satisfaction.”
This answer reassured customers that they are Movinga’s priority and highlighted the company’s dedication to resolving issues promptly.
The implementation of AI in logistics sounds really fancy, but before launching our models to production, we had to do enormous work. These were the few challenges we had to resolve before actually putting the model to work in the real world. Ultimately, it took us around six months to get everything in place. Then, we had to make it scalable, serving it through a Flask API or other APIs. These were huge technological challenges that we had to go through. But apart from that, training was the biggest one, I would say.
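For readers curious what "serving it through a Flask API" can look like, here is a bare-bones, hypothetical sketch. The endpoint name and payload are invented, and the model call is replaced with a canned response to keep the example self-contained.

```python
# Minimal, hypothetical Flask service wrapping an ML model behind an HTTP endpoint.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict-ioi", methods=["POST"])
def predict_ioi():
    payload = request.get_json(force=True)
    image_url = payload.get("image_url")
    # A real service would run the detection model here; we return a fixed
    # response so the sketch runs without any model weights.
    result = {
        "image_url": image_url,
        "items": [{"item": "couch", "volume_m3": 1.5}],
        "total_volume_m3": 1.5,
    }
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```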
Jarek Jarzębowski: Let’s leave AI and talk about data management in logistics in general. We have discussed it briefly before, but I want to know your perspective on how difficult and important it is. Also, how to approach data management to make data usable even for future cases?
Varun Tyagi: Yeah, I think the first thing is that we need to get the basics right. We need to recognize that a change in technology also means a change in customers, every one or two years. Also, predicting revenue for companies working with startups can be challenging, as the first-year revenue of a startup is usually not a reliable indicator. It is only after two or three years, when the company has more or less stabilized and is growing at 10%, 20%, or 30% rather than 200%, that the numbers become reliable. So, the approach also depends on the kind of company you’re working with.
What I see is that there are always outliers in data. Thus, we start with a source, with the kind of information that we need. That’s why we call it a data lake: everything we can possibly imagine goes into the data lake, into S3 buckets, or maybe into BigQuery. Choosing the right kind of data is also crucial. If we only want to predict revenue, we can just take the last 2 or 3 years of data that would be helpful for the calculations. However, in the case of sentiment analysis, every single survey matters to us, regardless of whether it comes from a high-profile or a low-profile customer. We need to see whether the customer is satisfied with the services. So, all the NPS survey data, even from the first year of the business, is beneficial in this case.
But in the end, for every prediction, and in all of data management, we need to make sure we have proper data pipelines, ETL/ELT tools, data quality gates, etc., which accurately track the seven attributes of data quality. We need proper data modeling, scaled in a way that gives us a final reporting layer. Having data quality gates across the pipeline matters a lot. Once we have these foundations correct, it’s up to us to pick and choose what range of data we want to take for which kind of use case. This is what overall data management looks like for us, regardless of technology. If the foundation is strong, we can build anything on top of it.
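As an illustration of what such a data quality gate might check (a simplified, assumed example with an invented leads table, not an actual Movinga or Forto pipeline), a few of the data quality attributes can be verified before data is promoted to the reporting layer:

```python
# Simplified, assumed data quality gate covering a few of the attributes above.
import pandas as pd

def quality_gate(df: pd.DataFrame) -> dict:
    """Return per-check results; the pipeline would halt if any check fails."""
    return {
        # completeness: key business fields must not be null
        "completeness": bool(df[["lead_id", "created_at", "volume_m3"]].notna().all().all()),
        # uniqueness: exactly one row per lead
        "uniqueness": bool(df["lead_id"].is_unique),
        # validity: volumes must be positive and plausible
        "validity": bool(df["volume_m3"].between(0.1, 200).all()),
        # timeliness: newest record no older than one day
        "timeliness": (pd.Timestamp.now(tz="UTC")
                       - pd.to_datetime(df["created_at"], utc=True).max())
                      < pd.Timedelta(days=1),
    }

now = pd.Timestamp.now(tz="UTC")
leads = pd.DataFrame({
    "lead_id": [1, 2, 3],
    "created_at": [now - pd.Timedelta(hours=h) for h in (1, 2, 3)],
    "volume_m3": [12.5, 30.0, 8.0],
})
print(quality_gate(leads))  # all True -> data may enter the reporting layer
```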
Jarek Jarzębowski: Some people say that we are still in the infancy stage of AI, even though its history is quite long. What do you think about the future of AI in logistics? How will it change or shape the industry?
Varun Tyagi: I would say it’s going to be huge, especially considering how long supply chain risk management has already been in place. Let’s go back to the UPS Orion era. As a system, Orion was trying to avoid left turns in the US with the aim of maintaining the cost-effectiveness of transportation. Since taking a left turn also meant a higher probability of an accident, there was a risk management element to it as well. So they just wanted to avoid all these situations. The other use case, if I remember correctly, was in Walmart. Its network used AI to analyze historical trends and weather patterns, as well as market sentiment. It also identified potential supply chain disruptions. So, it’s been used for a long time.
I think, in the future, most of our work will be governed by AI. Take voice assistants, for instance, handling text-to-speech, speech-to-text, and so on. We might soon have AI agents, supply chain or sales agents, that can do the work for you. I’m not saying that it will completely replace the workforce, because these tasks still require that human element, but it will become integral to our work. Look at traditional invoicing: it will likely be gone in the future, streamlined by blockchain.
Marketing and hiring decisions in the supply chain will also evolve with AI-based interfaces automatically screening applicants. In terms of operational efficiency, startups are developing solutions to ensure container safety at sea, requiring extensive data and machine learning analysis. AI will impact almost every department, enhancing their operations significantly.
Many people are still afraid of AI, fearing it might take over the world, remembering dystopian scenarios like in Terminator. But I think we are very far away from that; it’s not close, even though AI keeps becoming more capable and autonomous. Some time ago, a startup introduced Devin, the first AI software developer. We’re seeing major advancements in AI, especially in the supply chain. However, the foundation needs to be strong. For example, we still have a lot of data quality issues in supply chain and logistics, which need to be addressed before we implement AI solutions.
Jarek Jarzębowski: So probably the first steps for the companies to take would be to focus on those fundamentals. But what next? How should they approach adopting AI solutions? Where should they start?
Varun Tyagi: If the companies already have a proper data foundation, they should focus on gaining real-time visibility in the supply chain.
That’s the basic approach to using data: not just applying AI to the problem, but first understanding the problem and then determining which technologies might be useful. Getting this setup right is the first thing. Real-time visibility is crucial, yet 80% of companies still lack it - and it can be achieved with proper data and AI technologies.
The companies should also be aware of sustainability concerns. For example, as I mentioned before, around 80% of global trade goes through sea shipments. That also means that a large share of the carbon emissions from trade comes through the sea. Shipping consumes a lot of oil, uses a huge amount of energy, and pollutes the environment. As a part of sustainability efforts, companies should come up with the different kinds of route optimization algorithms I was telling you about. And not just that: they should also adopt alternative fuels, meaning the green fuels that Forto is also providing. We should go beyond data and think about environmental concerns.
These are the foundations a company needs to set up before moving on to sophisticated algorithms. Then, they can run defect analysis on a particular container and optimize their warehouses, which is what I tried to do at Foodspring. We just need to begin with the basics, starting with real-time visibility.
Jarek Jarzębowski: Yeah, this is great advice. Thank you very much for sharing your perspective and experience. If someone wanted to connect with you or ask you a follow-up question, where should they look?
Varun Tyagi: I’m writing blogs on Medium, so you can also comment there. My email is
varun.tyagi83@gmail.com, so anybody who wants to approach me for any kind of consulting or advice can always reach out to me there and also on
LinkedIn.
Jarek Jarzębowski: Great, thank you!
Varun Tyagi’s Background
Varun Tyagi is a seasoned expert in business intelligence and data, specializing in AI and scalable infrastructure within logistics and supply chain management. Since completing his MBA, he has helped various companies in the sector scale their businesses, revamping their data pipelines and processes.
Having turned his self-taught expertise in AI and machine learning into years of hands-on experience, he has assisted in various implementations of artificial intelligence in logistics and beyond. He has also built data science, business intelligence, and pricing teams from the ground up.
From Infosys, through Movinga and Foodspring, to Forto, Varun Tyagi has had plenty of opportunities to expand his knowledge of supply chain efficiency optimization. Now, he spreads it through his blog and interviews like this one, contributing to the further development of this niche.
About Movinga
Movinga is Europe’s fastest-growing online provider of removals, revolutionizing the moving process for the 21st century. With its transparent, user-friendly online booking platform, scheduling and booking moves are easier than ever.
Offering guaranteed fixed prices tailored to clients’ needs, Movinga demonstrates that high quality can be affordable. That approach pays off, given its rapid expansion across Germany, France, Austria, and Sweden. The ultimate goal? Making Movinga synonymous with moving worldwide.
Thanks to emerging technologies like GenAI, logistics can spread its wings, streamlining processes to an unprecedented degree. However, technology itself is not the most important factor here; it’s the preparation and proper management of data. In the evolving technological landscape, robust controls are also essential, and the best shield is custom solutions. If you also want to harness the potential within your data in the most efficient way, let’s discuss your idea!
Jarek is an experienced People & Culture professional and tech enthusiast. He is a speaker at HR and tech conferences and a podcaster who shares a lot on LinkedIn. He loves working at the crossroads of humans, technology, and business, bringing out the best of all worlds and combining them in a novel way. At nexocode, he is responsible for leading People & Culture initiatives.