AI Ethics and Designing for Responsible AI: Trust, Fairness, Bias, Explainability, and Accountability

Jarek Jarzębowski - January 5, 2022

Artificial intelligence (AI) may be the most powerful technology humanity has ever created. AI is already changing the way we work, how we interact with each other, and even who gets to participate in society. But with great power comes great responsibility for society and the environment we live in. As we continue to push this technology forward, it is vital that we are aware of the ethical considerations and risks involved with AI and of how to design it responsibly. As AI becomes more prevalent in our lives, it is critical that its creators design for responsible AI by following an ethical framework - a set of principles that helps ensure artificial intelligence systems are fair and accountable.

This article covers topics such as trust, fairness, bias, explainability, accountability, diversity, and data privacy to help you think about designing for responsible AI so your business can thrive now and well into the future.

Why Are Ethics in AI Important?

As human beings, we like to think of ourselves as ethical. For the most part, we believe that there is some notion of right and wrong that can be observed and somehow assessed. Let’s put aside for now whether that is always true and assume that we live according to ethical principles and behave morally. We are therefore not only intelligent beings but also ethical ones, which means our decisions are made not by pure calculation alone but also with emotions and according to our values.

We can argue whether artificial intelligence truly is intelligent, but - once again - let’s put that question aside and focus on AI ethics. Since ethical behavior relates to moral principles and standards, we should ask ourselves whether what we call AI can act righteously. This is not a purely academic dispute, since we already know where AI will and can be used in the real world - autonomous weapons, self-driving cars, and autonomous decision-making in many other areas, to name a few.

What Are the Ethical Concerns of Artificial Intelligence?

The trolley problem (also known as the trolley dilemma) is one of the best-known thought experiments in ethics and psychology. The series of questions usually begins with a trolley running toward a track on which a few people are standing. A bystander can switch the trolley to another track, but then it would kill one person standing there. The decision to intervene and divert the trolley is purely ethical: will the bystander do nothing and let a few people be killed, or take action knowing it will kill one? The experiment can go on and on with different variables changing - how many people are standing on each track, and who they are. We can also change the experiment to resemble real-life situations more closely - for example, legal or medical ones.

We cannot “solve” the trolley problem, as the solution depends on moral and ethical principles. Different people would make different decisions according to their values. Yet when we think of artificial intelligence and AI systems, we need to consider that there will undoubtedly be times when they will have to make such decisions. The question is whether - or how - humane those decisions will be. Such dilemmas are not limited to life-and-death situations, since AI systems are present in so many areas of our lives. Will a job candidate still be considered if he has been to prison? Will we - or should we rather say it - allow an abortion knowing that the woman’s life is in some danger? If so, where would the line be? Is a 5% threat to life big enough, or not? Will male candidates be treated more favorably than female ones? We have difficulty solving these ethical problems with the human brain. Still, that doesn’t mean they should not be considered when designing artificially intelligent systems.

AI, Humans, and Ants

For the most part, we believe that we are good. For example, we don’t like animals to suffer. We even try to alleviate their suffering and to preserve nature; we would certainly not make them suffer on purpose. Or at least this is how we think of ourselves. Take ants. We don’t wish them harm, we do not destroy anthills, and when we see ants walking on the sidewalk, we will probably step over them. When the ants are in our way, though, it is a different story. If we find them at home, we kill them. We buy special powders to lure them and eradicate their population. We try not to harm ants, but we don’t think twice if they somehow cross our plans. If there is an anthill where we want to build a road, we don’t overthink it and simply destroy it. We don’t even consider it an ethical issue.

How is this relevant to AI ethics? Technological progress is constant, and we should assume that one day intelligent systems will be smart enough to control themselves and make decisions that are best for them. We might not program them to kill humans or annihilate the human population, but what would happen if humans stood in the way of AI progress?

Trusting the Black Box of AI

We can see a future where AI algorithms make more and more impactful decisions. We should ask ourselves, though, what that would look like. Let’s imagine that we feed a system some data, and it gives us results, recommendations, or even decisions. What we lack in that scenario is insight into how the decision has been made. As long as regular, traditionally programmed algorithms are making such suggestions, we can look into the code and understand the decision-making process. In the case of AI, it all changes. Usually, we cannot look into, check, and verify the process; we simply have to trust that it has made the right decision. We see only a black box with no easy way to inspect the inner mechanics.

The questions about the decision-making processes will not magically vanish, and rightly so! Trusting the black box of AI will be difficult, even if we see real-world performance and examples. We should not focus only on the technological development of AI possibilities but also on the potential impact on the real world. Thus, creating “ethical AI” should be of utmost importance.

What to Consider When Creating Ethical AI Solutions?

AI Bias - Will the Model Be Prone to Reinforcing Prejudice and Discrimination?

As mentioned in previous paragraphs, verifying decisions made by AI is crucial. We have already seen how AI-made decisions can be unfair and biased. One may ask how that is possible if AI itself cannot be prejudiced. The answer can be found in the biased training data such an algorithm is built upon.

One of the big tech companies tried to use AI in its recruitment process. It built an algorithm that learned trends from the current workforce - who is working at the company, where, with what results, and so on. It then compared candidates with the persona it had built. For some time, the results provided by the artificial intelligence seemed relevant, but fortunately, the company decided to test it further.

We know very well that some of the most common biases we humans hold concern race, sexual orientation, and gender. We also know that women are still underrepresented in tech companies for a myriad of reasons, even though there is no good justification for it. When such data is analyzed by an AI system, it will reinforce the bias, because looking backward, choosing a man would seem like the best possible decision.

Fortunately, the company mentioned above was able to identify the AI bias. Still, we can imagine a future where it would not have found it, and recruitment decisions would have been made under false presumptions, further deepening the gender gap in the company and resulting in a loss of potential talent.

Machine Learning Fairness - What Does It Mean to Have Fairness in an Algorithm?

Algorithmic bias is a mathematically defined concept, whereas model fairness is socially established. The family of bias and fairness metrics in modeling shows how a model’s results can vary across different groups in your data.

Following the argument from the previous paragraphs, we can see a need for fairness in machine learning to ensure that AI systems behave ethically and without bias. In fact, over the last couple of years, we have seen more work done to ensure fairness in machine learning and artificial intelligence.

The idea of quantifying fairness in AI models is far from trivial. First, there are numerous interpretations of fairness for AI models, including disparate impact, disparate treatment, and demographic parity, each of which captures a different aspect of application fairness. Continuously monitoring deployed models and determining whether their performance is fair along these dimensions is a necessary first step towards providing a fair system and, therefore, an unbiased user experience. Fortunately, you can use a range of different solutions to assess fairness and fight bias.
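To make these metrics concrete, here is a minimal sketch of how two of them could be computed for a binary classifier with a single sensitive attribute. The predictions and group labels below are made up for illustration; real fairness toolkits (e.g. Fairlearn, AIF360) offer far more rigorous implementations.

```python
from collections import defaultdict

def group_rates(predictions, groups):
    """Positive-prediction rate per group (e.g. per gender)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical hiring-model outputs: 1 = "invite to interview"
preds  = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
groups = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]

rates = group_rates(preds, groups)
# Demographic parity difference: gap between group selection rates
parity_gap = max(rates.values()) - min(rates.values())
# Disparate impact ratio: lowest rate divided by highest rate
# (the common "80% rule" flags ratios below 0.8)
di_ratio = min(rates.values()) / max(rates.values())
print(rates, parity_gap, di_ratio)
```

In this toy data, men are selected at 80% and women at 20%, so the disparate impact ratio of 0.25 would fail the 80% rule and warrant investigation.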

Big tech companies (Google, Facebook, and IBM, just to name a few) are currently attempting to reduce software bias and ensure fairness. The algorithms that are supposed to help with this are far from ideal, but knowing the potential impact AI and ML can have on the lives of real people, we should not stop perfecting them. A few areas with enormous potential effects are health care, insurance, education, careers, and finance. In each of them, we can see how false positives or false negatives might affect people.

Explainable Artificial Intelligence - Why Transparency and Interpretability in Machine Learning Are Crucial

AI explainability is one of the most challenging topics to cover when it comes to AI ethics - not only because it is so broad, but also because, by its very nature, AI rarely offers a ready “explanation” for the decisions its machine learning algorithms make.

To explain a fair and unbiased ML decision-making process, we need transparency regarding what data was used and how the model works (which parameters were changed). This way, one could verify that there are no hidden variables or mechanisms which might later determine unfair outcomes affecting real people’s lives. Transparency should become a standard requirement when developing new products based on AI technologies such as image recognition systems, natural language processing systems, etc.

Explainable artificial intelligence is hard to achieve from a technical point of view, especially when we talk about neural networks or deep learning algorithms. In these cases, it is often tough to understand why a particular decision was made. We can see that interpretability and transparency are crucial aspects of building responsible AI systems. We need to be able to trust those systems and their decisions - if we can’t do that, then the whole point of introducing AI into our lives is lost.
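To see why simple models are easier to trust, consider a linear model: its prediction decomposes exactly into per-feature contributions, which is the kind of transparent explanation a deep network cannot directly give. The feature names and weights below are invented purely for illustration.

```python
# A linear model's score can be decomposed into per-feature contributions,
# giving a simple, fully transparent explanation of the decision.
weights = {"years_experience": 0.5, "skill_score": 0.3, "gap_in_cv": -0.4}
bias = 1.0

def explain(candidate):
    """Return the score plus each feature's exact contribution to it."""
    contributions = {f: weights[f] * candidate[f] for f in weights}
    score = bias + sum(contributions.values())
    return score, contributions

candidate = {"years_experience": 4, "skill_score": 7, "gap_in_cv": 1}
score, contribs = explain(candidate)
# contribs shows exactly how much each feature pushed the score up or down
```

For black-box models, techniques such as surrogate models, SHAP values, or permutation importance try to approximate this kind of attribution after the fact, at the cost of exactness.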

Accountability - Who Is Responsible When an AI System Misbehaves?

The trolley problem mentioned in the first paragraphs is a good starting point for thinking about the accountability of AI systems. Let’s imagine a situation where an autonomous car makes its own decisions, and one day it makes a wrong one, leading to a fatal accident. If no changes had been made to the car and there was no human inside, we might quickly blame the car’s manufacturer or the creator of the algorithm. But what if the situation is more complex (as it almost always would be) and there was a “driver” in the car who could potentially have prevented the accident but did not? What if the owner of the car had made some alterations? What if the vehicle had been driving according to traffic regulations, but the victim had, for example, entered the road where they should not have?

There are myriad potential facets of such a situation, but we would somehow have to find who is accountable. Is it the AI? The creator? The manufacturer? The driver? The victim?

Even more questions arise if we consider what the AI has learned during its “lifetime.” If a potentially risky behavior worked 99 times, does that mean it should be repeated for the 100th time? And if that time something terrible happens - who or what should be responsible? If we want accountable artificial intelligence, we need to design AI systems with accountability in mind.

AI Design Sprint in progress — mapping the confusion matrix and highlighting potential risks and costs of false positives and negatives

Diversity Issues - How to Make Sure AI Reflects the Real World?

Only recently have we started to think more strategically and consciously about diversity and inclusion. In companies, we see D&I positions that would not have existed even a couple of years ago, and we see more discussion about equality in the media. This is a positive change that will have a significant impact, but we cannot stop there. If we neglect these ethical questions while creating machine learning models and AI systems, we will most certainly face even more significant problems in the future.

As we all know, data is not always representative of reality. The more biased a dataset is, the more likely it is that an AI algorithm trained on it will be biased too. If our datasets only contain data on white men, for example, then any AI system developed from them will have a high chance of being racist or sexist. Neglecting data ethics early in the process risks excluding underrepresented groups from AI systems used for delivering essential services and benefits.

To avoid this, we need two things: first, better and more diverse data; second, more diverse teams developing AI algorithms and taking care of data ethics. Only by having different backgrounds and experiences represented in the creation process of artificial intelligence can we ensure that it reflects the real world as closely as possible.
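A first sanity check on the data side is simply comparing the group composition of a training set against known population shares. The groups and shares below are hypothetical; in practice you would use census or domain-specific baselines.

```python
from collections import Counter

def representation_gap(samples, population_share):
    """Compare a dataset's group shares with known population shares.

    Positive values mean the group is overrepresented in the data,
    negative values mean it is underrepresented.
    """
    counts = Counter(samples)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - share
            for g, share in population_share.items()}

# Hypothetical training set vs. a roughly 50/50 population split
training_groups = ["M"] * 80 + ["F"] * 20
gaps = representation_gap(training_groups, {"M": 0.5, "F": 0.5})
# -> women underrepresented by roughly 30 percentage points
```

Such a check does not prove a model will be fair, but a large gap is an early warning that the dataset needs rebalancing or that results must be stratified by group.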

Data Privacy - How to Make Sure Data Is Safe?

As we collect more and more data, it is becoming increasingly important to think about keeping that data safe. We need to ensure that both the AI algorithms and the information they work with are protected.

The GDPR imposes strict obligations and restrictions on the use of personal data. One such obligation concerns anonymized data, which can no longer be associated with a particular person. AI models that tackle the most challenging problems are often trained on personal data. Advanced AI models, such as deep neural networks, operate on large amounts of data to generate predictions or classifications. They often result in a black-box model, where it is challenging to derive exactly which data influenced a decision and how.

On the other hand, it is not impossible for a malicious third party with access to the trained model (but not to the personal data used for training) to reveal sensitive personal information. This is why it is vital to render AI models anonymous. Traditional anonymization techniques for AI rely on data generalization, which unfortunately degrades the model’s accuracy. IBM Data and AI’s latest research on the subject proposes a solution that optimizes the model’s accuracy while anonymizing the training data to boost privacy protection.

There are a few ways of ensuring data ethics and privacy:

  • By encrypting the data.
  • By ensuring that the AI algorithm cannot learn from data outside its dataset.
  • By using secure computation, so that even the people who develop or operate AI systems cannot see or access the raw data.
  • And last but not least, by making it impossible to reverse-engineer the ML model and access the private data used for training (e.g., if the model is open-sourced, it should not be possible to trace back what data was used for training).
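One common building block for several of the measures above is pseudonymization: replacing direct identifiers with keyed, non-reversible tokens before data ever reaches the training pipeline. A minimal sketch using only the Python standard library follows; note that under the GDPR, pseudonymized data is still personal data, so this is a complement to, not a substitute for, full anonymization.

```python
import hashlib
import hmac
import os

# The salt must be stored separately from the dataset and access-controlled;
# without it, tokens cannot be linked back to the original identifiers.
SECRET_SALT = os.urandom(16)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("jane.doe@example.com")
# The same input always yields the same token, so joins across tables still
# work - but the plain identifier cannot be recovered from the token alone
```

Using a keyed HMAC rather than a bare hash matters: a bare SHA-256 of an email address can be reversed by hashing a dictionary of likely addresses, while the secret salt blocks that attack.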

All of these measures are important to keep our data safe and protect ourselves from potential misuse of information. Privacy protection should be an integral part of any ethical AI system.

Designing and Implementing Ethical AI Systems - Responsible AI Practices

Now that we have looked at some of the ethical considerations involved in developing artificial intelligence, it is essential to think about responsible AI practices.

There are a few key points that we should keep in mind:

  • AI Experts need to be aware of the ethical implications of their work and think about how they can avoid potential harm.
  • AI solutions should be designed with accountability and transparency in mind.
  • Data privacy needs to be taken into consideration from the very beginning.
  • Diversity and inclusion must be an integral part of any AI development process.

If we adhere to these ethical principles when creating AI systems, we will be on our way towards more responsible and ethically sound artificial intelligence solutions. Reliable, effective, user-centered AI applications should be built according to proven software development standards and best practices, with rules tailored to machine learning. The following are our top step-by-step recommendations:

  1. Approach each project with a human-centered mindset. Our AI Design Sprint workshops are always an excellent idea for an AI project kickstart.
  2. Use design thinking methodology to iteratively progress with your AI project, ensuring you won’t lose sight of humans.
  3. Make sure your AI team is not only experienced but also diverse.
  4. Determine multiple metrics and benchmarks to assess AI ethics during model development.
  5. Examine the original training data.
  6. Recognize the boundaries of your data and remember - No computation without representation.
  7. Make sure data privacy considerations are taken into account.
  8. Conduct iterative model testing (user testing, e2e tests, unit tests, tests against designated and updated datasets).
  9. Assess and monitor ethical aspects after the model is deployed.
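The workshop photo earlier in the article shows one practical way to ground step 4: mapping the confusion matrix and attaching a real-world cost to each kind of error. A hedged sketch of that exercise in code, with entirely hypothetical counts and costs for a medical-screening model:

```python
def expected_cost(confusion, costs):
    """Weight each cell of a confusion matrix by its real-world cost."""
    return sum(confusion[cell] * costs[cell] for cell in confusion)

# Hypothetical screening model: a missed illness (false negative) is assumed
# to be far costlier than an unnecessary follow-up test (false positive)
confusion = {"tp": 90, "fp": 30, "fn": 10, "tn": 870}   # case counts
costs     = {"tp": 0,  "fp": 50, "fn": 5000, "tn": 0}   # cost per case
print(expected_cost(confusion, costs))  # 30*50 + 10*5000 = 51500
```

Choosing the cost numbers is itself an ethical decision made with stakeholders, not a purely technical one; the code only makes the trade-off explicit so it can be debated and monitored after deployment.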

Conclusions

As we can see, there are a number of ethical considerations that need to be taken into account when developing artificial intelligence. We need to treat artificially intelligent systems with great awareness of any implications they may have for human society. If we want to create ethical AI applications that are responsible and trustworthy, we need to think about data privacy, diversity and inclusion, and accountability. Questions about ethical AI should be raised every single time we implement solutions using machine learning techniques. Your next project may not involve autonomous vehicles or weapons, but you should still always consider any adverse outcomes and long-term consequences your system may bring. By following some simple guidelines, we can ensure that our AI solutions reflect our values as a society.

About the author

Jarek Jarzębowski

People & Culture Lead


Jarek is an experienced People & Culture professional and tech enthusiast. He is a speaker at HR and tech conferences and a podcaster who shares a lot on LinkedIn. He loves working at the crossroads of humans, technology, and business, bringing the best of all worlds together and combining them in novel ways.
At nexocode, he is responsible for leading People & Culture initiatives.
