Building trust in the age of AI – How businesses can build fairness into their machine learning models


Enterprise-wide deployments of AI are constrained by the requirements of scaling any new system or technology: transparency, security, and the ability to work across many systems. But solving these challenges is not enough. Every organization that develops or uses AI, or hosts or processes data, must be able to explain the decisions or recommendations those systems make in terms that people can easily understand.

Much like an impressionable child, a new technology like AI is shaped by the information and data sets it is presented with. Perhaps the training data isn't representative. Or an AI model may unknowingly be fed biased data that skews its output.

A recent Forrester report, “The Ethics Of AI: How To Avoid Harmful Bias And Discrimination,” describes the ideal machine learning model as being FAIR, or:

  • Fundamentally sound
  • Assessable
  • Inclusive
  • Reversible

Businesses should strive to create models that are “FAIR” to protect against harmful bias. Let’s examine Forrester’s recommendations for how organizations can leverage AI for the good of humankind while avoiding the ethical pitfalls associated with perceived discrimination.

The threat of opaque or unfair machine learning models is real, and safety-critical and highly regulated domains are most likely to feel the impact. Organizations must be able to address unfair models or suffer reputational, regulatory, and revenue consequences. Obvious reputational risks aside, if your AI mortgage application rejects applicants who shouldn’t be rejected, or your AI marketing application overlooks potential customers who would buy your product, it’s ultimately just bad business.

How do businesses build models that people trust over time? Lack of transparency into “black boxes” (limited visibility into data sets, computations, assumptions and processing) makes it difficult to pinpoint the reasons for model and processing degradation over time. This increases the risks of using and supporting the models, and undermines the trust being built in them.
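One way to begin opening the box is to measure which inputs actually drive a model’s decisions. The short Python sketch below is a minimal, hypothetical illustration (not a description of any specific IBM tooling) that applies permutation importance, a model-agnostic technique, to a synthetic dataset: each feature is shuffled in turn, and the resulting drop in accuracy shows how heavily the model leans on it.

```python
# Minimal sketch: inspecting a "black box" model with permutation importance.
# The dataset and model are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```

Surfacing which inputs dominate a model’s behavior is a first step toward explaining its recommendations, and toward noticing when a sensitive attribute (or a proxy for one) is doing more work than it should.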

When you’re training an AI model, you also need to be aware of any underlying unfairness in the data. Machine learning, much like human learning, is inherently the “product” of the information provided, within the parameters of its programming. Any company that augments its business processes with machine learning should weigh the strengths, weaknesses, opportunities and threats of an AI it can’t trust, to ensure its best-laid plans don’t go awry.
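Before training even starts, simple checks can flag underlying unfairness in the data. Here is a minimal sketch, assuming a hypothetical pandas DataFrame of loan decisions with a protected attribute column; the column names and values are invented for the example.

```python
# Minimal sketch: auditing training data for representation and base-rate
# skew before a model ever sees it. All data here is hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "M", "F", "M", "M", "M", "F"],
    "approved": [0,    1,   1,   0,   1,   0,   1,   1],
})

# Representation: how much of the training data does each group contribute?
print(df["gender"].value_counts(normalize=True))

# Base rates: does the favorable label occur at very different rates per group?
print(df.groupby("gender")["approved"].mean())
```

If one group is badly underrepresented, or the favorable outcome is far rarer for one group than another, a model trained on this data is likely to reproduce that imbalance.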

The flip-side of bias: Targeted machine learning used for good

There are many scenarios where targeted data actually enhances AI algorithms. When correctly configured, it shortens the time it takes to discover a potential solution and increases the accuracy of search results.

For example, companies looking to optimize their marketing campaigns will create an ideal “persona” of their target audience, and then locate as many prospects as possible who fit that persona profile. AI algorithms can identify prospects that fit these personas across multiple data sources, including CRM applications and social media channels.
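As a rough illustration of how such persona matching can work, the sketch below scores hypothetical prospects against an ideal-customer vector using cosine similarity; the feature encoding and all values are invented for the example.

```python
# Minimal sketch: ranking prospects by similarity to an ideal "persona",
# assuming both are encoded as numeric feature vectors (values illustrative).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

persona = np.array([[0.9, 0.2, 0.7]])    # ideal-customer profile
prospects = np.array([[0.8, 0.3, 0.6],   # one row per candidate prospect
                      [0.1, 0.9, 0.2],
                      [0.7, 0.1, 0.8]])

scores = cosine_similarity(prospects, persona).ravel()
ranked = np.argsort(scores)[::-1]
print("prospects ranked by persona fit:", ranked, scores[ranked])
```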

Other examples where it’s fundamentally sound to use targeted data in machine learning include:

  • Presenting diagnoses or treatment options to medical professionals based on historical patient cases and patient demographics
  • Helping investment advisors to source potentially lucrative investment opportunities within rapidly changing financial markets
  • Positioning products based on an online shopper’s browsing or buying history
  • Recommending news stories or knowledge base support articles based on subscriber or customer profile data
  • Approving loans and deciding mortgage interest rates based on the applicants’ credit scores and previous history with the bank

While these examples of training models on targeted (and therefore non-representative) data are fundamentally sound, it’s important for companies to ensure their AI models are fair and don’t discriminate, so that individuals and communities can trust these systems.

When bias leads to discrimination, and discrimination to weakness

Training an AI engine on insufficient data that doesn’t represent the full range of cases it will encounter causes it to mishandle or ignore anything it can’t recognize.

Consider what happens when a business programs an algorithm to seek out prospects based on a narrow demographic profile drawn from limited market research. It may miss market segments it never considered viable, and create ethical dilemmas in the process.

Data scientists and developers must prevent algorithmic or human bias from creeping into their models, while still using the helpful bias those models identify to differentiate between customers. Companies should assess the full scope of their market opportunity so that an appearance of prejudice doesn’t damage their reputation. A simple statistical check, sketched below, is one place to start.
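One widely used check is the disparate impact ratio, often paired with the “four-fifths rule” from US employment guidelines. The sketch below computes it over a hypothetical table of model decisions; the column names and values are illustrative.

```python
# Minimal sketch: the "four-fifths rule" check on model decisions,
# assuming a hypothetical DataFrame of selection outcomes per group.
import pandas as pd

preds = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,    1,   0,   1,   0,   0,   1,   0],
})

rates = preds.groupby("group")["selected"].mean()
disparate_impact = rates.min() / rates.max()
print(f"selection rates:\n{rates}\ndisparate impact ratio: {disparate_impact:.2f}")
# A ratio below ~0.8 is a common red flag that one group is being
# selected far less often than another and the model needs review.
```

A check like this won’t explain why a model is skewed, but it gives teams an early, objective signal that a decision process deserves closer scrutiny.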

European GDPR rules are forcing companies to change the way they manage personal information and how they document their compliance. Businesses need to strike a balance between gathering and storing enough data to understand their audience and complying with security and privacy regulations.

Many feel that if these concerns are left unchecked there will be a growing possibility of AI reinforcing systemic biases and exacerbating inequality in our business and personal lives.

Inclusivity expands market opportunities and safeguards reputation

There are many real-life scenarios where customers are loyal to a particular retailer, such as a grocery store that carries specialty foods from their country of origin, or one that tailors its marketing campaigns to specific segments. For example, women with families respond favorably to grocery coupons with significant discounts.

By being sensitive to the unique needs of critical segments of their market, and eliminating discrimination, these retailers stand to preserve or grow their customer base by tailoring their product inventory and advertising campaigns around inclusivity.

AI platforms can also help companies listen to their audience across more channels, such as social media, online forums and contact center applications. If there is chatter about a company’s insensitivity to particular genders, races, age groups or ethnicities, it’s best to identify the issue quickly and address it immediately.

IBM’s vision for ethics and transparency in AI

At IBM, we believe we have an inherent obligation to monitor for, and correct, unethical or objectionable bias in the algorithms themselves, as well as any bias introduced by the human-influenced data sets those systems interact with. Watson is transparent about who trains our AI systems, what data was used to train them and, most importantly, what drives our customers’ algorithmic recommendations.

For example, IBM announced plans to release more than 1 million facial images to help better train the AI used for facial recognition. The risk of bias being built into facial recognition AI systems is a concern for any organization developing facial analysis algorithms. For example, does the AI accurately recognize different skin colors and other attributes in a non-discriminatory way? Since AI is only as good as the data that trains it, IBM believes making a diverse dataset like this available will help root out bias.

At the recent VivaTech 2018 event in Paris, IBM CEO Ginni Rometty talked about her vision of the need for ethics and transparency in AI and data management. Rometty invited business leaders to follow IBM’s Principles for Trust and Transparency:

  1. The purpose of AI is to augment human intelligence
  2. Data and insights belong to their creators
  3. AI systems must be transparent and explainable

The Watson team embraces these principles, and they act as our “guiding light” in developing the Watson platform and the customer AI applications built on top of it. We are driven by opportunities in industries like banking, retail and manufacturing to help companies augment their employees’ skills and experiences. You can view Rometty’s entire keynote address here.

Are you looking for more information about how to apply helpful bias to get better insights from your data? Are you concerned about your AI models discriminating against a segment of the population, such as by age, gender, race or sexual orientation? Forrester has guidelines that can help your business. Download the Forrester report, “The Ethics Of AI: How To Avoid Harmful Bias And Discrimination”.

