Having seen up close the dangers of algorithmic bias in artificial intelligence, CWT’s Matthew Newton has some suggestions for risk mitigation.
Artificial intelligence is being deployed at scale in business travel management, with algorithms changing the game for service consistency, efficiency and cost-effectiveness. But this watershed moment threatens to unleash a monster. Algorithmic bias in AI-powered decision-making poses a significant threat to many aspects of our working and traveling lives: it erodes travelers' trust and devalues the mutual benefits of a managed travel program. To guard against these pitfalls, collective effort is essential to dismantle bias and prevent its adverse effects.
As a chief architect responsible for developing customer-centric digital strategies for clients in 139 countries, I now consider tackling bias as important as modernizing core systems and enhancing differentiation. The consequences of ignoring the potential for bias are vast: disparities in budget allocation, reduced employee satisfaction, and employees left feeling undervalued and marginalized.
To tackle algorithmic bias effectively, it’s vital to first recognize its presence and then to understand its underlying causes.
Employee Well-Being (But Only For Some)
I recently had the opportunity to examine the complexities of algorithmic bias with the teams responsible for CWT’s Intelligent Display. This feature utilizes machine learning to recommend relevant and policy-compliant hotels to travelers, ensuring consistency across channels and enhancing the user experience. It is geared towards improving traveler satisfaction, increasing travel program compliance and upholding our duty of care to travelers.
An algorithm like the one behind Intelligent Display may subtly favor hotels in certain regions or neighborhoods, assuming they are safer or more comfortable. This bias disproportionately impacts employees from certain backgrounds. If, for instance, the algorithm consistently recommends upscale hotels in affluent areas, employees from lower-income backgrounds may feel underserved. Affluent areas tend to be pricier, a real burden for employees who pay out of pocket for incidentals. Well-being during business trips can suffer, affecting overall job satisfaction and, potentially, job performance.
A similar algorithm might suggest flights with tight layovers, favoring efficiency. However, if it fails to consider the requirements of employees with additional needs or those traveling with families, travelers might miss connections and feel stressed, leading to out-of-program travel behavior. The program’s success hinges on fair and inclusive decision-making, which biased algorithms can severely undermine.
There’s a threat to financial equity, too. A company’s AI-driven expense management system, while efficient, may approve higher travel budgets for senior executives more readily than for junior staff. Implicit biases — assuming senior roles require more expensive accommodation, for example — can lead to financial disparities. Some departments may consistently receive larger budgets, affecting overall cost management strategies.
The Many Shapes Of Biased Business Travel
Algorithmic bias includes implicit biases, statistical biases and biases present in data used to train algorithms. All forms amplify favoritism or discrimination by baking in and scaling inherent biases.
- Implicit bias can lead to preferential treatment or discrimination in flight or hotel recommendations based on factors like gender, ethnicity and age.
- Statistical bias occurs when algorithms favor or disadvantage certain groups based on patterns in statistical data. For instance, if historical travel data reflects the preferences of a specific demographic, the algorithm might perpetuate those patterns, resulting in unequal access to travel choices or resources for individuals outside the majority group (a minimal check for this is sketched after this list).
- The biases present in training data can also threaten the fairness of AI systems. If a subset of past travel records used to train an algorithm indicates that employees from a particular region are given fewer travel options, the algorithm may amplify this bias, limiting travel choices for individuals from that region.
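To make the statistical-bias idea concrete, here is a minimal sketch of a demographic-parity check one could run on recommendation logs. The data and column names (traveler_group, was_recommended_upscale) are hypothetical, not drawn from CWT's systems; the point is simply to compare how often each group receives a favorable outcome.

```python
# A minimal, hypothetical sketch of a statistical-bias check on
# recommendation logs. Column names are illustrative only.
import pandas as pd

def demographic_parity_gap(logs: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest rates at which
    groups receive a favorable outcome (e.g., an upscale-hotel pick)."""
    rates = logs.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy example: recommendation logs for two traveler segments.
logs = pd.DataFrame({
    "traveler_group": ["A", "A", "A", "B", "B", "B"],
    "was_recommended_upscale": [1, 1, 0, 1, 0, 0],
})
gap = demographic_parity_gap(logs, "traveler_group", "was_recommended_upscale")
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 here; near 0 suggests parity
```

A gap near zero is no guarantee of fairness, but a large one is a cheap, early warning that the historical patterns described above are being replayed.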
Racism, sexism and other forms of discrimination have for decades played a role in people’s working lives. Naturally, biases will seep into our technology. Take speech recognition, a market set to surpass $45 billion by 2032. Common speech recognition systems are 13 percent more accurate in detecting male voices than female voices. The chasm widens when ethnicity is added to the mix.
Building A Battle Plan
Implementing ethical guidelines for AI and its uses is essential for achieving and maintaining equitable outcomes. So is conducting regular audits of decision-making processes, collaborating with diverse stakeholders and providing ongoing professional training. Non-profit organizations such as the Partnership on AI and the AI Now Institute, along with the European Commission's Ethics Guidelines for Trustworthy AI, provide a wealth of resources, including algorithmic accountability templates and frameworks for cross-collaboration.
Determining effective interventions early, using bias analyzers such as those offered by PwC, can help companies protect themselves from hidden bias risk and find practical solutions.
Begin with balanced and contextually relevant training data. When we built CWT’s Intelligent Display, we made a conscious choice to minimize the data elements that could contribute to bias. Curating diverse datasets and understanding the context of that data collection reduces the risk. The next imperative is to exclude sensitive attributes like age, gender, ethnicity and socioeconomic status, and define clear boundaries for AI tasks that avoid assumptions perpetuating stereotypes.
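As a rough illustration of that exclusion step, here is a minimal sketch in Python. The column names and pipeline are hypothetical, not CWT's actual implementation: it drops sensitive attributes before training and flags obvious numeric proxies, since removing a column does little if a correlated stand-in (say, neighborhood or job grade) remains.

```python
# A minimal sketch, not CWT's pipeline: drop sensitive attributes before
# training and flag numeric features that strongly correlate with one of
# them. All column names here are hypothetical.
import pandas as pd

SENSITIVE = ["age", "gender", "ethnicity", "income_band"]

def strip_sensitive(features: pd.DataFrame) -> pd.DataFrame:
    """Drop sensitive columns so the model never sees them directly."""
    return features.drop(columns=[c for c in SENSITIVE if c in features.columns])

def flag_proxies(features: pd.DataFrame, sensitive: pd.Series,
                 threshold: float = 0.8) -> list[str]:
    """Flag numeric features whose absolute correlation with a (numerically
    encoded) sensitive attribute exceeds the threshold."""
    numeric = features.select_dtypes("number")
    corr = numeric.corrwith(sensitive).abs()
    return corr[corr > threshold].index.tolist()
```

The design choice here is defense in depth: exclusion alone narrows what the model can see, while the proxy check guards against the same signal re-entering through a side door.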
Bias mitigation is a team sport. It requires experts from diverse domains, with different perspectives, to comprehensively assess and address biases as they evolve and to uphold ethical principles in AI design and evaluation. This multi-faceted approach not only guides internal practices but also contributes to a broader commitment to responsible AI use and customer service excellence.
Sweeping advances in the DE&I realm help to level the playing field and extend opportunities to all, but ignoring algorithmic bias at this critical juncture means fighting only half the battle. This multi-layered endeavor demands collective effort and ongoing dialogue within the industry. By fostering inclusivity and accountability, we can create AI-powered travel systems that prioritize fairness and transparency, ensuring unbiased decision-making for every traveler, regardless of background or identity.
Travel affects everything. The future is built on it. We must train our algorithms to bring everyone along for the ride.
This Op Ed was created in collaboration with The Company Dime's Editorial Board of travel managers.