SkyLink CEO and co-founder Atyab Bhatti provides this overview of some key concepts to kick off a series of slightly high-tech posts about artificial intelligence. 


Notorious for its many complexities, corporate travel is slowly bowing to the transformative capabilities of artificial intelligence (AI). A breakthrough in that revolution is generative AI, an innovation backed by the remarkable prowess of large language models (LLMs). Here’s a look at how these complex machines offer substantial benefits, and a nod to one of their challenges: “hallucinations.”

As highly advanced AI systems, LLMs are built on a foundation of extensive training that teaches them to comprehend and generate human-like text. The broad spectrum of LLMs’ abilities includes generating cohesive ideas on demand, responding immediately to urgent inquiries and distilling information into concise summaries. The implications of these capabilities for the corporate travel industry are vast, bringing increased speed and efficiency to the process.

Understanding how LLMs function requires looking at the roles that parameters and hyperparameters play in their construction. Parameters are the values a model learns during training; they encode its understanding, not unlike a travel agent gradually learning each customer’s unique preferences. Hyperparameters, in contrast, are set before any training kicks off; they shape the operation and set the rules of the game. One could compare hyperparameters to a comprehensive itinerary that lays out a journey’s notable locations, stops and guidelines. Within those rules, a trained LLM retains the flexibility to create a plan that matches a traveler’s specific needs to a tee.
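
For readers who want to peek under the hood, here is a minimal Python sketch of that division of labor. It is a toy model for illustration only, not SkyLink’s system: the learning rate and number of training passes are hyperparameters fixed up front, while the weight and bias are parameters the model adjusts as it learns.

```python
# Toy illustration: hyperparameters are chosen before training;
# parameters are what training itself adjusts.

# Hyperparameters: the "itinerary" fixed before the journey begins.
learning_rate = 0.01   # how big each learning step is
epochs = 200           # how many passes over the training data

# Parameters: the "preferences" the model learns along the way.
weight, bias = 0.0, 0.0

# Tiny dataset: the model should discover y = 2x + 1.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

for _ in range(epochs):
    for x, y in data:
        prediction = weight * x + bias
        error = prediction - y
        # Gradient descent: nudge the parameters to shrink the error.
        weight -= learning_rate * error * x
        bias -= learning_rate * error

print(f"learned weight={weight:.2f}, bias={bias:.2f}")  # approaches 2 and 1
```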


Another critical component of LLM functionality is the attention window: the span of text the model can focus on at once. It lets the LLM sift through large volumes of information without losing the context or details of the underlying request. This proves useful when TMCs have long email threads; the average correspondence between a traveler and an agent runs 4.5 emails, according to TMCs we’ve talked to. An LLM with an expanded attention window can respond to even lengthy threads while the context of the conversation persists.
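
As a rough sketch of how a system might keep a long thread inside that window, consider the snippet below. It uses word count as a crude stand-in for real token counting, and the window size is an assumed placeholder; production systems would use the model’s own tokenizer and limits.

```python
# Illustrative sketch: keep the most recent messages of an email
# thread that fit within the model's attention (context) window.
# Word count is a crude stand-in for real token counting.

CONTEXT_LIMIT = 3000  # assumed budget, in words, for this sketch

def fit_thread(messages: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    """Walk backward from the newest message, keeping what fits."""
    kept, used = [], 0
    for message in reversed(messages):
        size = len(message.split())
        if used + size > limit:
            break  # older messages no longer fit in the window
        kept.append(message)
        used += size
    return list(reversed(kept))  # restore chronological order

thread = ["Booking request...", "Agent reply...", "Traveler change request..."]
print(fit_thread(thread))
```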

LLMs will foster adaptable workflows that minimize the burden for the traveler.

Consider a real-life application of LLMs in corporate travel management: handling the large volume of email communications that arrives from traveling employees daily. Emails can range from reservation confirmations and itinerary changes to general inquiries about services. Rather than staff manually sorting through the mountain of emails, an LLM can be programmed to classify these emails into categories. The LLM scans the content of each message, identifies key language signifiers and swiftly assigns the email a category based on its content. Travel management personnel can then focus on addressing each category more efficiently, saving hours of manual labor and significantly decreasing response times.

This automated system is akin to teaching the AI specific skills (parameters) to understand and sort emails effectively, coupled with setting the right initial training conditions (hyperparameters), such as the learning pace and the total volume of training emails. These preparatory steps ensure the AI learns efficiently and effectively.
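
Here is one hedged sketch of how such a classifier might be wired up, using OpenAI’s chat completions API as an example. The model name, category list and prompt are illustrative placeholders, not a production configuration.

```python
# Illustrative email classifier built on an LLM API.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the categories and model name are placeholders for this sketch.
from openai import OpenAI

CATEGORIES = ["reservation confirmation", "itinerary change", "general inquiry"]

client = OpenAI()

def classify_email(body: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # low temperature: predictable, consistent labels
        messages=[
            {"role": "system",
             "content": "Classify the traveler email into exactly one of: "
                        + ", ".join(CATEGORIES) + ". Reply with the category only."},
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_email("Hi, my 3pm flight to Denver was moved. Can we rebook?"))
```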

Despite the numerous advantages of these technological wunderkinds, they do come with their quirks. One of these is the phenomenon known as hallucinations. Like a travel agent making assumptions about a client’s travel details without enough context, hallucinations occur when an LLM fabricates data without sufficient grounding. 

Hallucinations are influenced by temperature, a setting that can be adjusted whenever a trained language model generates output. Effectively, temperature can be thought of as a creativity level: the higher the temperature, the more willing a model is to choose a less statistically likely word when predicting its next output. Those predictions come from nodes and parameters working together inside the LLM. Nodes are like neurons in a brain; they are the basic units of the network that perform computation. These computations are informed by parameters, which include the weights and biases (the keys to the LLM’s prediction engine). Weights determine the strength of the connection between nodes, while biases shift the result of the weighted sum of inputs.
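
In code, a single node’s computation reduces to a weighted sum of inputs plus a bias, passed through an activation function. The values below are made up purely for illustration:

```python
import math

# One node: inputs arrive over weighted connections, a bias shifts
# the result, and an activation function squashes the output.
inputs  = [0.5, -1.2, 3.0]   # signals from upstream nodes
weights = [0.8,  0.1, -0.4]  # connection strengths (learned parameters)
bias    = 0.2                # learned offset (also a parameter)

weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
output = 1 / (1 + math.exp(-weighted_sum))  # sigmoid activation
print(f"node output: {output:.3f}")
```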

Guided by these settings, LLMs create text by choosing from possible words or phrases. With a low temperature, the LLM sticks to more likely options, resulting in safer, more predictable text. With a high temperature, it’s more willing to choose less likely options, leading to more diverse and potentially creative — but also possibly less coherent — outputs.
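
The sketch below shows the standard mechanics of temperature sampling, with toy next-word scores standing in for a real model’s:

```python
import math, random

# Toy next-word scores (logits) for continuing "Book me a flight to ..."
logits = {"Atlanta": 2.0, "Athens": 0.5, "Tbilisi": 0.1}

def sample(scores: dict[str, float], temperature: float) -> str:
    # Dividing by temperature sharpens (<1) or flattens (>1) the distribution.
    scaled = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(scaled.values())
    probs = {w: v / total for w, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample(logits, temperature=0.2))  # almost always "Atlanta"
print(sample(logits, temperature=2.0))  # less likely words get a real chance
```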

These instances are rare, and rigorous evaluation methods can monitor and control hallucinations to ensure an AI model’s accuracy and reliability. For example, an unsupervised model with limited guardrails might assume that a traveler asking for a flight to Georgia means the country rather than the U.S. state. While this example is harmless on paper, the same kind of mistake carried into execution can have asymmetric consequences.
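
One simple form of guardrail is a rule-based check that flags ambiguous place names for confirmation rather than letting the model guess. The sketch below uses a small, illustrative ambiguity list; a real system would draw on a full location database.

```python
# Illustrative guardrail: flag destination names that could resolve to
# more than one place instead of letting the model guess.
AMBIGUOUS_PLACES = {
    "georgia": ["Georgia, USA", "Georgia (the country)"],
    "san jose": ["San Jose, California", "San José, Costa Rica"],
}

def check_destination(destination: str) -> str:
    options = AMBIGUOUS_PLACES.get(destination.strip().lower())
    if options:
        # Escalate instead of guessing: ask the traveler to confirm.
        return "Did you mean " + " or ".join(options) + "?"
    return f"Destination '{destination}' accepted."

print(check_destination("Georgia"))
```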

In the next part of our series, we’ll discuss personalization.


This Op Ed was created in collaboration with The Company Dime’s Editorial Board of travel managers.
