SkyLink CEO and co-founder Atyab Bhatti delivers his third slightly high-tech post about artificial intelligence.
In recent years, large language models (LLMs) and generative AI systems built on transformer architectures have taken the world by storm with their impressive natural language capabilities.
From engaging in human-like dialogue to drafting content to even writing code, these systems “understand” language in profound ways. However, we don’t fully understand the inner workings and exact decision-making processes of these complex neural networks.
At their core, LLMs are prediction engines. They are trained on vast amounts of text to model its patterns, and the output often looks and sounds as if a person wrote it. The profound insight is that language is predictable and, to a large extent, can be modeled mathematically. The phrase “Hi, my name … ” would naturally lead to the word “is.” By learning the statistical relationships among words and phrases, LLMs predict the most likely next tokens (words or word fragments) given an input.
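To make that concrete, here is a minimal sketch of next-token prediction using the small, open-source GPT-2 model via the Hugging Face transformers library. Both choices are ours for illustration; commercial LLMs are far larger, but they rest on the same principle.

```python
# A minimal sketch of next-token prediction, using the small open-source
# GPT-2 model (an illustrative stand-in for much larger commercial LLMs).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Hi, my name"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The last position holds the model's scores for the *next* token.
next_token_logits = logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)

# Show the five most likely continuations.
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")
```

If the statistics of the training data behave as described above, “ is” should rank at or near the top of that list, and that is all the model needs to continue the sentence.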
This predictive ability enables transformers to generate contextually relevant language that mimics human intelligence. Setting aside the philosophical debate over what constitutes human intelligence or sentience, LLMs are remarkably powerful software.
However, if you asked a machine learning/AI researcher to explain precisely how the model arrived at its specific output, they would be hard-pressed to give you a detailed, interpretable answer. We know the principles of how these LLM transformers work, but the exact computations and knowledge representations remain opaque. It’s similar to how the human mind remains an enigma: billions to trillions of potential pathways for every fired synapse eventually form our feelings, thoughts and understanding of life.
Though the field of explainable AI is making progress — developing techniques to better understand the decisions of black-box models — we still have a long way to go before we can fully unpack the “thought process” of an LLM.
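As a flavor of what those techniques look like, here is a hedged sketch of one of the simplest, occlusion analysis: remove one input word at a time and measure how much the model’s confidence in its original prediction drops. The model and library are the same illustrative assumptions as above; real interpretability research goes far deeper.

```python
# A minimal sketch of occlusion-based attribution, one simple
# explainable-AI probe: drop each word from the prompt and measure how
# the probability of the original top prediction changes.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_prob(prompt: str, token_id: int) -> float:
    """Probability the model assigns to `token_id` as the next token."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    return torch.softmax(logits, dim=-1)[token_id].item()

prompt = "Hi, my name"
target_id = tokenizer.encode(" is")[0]  # the expected next token
baseline = next_token_prob(prompt, target_id)

# Occlude one word at a time; a large drop suggests that word mattered.
words = prompt.split()
for i in range(len(words)):
    occluded = " ".join(words[:i] + words[i + 1:])
    drop = baseline - next_token_prob(occluded, target_id)
    print(f"without {words[i]!r}: probability drop {drop:+.3f}")
```

Probes like this hint at which inputs drive an output, but they explain behavior, not the model’s internal reasoning, which is why the field still has far to go.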
Despite these limitations in understanding, LLMs already augment and automate decision-making in various domains. It’s rumored that further breakthroughs will enable AI to make high-quality decisions in even more contexts. Travel managers would be wise to keep up with these developments, as they bring both opportunities and risks to how business travel will be planned, optimized and supported in the future.
Transformer-based AI has made incredible strides in modeling language, but our understanding of its inner workings remains incomplete. Seeking greater interpretability is important for AI research and safety. In the meantime, businesses should thoughtfully leverage the powerful capabilities we do have while remaining cognizant of the gaps in our knowledge.
This op-ed was created in collaboration with The Company Dime’s Editorial Board of travel managers.