August 1, 2024

Second Quarter 2024: Investment Perspective

INVIDIOUS

On January 17, 1961, President Dwight Eisenhower delivered his farewell address to the nation. Rather than a paternal benediction, Eisenhower delivered a shocking warning. He cautioned that the armaments industry and the defense establishment might subvert public policy in favor of their private interests. The capstone lines read: “We must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.”

But Eisenhower’s concern extended beyond the military-industrial complex. He worried that the government’s dominant role in funding research would give rise to a “scientific-technological” elite who would similarly capture public policy. His caveat sounded less like premonition and more like an assessment of reality. Ike’s realization was that the geopolitical threats, weapons systems and the underlying science had grown so complex that policymakers were wholly dependent on domain experts. And that those experts would not be impartial public servants. Instead, they would have axes to grind and mouths to feed. In a way, it is a story as old as time. Whenever a party has a financial interest in a public policy, it will attempt to influence the decision. Ike realized that, as the economy and technology had become more complex, the pathways of influence and patronage were simply harder to discern.

Today we would describe Eisenhower’s dilemma as a hype cycle. The Soviet Union was a concerted, pervasive threat to human freedom. There is considerable debate about whether the Soviet threat was underestimated or exaggerated. Less than two years after Eisenhower’s address, the Cuban missile crisis erupted. In October 1962, I imagine, most Americans would not have begrudged a penny of the Pentagon’s spending. On the other hand, from 1955 to 1965, the CIA regularly estimated the gross domestic product of the Soviet Union at between $300 and $400 billion. It wasn’t until the fall of the Soviet Union 30 years later that we learned the truth. The actual Soviet GDP was $100-$150 billion. Maybe the analysts were using their best efforts. Or perhaps the numbers were distorted because that was in the best interests of the national security bureaucracy and the defense contractors.

I recently attended a panel discussion on Artificial Intelligence with several Fortune 50 chief technology officers. Each of the speakers averred that generative AI was “for real,” “a new computing paradigm” or “business transforming.” I honestly doubt that any of them knew how a transformer model worked or could explain an attention layer, tokenization or backpropagation. What they knew was that after getting beat up for years on cybersecurity and the cost of their data centers, they were no longer running a cost center. They were at the cutting edge of revenue generation for the enterprise. No more budget cutting. The CEO wanted to give them more headcount. Finally, they had been invited to a panel in front of capital allocators who cared about what they had to say.

Soon came the moment of truth. What types of AI projects are you deploying? An airline executive explained that his company is using generative AI to develop “narratives” for flight delays that can be pushed via text to displaced passengers. The algorithm takes operating data directly from the field—airport delays, weather inputs and crew scheduling systems—and generates a story for why the plane is delayed. Apparently, knowing why you’re waiting in the airport increases loyalty. The audience, heads buried in their laptop screens, nodded appreciatively. That was it? I was waiting for the conclusion that, as a result, the planes got to their destinations faster. Or at least that the airline was able to reimburse fewer hotel nights. Nope. That was it.
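For readers who wonder what such a project amounts to in practice, a minimal sketch follows. It assumes a generic text-generation call and invented field names; nothing here is drawn from the airline's actual system, and the point is only that the project reduces to operating data plus a prompt template.

def build_delay_prompt(flight: dict) -> str:
    # Assemble operating data (airport delays, weather inputs, crew scheduling)
    # into a prompt for a text-generation model. Every field name is hypothetical.
    return (
        "Write a brief, courteous note to passengers explaining this flight delay.\n"
        f"Flight: {flight['flight_number']}\n"
        f"Ground delay: {flight['ground_delay_minutes']} minutes\n"
        f"Weather: {flight['weather']}\n"
        f"Crew status: {flight['crew_status']}\n"
    )

def generate_delay_narrative(flight: dict, complete) -> str:
    # 'complete' stands in for any vendor's text-generation call.
    return complete(build_delay_prompt(flight))

def dummy_model(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned response.
    return "Your flight is delayed about 45 minutes due to thunderstorms at the destination."

flight = {
    "flight_number": "HC123",
    "ground_delay_minutes": 45,
    "weather": "thunderstorms at the destination",
    "crew_status": "inbound crew delayed",
}
print(generate_delay_narrative(flight, dummy_model))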

How did we arrive at this magical moment? Because management consultants like McKinsey have been closely monitoring AI through their contacts in Silicon Valley. Why? Because their job is to sell the C-suite on the next new thing. It could be a threat like Y2K or cyber defense, or an opportunity like the role of blockchain, IoT or the metaverse in the enterprise. The more abstruse, the better. Why hang out in Silicon Valley? Because that’s where new things come from. The business of management consultants is to identify a complex solution and then go find a problem. Deus ex machina—generative AI. Never mind that AI was developed at Bell Labs in the 1950s. Never mind that the neural networks underlying machine learning and deep learning have been rapidly evolving and widely deployed in the enterprise for 15 years. What matters is that a firm called OpenAI released a text generator in 2022, and 100 million people downloaded it in three months. Sort of like the Miracle at Lourdes. For the faithful, that the Blessed Virgin Mary acted in the world was not news. That she made herself visible was glorious. Soon, McKinsey and Goldman Sachs had published forecasts that generative AI would increase GDP by $10-15 trillion in 10 years. Wait a second. Why did Goldman Sachs get in on the act? Well, just like McKinsey, Goldman’s job is to solve problems for CEOs. Except the way they solve problems is by advising on mergers and acquisitions. The point is that $10 trillion gets the blood rushing in the C-suite. And every other week, Sam Altman, Satya Nadella and Jensen Huang are appearing on 60 Minutes to apply the external defibrillators.

Sometime in 2023, virtually every board of directors in America ordered a study (from a management consultant) on the impact of generative AI on its business. I am going to take a wild stab that not a single report was titled “AI: Nothing to See Here.” More likely, the conclusion was that [insert company name] could increase sales by 2% and cut costs by 3%. McKinsey drove the knife in a little deeper by asserting: “The leading companies are already ahead with AI.”1 What CEO wants to be leading a laggard company? Finally, the sell-side analysts from Wall Street explained the bloodless verdict of the market to the CEO: “If you articulate an ‘AI first’ strategy on the next earnings call,” they promised, “your P/E multiple will go up 10%.” So each and every CEO called up her Chief Information Officer. “I need an ‘AI first’ strategy on my desk by next Monday.” And each CIO immediately called her IT consulting firm. “We need to discuss AI projects with you this weekend.” And that is how we got to the panel I attended.

In my younger and more vulnerable years, I used to think that financial bubbles were a simple story of a good idea carried to excess. That bubbles germinate inside a fundamental truth and are subsequently propagated by a wild distortion of that original insight. In the late 1990s, the World Wide Web promised a phase shift in connectivity to information. That was the elemental truth. There were two narrative distortions. The first lie was that internet traffic was following a hyperbolic curve. A 1999 report from PwC asserted that internet traffic was doubling every hundred days. That postulate soon became gospel. Cisco Systems, the leading networking equipment vendor, predicted that internet traffic would grow to 10 exabytes by 2002. The forecast was echoed by the ISPs CompuServe and AOL and the content delivery network Akamai. In other words, it was a widely accepted estimate. It was also off by an order of magnitude. The second lie was a fallacy of composition. What might be achievable for any individual business could not in aggregate be true. So the market shares claimed by any group of internet operators always summed to more than 100%. In the end, the promise of the internet was valid. The time frame and return expectations were preposterous.
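To see how quickly that postulate compounds, a back-of-the-envelope calculation is enough. The arithmetic below simply restates the "doubling every hundred days" claim; it is not drawn from any actual traffic data.

# If traffic doubles every 100 days, the implied annual growth factor is:
days_per_year = 365
doubling_period_days = 100
annual_factor = 2 ** (days_per_year / doubling_period_days)
print(f"Implied growth per year: {annual_factor:.1f}x")          # about 12.6x

# Compounded over the roughly three years from 1999 to 2002:
print(f"Implied growth over three years: {annual_factor ** 3:,.0f}x")  # about 2,000x

Compounding of that sort is how a plausible-sounding postulate turns into forecasts that miss by an order of magnitude.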

I now believe that there is something more to bubbles. Eisenhower’s evocation of the military-industrial complex applies in the financial/corporate sphere. There is a giant circle of vested interests—starting in Silicon Valley and coursing through Wall Street investment banks and management consultants—that shapes incentives and behaviors to its own ends. And no one on the inside has any incentive to depart from the script. It works the way a confidence game does. Here comes the retort that the infrastructure vendors Amazon/AWS, Microsoft/Azure and Nvidia have announced staggering sales increases and even bigger capital expenditure plans. Surely, that is prima facie evidence of success. Well, the picks-and-shovels business is booming. But unless the miners find gold, they will not be coming back for new Levi’s. We are now about six months on from this sequence, depending, of course, on whether your company was a “leader” or a “laggard.” Suddenly, all the sell-side analysts are asking for evidence of “monetization.”

I am going to hazard a guess. There won’t be. I have three reasons for doubt. First, generative AI is a best-fit predictor. That works for tasks where the objective function is narrow and well defined. Like a customer-service interaction: “My package did not arrive. What should I do?” Any task that must cope with subtlety (not well defined) or outliers (not narrow) is more likely to generate hallucinations in the output. [“Hallucination” is the term of art in AI for model failure, but it sounds better than “failure.”] So rolling out AI for mission-critical applications is going to be slow. Second, AI is expensive. An AI query costs 3-5 times as much as a Google search. It carries the burden of needing to be vastly more effective, and it is not. Third, more philosophically, the models’ training data is the internet. The model will produce the average response. Think about works of genius, whether it’s T.S. Eliot’s The Waste Land, Mozart’s Eine kleine Nachtmusik or Abbott and Costello’s “Who’s on First?” routine. I would assert they hinge on surprise—the unexpected note or turn of phrase, a hidden connection that was not obvious but, once heard, seemed as if it had always been there. So by all means, tell me why my plane is late or help me return a shirt. But declarations that generative AI should be viewed as the next steam engine should be met with skepticism. Extraordinary claims require extraordinary evidence. My take is that only a person who has never used a chainsaw could make such a claim.
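For the mechanically inclined, the "average response" point can be stated in a few lines. A generative model assigns a probability to every candidate continuation and, in its plainest decoding mode, keeps the most likely one. The toy distribution below is invented purely to illustrate that bias toward the typical.

# Toy next-token distribution after the prompt "The delay was caused by ..."
# (probabilities are made up for illustration).
next_token_probs = {
    "weather": 0.46,
    "a mechanical issue": 0.30,
    "crew scheduling": 0.20,
    "a flock of geese with a grudge": 0.04,  # the surprising choice
}

# Greedy decoding: always take the highest-probability continuation.
best_fit = max(next_token_probs, key=next_token_probs.get)
print(best_fit)  # "weather" -- the most statistically typical answer wins

Sampling settings can loosen that pull toward the typical, but the probabilities themselves come from whatever the training data makes most common.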

1 McKinsey & Company. The State of AI in 2023: Generative AI’s Breakout Year.

—T. Brad Conger, CFA
Chief Investment Officer


On a quarterly basis, Hirtle Callaghan publishes our perspective on the current market. If you would like to be added to our distribution list and receive the full version of our latest Investment Perspective piece, please contact us.
