The AI Boom Won't Save Every Tech Stock
Author: Chris Wood
The world of tech has a new killer app, and history shows that killer apps in tech can have dramatic investment implications, including fuelling investment bubbles.
This writer refers, of course, to AI.
If the interest in this issue was triggered by Microsoft’s announcement on 23 January of a “multiyear, multibillion dollar” investment in ChatGPT-maker OpenAI, the real excitement has been generated by the extraordinary take-up of ChatGPT, the fastest user adoption of any application in history, demonstrating the ultimate network effect.
ChatGPT reportedly reached 100m active users in January, just two months after launching in late November, with its website currently generating more than 1.8bn visits per month.
In this sense, with the arrival of ChatGPT, AI has gone from being a business-to-business proposition, where companies use AI to improve efficiencies, to being a consumer-facing product with massive monetisation potential, one which will, presumably, be incorporated into Microsoft’s suite of products (see Wall Street Journal article: “Microsoft Adds the Tech Behind ChatGPT to Its Business Software”, 16 March 2023).
True, the explosion of interest in AI, with Google (Alphabet) mentioning AI 52 times in one hour on its earnings call last April, has prompted a whole discussion about how the world should try to regulate AI before it takes off. The aim is to avoid some of the undoubtedly negative consequences that accompanied social media, such as the by now well understood echo-chamber effect and the threat to privacy inherent in the business model now known as surveillance capitalism.
This is a worthy undertaking.
But in reality it will be very hard for regulators to control AI once it is upon us, which it now is.
This is why for now it is appropriate to address the investment consequences.
This writer had been operating on the assumption that FAANGM stocks peaked as a percentage of S&P500 market capitalisation back in September 2020. There can be no such conviction now. Microsoft, and probably to a lesser extent Alphabet, are certainly geared to AI, or at least can be perceived as beneficiaries of it.
It is also the case that investors can have no idea at present which version of AI is going to succeed.
ChatGPT is all the rage right now, and first-mover advantage helps a lot.
Still, Elon Musk clearly aspires to build his own version, as do many others; Musk discussed the point in an interesting recent interview on Fox News with Tucker Carlson, who has since departed that channel.
But what seems much more straightforward in an investment context is the need to own the AI equivalent of “picks and shovels”, to use the analogy of the mining sector.
And on this point the word is that AI servers use five to six times more DRAM than a regular server.
This is because AI models are exercises in pattern recognition, which requires massive computing power, and with it massive amounts of memory, to process all the data.
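To put rough numbers on the memory point, consider a back-of-envelope sketch (this writer's own illustration, not a figure from Nvidia or Microsoft; the 175bn parameter count is the publicly reported size of a GPT-3-class model, and fp16 storage is an assumption):

```python
# Back-of-envelope sketch (illustration only): estimate the memory needed
# just to hold a large language model's weights, which helps show why AI
# servers carry so much more memory than regular servers.

def weights_memory_gb(num_parameters: float, bytes_per_parameter: int = 2) -> float:
    """Memory in GB to store model weights (fp16 = 2 bytes per parameter)."""
    return num_parameters * bytes_per_parameter / 1e9

# GPT-3-class model: 175 billion parameters (publicly reported figure).
gpt3_gb = weights_memory_gb(175e9)
print(f"~{gpt3_gb:.0f} GB for the weights alone")  # ~350 GB

# An Nvidia A100 carries at most 80GB of on-board memory, so the weights
# alone span multiple accelerators, before counting activations, optimiser
# state or the data being processed.
print(f"~{gpt3_gb / 80:.1f} A100-80GB cards just to hold the weights")  # ~4.4
```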
From everything this writer has heard, if there is only one AI play to own, it is Nvidia, which helps explain why the stock has risen 165% year to date.
US Regulations Are Supercharging Nvidia Sales in China
Meanwhile, if Nvidia looks set to enjoy a hockey-stick pickup in revenues in coming quarters, on top of the 19% QoQ increase announced for the quarter ended 30 April, it is also interesting to note as an aside that the company has been benefiting from mainland Chinese demand for its so-called A800 GPUs.
These have been designed to fall outside the US export ban, unlike the standard AI chips sold by Nvidia.
These so-called “work-around” chips have the same compute specs as the A100 GPUs but a slower chip-to-chip interconnect, which means more of them have to be purchased for the same level of computing power.
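As a rough illustration of why more chips are needed (this writer's own sketch, not from Nvidia: the 600GB/s and 400GB/s interconnect figures are publicly reported for the A100 and A800 respectively, while treating a workload as purely interconnect-bound is a deliberate simplification):

```python
# Rough sketch (illustration only): why a slower chip-to-chip interconnect
# can mean buying more chips for the same aggregate throughput.

A100_NVLINK_GBPS = 600  # publicly reported A100 chip-to-chip bandwidth, GB/s
A800_NVLINK_GBPS = 400  # capped on the A800 to stay under the US export threshold

# Simplifying assumption: the workload is bottlenecked by chip-to-chip
# communication, so per-chip effective throughput scales with interconnect
# bandwidth, and the chip count needed scales inversely with it.
extra_chips_factor = A100_NVLINK_GBPS / A800_NVLINK_GBPS
print(f"~{extra_chips_factor:.1f}x as many A800s for the same throughput")  # ~1.5x
```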
With an extension of the ban hanging over them like the proverbial sword of Damocles, Chinese companies are buying all the A800s they can get their hands on.
Meanwhile, if AI is the new killer app, with potentially much greater mass adoption than anything that has preceded it, the rest of the tech world, be it smartphones or personal computers, still faces recession risks.
For those absolute-return investors with the ability to do pair trades, the obvious approach is to be long those parts of the tech world linked to AI, and short those with no exposure.
Does AI Pose a Threat to Humanity?
What about the existential threat posed to humanity by AI?
A recent article in the Financial Times concluded that “keeping machines docile enough for humans to control ... will be the governance challenge of our age” (see Financial Times article: “Keeping ultra-intelligent machines docile is the challenge of our age” by John Thornhill, 21 April 2023).
The same article quoted Eliezer Yudkowsky, research lead at the Machine Intelligence Research Institute, as saying that, if nothing changes, the most likely result of building “superhumanly smart AI” is that “literally everyone on Earth will die”.
Such a projected outcome reminds this writer of the epic first two Terminator movies (1984 and 1991) and the destructive role for mankind played by “Skynet”.
Still, the good news is that, unlike the emergence of social media and the resulting explosion of surveillance capitalism, the arrival of ChatGPT, and the related appreciation that AI has now arrived, has caused many, including many technologists, to call for an inquiry into how best to handle the new phenomenon.
Thus, the same FT article noted that more than 27,000 people, including several leading AI researchers, have signed an open letter from the Future of Life Institute calling for a six-month moratorium on developing leading edge models while such an inquiry is held.
So it certainly cannot be said that the potential threats to humanity represented by AI are being ignored in the way that the disastrous implications of social media for traditional media were almost completely ignored.
Still, even if people have an awareness, if not a clear understanding, of the risks, any attempt to regulate seems very challenging, most particularly across national lines.
The reality is that the arrival of a new technology creates a competitive race to win, and the commercial incentives to push on are clearly overwhelming.
In this respect, by far the most insightful article this writer has read on the implications for mankind of AI was written by Henry Kissinger five years ago (see The Atlantic article: “How the Enlightenment Ends” by Henry Kissinger, June 2018).
In this article Kissinger, himself the ultimate example of a conceptual thinker, made the fundamental point that computers cannot think conceptually.
They therefore lack the all-important context in an AI world where “individuals turn into data, and data become regnant”.
Kissinger foresaw a world where a growing percentage of human activities would be driven by AI algorithms.
But, as he noted, these algos, “being mathematical interpretations of observed data”, do not explain the underlying reality that produces them.
As a result, Kissinger viewed AI as “inherently unstable”.
Kissinger ended his article by calling for a presidential commission of “eminent thinkers” to help develop a national vision on how to handle AI.
He also noted that “if we do not start this effort soon, before long we shall discover that we started too late”. That was five years ago.