Is the AI PC the Future of Personal Computing?
The AI Personal Computer: basically a gaming computer, because “gaming computer” doesn’t go down as well with investors.
I’ve heard it, you’ve heard it, everyone’s heard it: AI, LLM, GPU... the usual tech jargon that seems to foretell an upcoming revolution in technology. I’ll try to approach it from a slightly novel angle: the AI PC. What does that mean? In modern PCs, the powerhouse of general computing is the CPU. However, given how computer usage is projected to change, these new AI PCs could begin to resemble the PCs of yesteryear: those big, bulky machines kept in a corner of the living room.
Gone are the days of standard computers; we are transitioning into a world where GPUs are set to replace CPUs as the norm for personal computers. The key players in this space are Nvidia, Intel, and AMD. Nvidia is experiencing a second wind thanks to the AI boom, having benefitted from the crypto boom before that. AMD has received substantial attention as a strong GPU contender, while Intel looks like the neglected child of the family. There is significant potential for Intel, a forefather of desktop computing, and a huge CAPEX moat is required to even participate in this race.
Intel’s need for a new strategy has become evident. Recently, in a statement to investors, they introduced the concept of the AI PC: essentially, a computer with a robust GPU. This got me thinking about the future of general-purpose computing, and this piece is my wild stab at it: a future where computing happens on one of two levels – either on the personal computer or in the cloud, much as it does today. In this future I envision two potential architectures:
The “Chromebook” architecture, where everything about your computer is stored in the cloud and made queryable via an LLM. Google has experimented with this before, and although the technology was feasible, it didn’t see significant adoption. Cheap GPU access could be the killer use case here; the compute benefit is economy of scale.
The second architecture involves localised LLMs that run on the bulky personal machine, using your actions from the past few days (phone/iPad/desktop) or the files you have on the system as the computational context. Think of it as RAG (Retrieval-Augmented Generation) with a fourth dimension of time added – “4D RAG” – plus multimodality and cross-device functionality.
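A minimal sketch of what treating time as a retrieval dimension could mean, in Python. Everything here is invented for illustration: a real system would use embedding models rather than bag-of-words cosine similarity, and the `Event` records, half-life, and example texts are assumptions. The core idea is simply that a recency decay multiplies relevance, so yesterday’s context from any device outranks last month’s:

```python
import math
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    text: str          # captured context (document snippet, action log, etc.)
    age_hours: float   # how long ago it was captured
    device: str        # "phone", "tablet", "desktop" — cross-device context

def similarity(a: str, b: str) -> float:
    """Crude bag-of-words cosine similarity; a stand-in for embeddings."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, events: list[Event],
             half_life_hours: float = 24.0, k: int = 2) -> list[Event]:
    """Rank events by relevance * exponential time decay (the '4D' part)."""
    def score(e: Event) -> float:
        decay = 0.5 ** (e.age_hours / half_life_hours)
        return similarity(query, e.text) * decay
    return sorted(events, key=score, reverse=True)[:k]

events = [
    Event("booked flights to Lisbon for the conference", 2.0, "phone"),
    Event("draft slides for the Lisbon talk", 30.0, "desktop"),
    Event("grocery list: eggs milk coffee", 1.0, "tablet"),
]
top = retrieve("what am I doing about the Lisbon trip?", events)
```

With the half-life set to a day, the fresh phone event outranks the older desktop event even though both mention Lisbon; the retrieved events would then be stuffed into the LLM’s prompt as context.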
My thesis on why this could become a reality has two parts: technological progress and demographic shift. On the latter, I recall reading an article a few years ago about how young people use computers. A professor mentioned that, since COVID, there has been a noticeable paradigm shift in her computer science students: they struggled to understand directory structures (folders, subfolders), and this had to be taught explicitly, as it is becoming an increasingly obscure concept.
A whole generation of children has been raised with fuzzy search built into everything, rendering the concept of a directory structure obsolete. If you keep a structured, orderly file system, you are already using computers in a way that is becoming outdated. Initially, computers were built to be skeuomorphic, imitating their real-life counterparts: you had folders and files, not too dissimilar to a cabinet in an office. As computers become more advanced and technological competence becomes ubiquitous, the need for such structures disappears.
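The shift can be shown in a few lines of Python. The file names and the query are made up, and the standard library’s `difflib` is far cruder than what real search services use, but it illustrates the two models: the skeuomorphic one requires remembering an exact path, while the fuzzy one resolves a vague, misspelled query against a flat index:

```python
import difflib

# A flat index of file names; no directory hierarchy needed to find anything.
files = [
    "tax_return_2023.pdf",
    "holiday_photos_lisbon.zip",
    "cv_latest.docx",
    "meeting_notes_q3.txt",
]

# Skeuomorphic model: you must remember the exact "cabinet drawer", e.g.
#   ~/Documents/Finance/2023/tax_return_2023.pdf
# Fuzzy model: an approximate query is enough.
match = difflib.get_close_matches("tax retur 2023", files, n=1, cutoff=0.6)
```

The approximate query still lands on `tax_return_2023.pdf`, which is why a generation raised on this interaction never needed to internalise folders at all.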
Let’s extrapolate this change to LLMs. Imagine a generation growing up with LLMs as the norm – it hasn’t happened yet, but it certainly will soon. Currently, most mainstream LLMs are cloud-based, but as computation costs decrease, GPU computing power increases, and LLMs evolve towards smaller, better-optimised weights, we may see viable on-device versions. On-device computation is set to become more powerful. Consider a service like Rewind, “a personalised AI powered by everything you've seen, said, or heard”, except it lives on your local machine, uses your browsing history and documents for context, and actually functions effectively.
The way we currently use computers for search will soon be seen as archaic. Your smartphone will transform from a mere computer in your pocket into an all-around life companion, a second brain linked to your AI Personal Computer. Think of how Tony Stark uses Jarvis in Iron Man, but imagine a Siri you wouldn’t be embarrassed to use in public. This is how I conceptualise “The AI PC” a few decades from now.
Pair this with other exciting developments in tech – 3D graphics generation with techniques like Gaussian splatting (low-compute photogrammetry), XR, and so on – and GPU compute has the potential to explode far beyond its current scope.
Now, going full circle to the CAPEX moat of the aforementioned companies... food for thought.