It’s clear that the wave of artificial intelligence is driving the technology sector today. Listen to any earnings call or watch any product announcement from the major hardware or software companies and “AI” will be one of the first and most repeated terms. For good reason: the creation and improvement of AI applications will change nearly every aspect of how you interact with technology.

But there is an interesting distinction between “AI” and what I call “client AI.” Client AI is on-device AI processing, where work that is augmented or improved by artificial intelligence is done locally on your PC, smartphone, or laptop. This differs from how most AI processing is done today, where the work is handled by massive clusters of servers in the cloud or a data center.

Take Adobe’s latest generative AI implementation in its photo editing tools. Today this works by having a user prompt the AI model with some text; that prompt is sent to GPUs in the cloud, which create or augment the image and send it back to the consumer’s device. In a future where AI processing is readily available on the consumer platform itself, moving that work to the client means it can be faster (lower latency) and cheaper (no need for extensive server infrastructure).
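To make the cloud-versus-client distinction concrete, here is a minimal sketch of the two paths in Python. It is a hypothetical illustration, not Adobe’s actual pipeline: the endpoint URL, model file, and tensor name are placeholders, with the requests library standing in for a cloud API call and ONNX Runtime standing in for an on-device inference engine.

```python
# Hypothetical sketch of cloud vs. client inference for a generative edit.
# The endpoint, model file, and tensor names are placeholders.
import numpy as np
import onnxruntime as ort
import requests


def cloud_generate(prompt: str) -> bytes:
    # Cloud path: the prompt travels to remote GPUs and the image travels back.
    # Total latency = network round trip + queueing + server-side inference.
    resp = requests.post(
        "https://api.example.com/v1/generate",  # placeholder endpoint
        json={"prompt": prompt},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content  # generated image bytes


def client_generate(prompt_embedding: np.ndarray) -> np.ndarray:
    # Client path: the same model, exported to ONNX, runs on local silicon.
    # Total latency = local inference only; no server infrastructure involved.
    session = ort.InferenceSession(
        "image_model.onnx",                  # placeholder local model file
        providers=["CPUExecutionProvider"],  # an NPU or GPU provider could go here
    )
    outputs = session.run(None, {"input": prompt_embedding})  # "input" is a placeholder name
    return outputs[0]
```

In the client path nothing leaves the device, which is where the latency and cost advantages described above come from.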

I expect we will see significant announcements this year from all the major computing hardware companies about the future of AI processing on your personal devices. Intel has already confirmed that its Meteor Lake processor will launch in December, and Qualcomm is preparing to host its annual Snapdragon Summit this week, with product details coming in hot. Meanwhile, AMD has its Ryzen AI solution that started shipping this year, and it wouldn’t surprise me to see more about its AI ambitions at CES in January.

Setting aside the intricacies of differentiating between cloud and client AI processing for now, many questions remain about how chip companies are poised to benefit, or falter, in the face of this looming shift.

For example, Intel began talking about how its chips could accelerate AI and machine learning as far back as its Ice Lake notebook CPUs in 2019. That momentum seemed to stall out over the last few years, as consumer interest in running AI on PCs was minimal.

But during its recent announcement of the upcoming chip code-named “Meteor Lake,” the company leaned heavily into the AI-readiness of the platform. This CPU will be the first from Intel to integrate a dedicated NPU (neural processing unit) based on the Movidius product line it acquired in 2016. (This IP was previously called a VPU, or visual processing unit, but was simply renamed this year.)
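For developers, the practical question is how software reaches a dedicated NPU like this. The minimal sketch below uses Intel’s OpenVINO toolkit as one plausible route; the model file is a placeholder, and the “NPU” device string assumes a runtime build that exposes the Meteor Lake NPU plugin.

```python
# Minimal sketch: compile a model to an Intel NPU if the runtime exposes one,
# otherwise fall back to the CPU. The model file name is a placeholder.
from openvino.runtime import Core

core = Core()
devices = core.available_devices                # e.g. ["CPU", "GPU", "NPU"]
target = "NPU" if "NPU" in devices else "CPU"   # assumes the NPU plugin is installed

model = core.read_model("model.xml")            # placeholder OpenVINO IR model
compiled_model = core.compile_model(model, target)
print(f"Model compiled for: {target}")
```

The fallback matters: applications will have to run the same AI features on machines with and without an NPU for years, which is part of why the software story is as important as the silicon.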

Details are still pending on whether Meteor Lake offers a performance advantage over competing products, but the primary problem for Intel is that its goal is to slow market-share deterioration. As the clear and dominant leader in the PC CPU space, Intel must use a complex and powerful chip like Meteor Lake to claw back some designs from AMD or Qualcomm while also raising its ASP (average selling price) to its partners (Dell, HP, Lenovo) in order to profit from this investment.

AMD is also leaning into the future of AI on the PC. It announced its “Ryzen AI” integration in January of this year and started shipping processors with this acceleration engine towards the summer. This IP comes partially from its acquisition of Xilinx, finalized in 2022, but details are still sparse. 

This Ryzen AI integration is only included in a small slice of the company’s product portfolio, but AMD has plans to expand it widely. The company also has a lot of experience with high-performance integrated graphics on its Ryzen CPUs, thanks to the Radeon family of graphics chips it builds, and that GPU can be used for some of the more intense AI compute tasks on your laptop or PC.

If AMD’s Ryzen AI implementation can offer performance at or above that of Intel’s pending Meteor Lake, then it could shift market share. But AMD needs to make up ground on the software side of things, a critical area where Intel has an advantage thanks to its sheer scale of resources.

Qualcomm, meanwhile, started talking about AI acceleration as a part of its platform story back in 2015, leaning into its chips for smartphones that included a CPU, a GPU, and a DSP (digital signal processor). The San Diego-based company has been progressively adding dedicated IP for AI processing across its product lines, including its chips meant for the laptop space.

The company has stated that its next generation of computing platforms for the notebook space, to be announced this week at its annual technology summit, will offer significant improvements in CPU performance as well as AI acceleration. How this will translate into more partners and more design wins for Qualcomm’s PC portfolio is up for debate, as the company has admittedly struggled to gain traction over the last few years. 

The final company worth calling out here is Nvidia. Clearly Nvidia is the king of AI, and the company’s $1 trillion valuation is attributable to that. But what Nvidia is most known for is its massive GPUs and the server clusters built around them, which handle the training of AI models and enable companies like OpenAI (the creator of ChatGPT), Google, and Facebook to innovate with AI on a massive scale. It also has the most robust software ecosystem for AI processing of any technology company, with a multiyear head start over any competitor.

We haven’t heard much from Nvidia when it comes to on-device, client AI. (Recently Nvidia disclosed a generative AI performance increase for its consumer GPUs.) Its GeForce products, which power most gaming PCs on the planet, are actually quite well-suited to high-performance AI computing, but they are also quite expensive and use a lot of power. Laptops that integrate GeForce GPUs could potentially be the best place for software developers and content creators to utilize AI applications. But unless Nvidia has plans for a low-cost, low-power chip built specifically for AI on consumer devices, there is a risk that it misses out on the massive opportunity that Intel, AMD, and Qualcomm are competing over.
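As a rough illustration of why GeForce hardware is already a capable client AI target, here is a minimal PyTorch sketch; the model is a placeholder stand-in, not any specific Nvidia or partner software, and it simply runs inference on a CUDA-capable GPU when one is present.

```python
# Minimal sketch: run a small model on a GeForce GPU via CUDA if available.
# The network is a placeholder; any PyTorch model follows the same pattern.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(        # placeholder network, not a real image or language model
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 512),
).to(device)

with torch.no_grad():
    x = torch.randn(1, 512, device=device)
    y = model(x)

print(f"Inference ran on: {device}")
```

The catch the column points to is not capability but cost and power: the same pattern works on a discrete GeForce GPU, yet Intel, AMD, and Qualcomm are betting that integrated, low-power silicon can handle the everyday version of this work more efficiently.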

I expect a lot of volatility in this market over the next several months and into 2024 as consumers and client devices come to grips with the coming AI tidal wave. Watch what Microsoft has to say about how this will play out; its Copilot technologies for Windows and Office 365 are two of the biggest showcases for how AI will influence how we live and work. Every chip company will have to prove it commands technology that is the most powerful, most exciting, and most likely to change your daily computing habits.

Ryan Shrout is the founder and lead analyst at Shrout Research. Follow him on X (formerly Twitter) @ryanshrout.
