AI is no longer a promise. It’s a force reshaping the world in real time.
For those building the future, the questions are clear and urgent – which architectures will scale? How do we deploy intelligence where it matters: efficiently, securely, and everywhere? These decisions will define technology, economies, and societies for generations.
At Arm, we’re working alongside you to lead this transition – bringing AI everywhere, from cloud to edge. This is our moment, working together, to ensure this ecosystem thrives – one that spans silicon providers, software developers, cloud and edge platforms, and the innovators building next-generation AI experiences. It’s this collaboration that will unlock the intelligence that transforms industries, communities, and lives.

The world won’t wait. Mr. Chris Bergey, Senior Vice President and General Manager of Arm’s Client Line of Business, discusses how, together, we can accelerate into the next era of AI.
We are on the doorstep of the most important moment in the history of technology.
- AI is no longer an idea – it’s a force. We have a once-in-a-lifetime opportunity to shape it and deliver breakthroughs that will transform billions of lives.
- AI workloads are more diverse, compute demands are surging, and the balance between performance and power efficiency has never been more critical. Software is becoming more complex and more costly.
- The Arm ecosystem is building the foundation of AI, from cloud to edge. For more than three decades, we’ve delivered world-class performance and world-leading power efficiency, and we’ve used our insights to evolve the Arm Compute Platform for the AI era.
Everything has changed, and we are living through a shift like no other.
- AI models have accelerated, with 150 new foundation models emerging in the last 18 months. We’re entering the multimodal era – where models can understand and generate text, audio, images, and video. AI has gone mainstream: ChatGPT reached a million users in just five days, making it the fastest-growing consumer product of all time.
- AI is moving closer to us, running directly on our devices. Today, many AI assistants are being developed for the edge first, unlocking new experiences.
- AI agents are here, and physical AI is emerging. The next evolution is physical AI that can sense, interact with, and respond to the physical world. This will transform how we build, move, and operate everything.
Taiwan is at the epicenter of AI
- Taiwan is the backbone of the global AI supply chain. It’s your engineering, innovation, and integration – the tight collaboration across design, hardware, and manufacturing – that makes this ecosystem so critical.
- But with AI’s rise come new demands, and meeting those demands will require more partnership, more innovation, and more ambition than ever before.
- The future of AI rests on three foundational elements: a ubiquitous platform from cloud to edge, support for the world’s largest software developer ecosystem, and world-leading performance per watt. This is the heart of what we do at Arm.
Arm is a platform company built for the age of AI.
- AI is transforming every major market—datacenters, smartphones, cars, even earbuds. To deliver dramatic performance gains without compromising efficiency, we built the Arm Compute Platform—a system-level foundation designed for this moment.
- Across infrastructure, client, automotive, and IoT, our Compute Subsystems deliver pre-integrated, validated platforms that reduce complexity, improve performance-per-watt, and accelerate time to market. We have created the most ubiquitous compute platform in history, with more than 310 billion Arm-based chips shipped. There is no other platform that touches more devices, more developers, or more end users.
The Arm Compute Platform isn’t just defined by its hardware
- The Arm platform is defined by the 22 million developers building on it. They choose Arm because we have created the world’s largest compute footprint. We know software is complex, expensive, and time-consuming – so our software investment needs to evolve for the age of AI. KleidiAI, announced last year, is a suite of AI software libraries that can be plugged seamlessly into AI frameworks to optimize AI workloads running on the CPU.
- In just one year we’ve seen rapid progress. Today, KleidiAI is integrated into the world’s leading AI frameworks (ONNX Runtime, LiteRT, MediaPipe, ExecuTorch, PyTorch, llama.cpp, the MNN framework for Alibaba’s Qwen models, and Tencent Hunyuan). This is unlocking real-world AI experiences across all of Arm’s markets – automotive, cloud, datacenter, IoT, mobile, and PC (as sketched below).
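To make that framework-level integration concrete, here is a minimal sketch, assuming a PyTorch build for Arm (aarch64) whose CPU backend is KleidiAI-enabled. The application code uses only standard PyTorch APIs and nothing KleidiAI-specific; the assumption is that the optimized CPU kernels are engaged underneath the framework rather than called directly.

```python
# Minimal sketch: plain PyTorch CPU inference on an Arm machine.
# Assumption (per the KleidiAI description above): a KleidiAI-enabled framework
# build accelerates this CPU path under the hood; no KleidiAI API is called here.
import torch
from torch import nn

# A tiny stand-in for a matmul-heavy AI workload; a real case would be an LLM or
# vision model running through one of the frameworks named above.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
).eval()

x = torch.randn(8, 1024)

with torch.inference_mode():
    y = model(x)  # the linear layers are where optimized CPU kernels matter

print(y.shape)  # torch.Size([8, 1024])
```

The same pattern applies to the other frameworks in the list: because the integration sits below the framework API, existing model code benefits without being rewritten.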
AI at scale demands efficiency at scale.
- Performance per watt has been our obsession for more than 40 years, and in the age of AI, power efficiency matters more than ever. Arm has been in the datacenter for a long time, starting with our work with AWS. Today, more than 50% of new AWS CPU capacity runs on the Arm-powered AWS Graviton. And it’s not just AWS – every major cloud provider is now building Arm-based instances, achieving power-efficiency savings of at least 40%.
- We’ve entered a new era in the datacenter, where AI-first workloads are more power-hungry than ever. In Taiwan, datacenter power usage is expected to grow 8x by 2028. It’s clear that if we don’t solve for efficiency, AI can’t scale.
Yesterday’s legacy architectures weren’t built for AI
- We need a fundamentally different approach – one that rethinks how we integrate CPUs, GPUs, and the entire system around performance per watt. NVIDIA’s Grace Blackwell platform is an integrated system where every component is optimized to work together.
- Arm Neoverse CPUs are tightly coupled with GPUs and DPUs to eliminate bottlenecks and move data at unprecedented speeds – 25x what we had before.
- Every major hyperscaler is rethinking their infrastructure, understanding that the only way to meet the performance, efficiency, and scalability demands of AI is by designing that system from the ground up – with each piece purpose-built for the role it plays. Arm is at the center of this shift – we expect that nearly 50% of all new chips shipped to hyperscalers by the end of 2025 will be Arm-based.
Intelligence monetization only happens through inference
- While billions are being spent on training – pushing toward 10²⁵ to 10²⁶ FLOPs per run – the real opportunity is in what comes next. There’s a roughly 10¹⁰x gap between training and inference workloads (see the rough arithmetic below), and inference is where AI comes to life: on your phone, in your car, at the edge. That’s how intelligence reaches people and scales, and it’s where performance per watt is critical.
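One illustrative way to read those two figures together – an assumption made for the sake of arithmetic, not a number from the talk – is that if a frontier training run is on the order of 10²⁵–10²⁶ FLOPs and the training-to-inference gap is roughly 10¹⁰x, then a single inference workload lands somewhere around 10¹⁵–10¹⁶ FLOPs:

$$
\frac{10^{25}\text{--}10^{26}\ \text{FLOPs (one training run)}}{\sim 10^{10}\ \text{(stated training-to-inference gap)}}
\;\approx\; 10^{15}\text{--}10^{16}\ \text{FLOPs per inference workload}
$$

Small per request, but multiplied across billions of devices and queries – which is why performance per watt, rather than peak compute, determines whether inference scales.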
When you build a great PC, it looks like a smartphone – and that’s exactly where Arm excels
- Arm powers 99% of smartphones, where our leadership in efficiency began. For years, we’ve been working with leading OEMs and platform providers to extend this to the PC.
- Last year the first wave of AI PCs hit the market, and since then the momentum has only grown. Today, the Arm ecosystem is powering innovation across every major platform: Apple’s custom silicon, Windows on Arm devices, and now ChromeOS and Android with new ultra-efficient platforms built for AI. We expect more than 40% of PCs and tablets to be Arm-based in 2025.
Arm Compute Subsystems for Client – the compute platform for next-generation AI PCs.
- Last year we announced Arm Compute Subsystems for Client, delivering double-digit performance gains across both CPU and GPU. This goes beyond benchmarks – it’s about real, tangible benefits for AI: faster launches, smoother experiences, and dramatically better performance for genAI workloads.
- Now we’re seeing this come to life in devices like the NVIDIA DGX Spark, where the GB10 pairs the NVIDIA Blackwell GPU with a 20-core Arm-based Grace CPU. The end result is enough compute to run 200-billion-parameter models. What used to require racks of datacenter infrastructure now fits on your desk.
- What makes this so powerful is that the same Arm architecture behind the most advanced AI systems is also showing up in ultra-efficient form factors, like Chromebooks. The MediaTek Kompanio Ultra is a mobile-first compute solution, now scaled for thin-and-light laptops – bringing AI into the hands of students, creators, and everyday users.
What’s next: Double-digit performance gains and AI acceleration
- A few years ago we introduced Armv9, purpose-built for the AI era. Last year we launched ‘Blackhawk’ with double-digit IPC gains, and since then we’ve seen amazing handsets and AI PCs from MediaTek and many other partners.
- Today we’re giving you a sneak peek at our next Armv9 flagship CPU, codenamed Travis. Its performance, delivered through new levels of IPC and higher frequencies, is accelerated by the latest version of our Scalable Matrix Extensions (SME), boosting AI performance not just incrementally but severalfold. With Travis and SME, we are seeing double-digit IPC gains.
- Drage, the codename for our next-generation Mali GPU, will unlock more sustained performance for longer gaming sessions and richer content. Stay tuned later this year to learn how this new platform will bring more AI capabilities to your devices and console-level graphics to the palm of your hand.
AI is everywhere. Arm is everywhere – everything we’ve built has prepared us for this moment.
- Together with the Taiwan ecosystem, we’re building the foundation for AI that runs everywhere: in the cloud, at the edge, and everywhere in between. We have a once-in-a-lifetime opportunity – let’s build the future of AI, from cloud to edge, on Arm, together.
Covered By: NCN MAGAZINE / Arm