Apple Boosts Machine Learning Speed with New M5 and A19 GPUs for Faster Workloads

Published by SectorHQ Editorial

Photo by Alexandar Todov (unsplash.com/@alexandar_todov) on Unsplash

Developers who once wrestled with sluggish machine-learning tasks on Apple silicon can expect dramatically faster workloads from the latest M5 and A19 GPUs, according to Apple's announcement.

Key Facts

  • Key company: Apple

Apple’s new M5 and A19 GPUs are built on a revised architecture that expands the number of tensor cores and widens the on‑chip memory bus, according to the product announcement video posted by Apple. The company says the changes deliver up to a 2‑3× speedup for common machine‑learning primitives such as matrix multiplication and convolution, while also cutting latency for inference workloads that run on‑device. Apple highlights that the GPUs are integrated into the next generation of its Mac and iPhone silicon, allowing developers to offload more of the training pipeline to the client without relying on cloud services.

The performance gains stem from a shift to mixed‑precision compute, where the M5 and A19 units can process FP16 and bfloat16 data types natively. Apple’s engineers claim this reduces the amount of data moved between the GPU and the system memory, a bottleneck that has plagued earlier Apple silicon generations. In addition, the GPUs expose a new low‑level API that maps directly to the Metal Performance Shaders framework, giving developers finer control over kernel scheduling and memory allocation.
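To see why native FP16 support cuts data movement, here is a minimal NumPy sketch (not Apple's Metal API; the tensor shape is arbitrary) comparing the bytes a tensor of the same shape occupies in each precision:

```python
import numpy as np

# Illustrative only: halving the element width halves the bytes that must
# move between the GPU and system memory for the same tensor shape.
shape = (1024, 1024)
fp32 = np.zeros(shape, dtype=np.float32)  # 4 bytes per element
fp16 = np.zeros(shape, dtype=np.float16)  # 2 bytes per element

bytes_fp32 = fp32.nbytes
bytes_fp16 = fp16.nbytes
print(bytes_fp32 // bytes_fp16)  # -> 2: half the memory traffic per transfer
```

The same 2× saving applies to bfloat16, which keeps FP32's exponent range at the cost of mantissa precision, which is why it is a common choice for training workloads.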

Apple’s internal benchmarks, shown in the same video, compare the M5 and A19 against the prior‑generation M4 and A18 GPUs on tasks such as image classification with ResNet‑50 and natural‑language processing using BERT‑base. The results indicate a 45% reduction in inference time for ResNet‑50 and a 38% reduction for BERT‑base, while training throughput on a synthetic dataset rises from 1,200 images/second to roughly 3,200 images/second on the M5. The company attributes these improvements to the larger shared L2 cache and the higher clock frequencies of the new tensor cores.
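For context, the reported reductions can be converted into speedup factors with simple arithmetic on the figures quoted in Apple's video (this is not an independent benchmark):

```python
# A latency reduction of r means new time = (1 - r) * old time,
# so the speedup factor is 1 / (1 - r).
def speedup_from_latency_cut(reduction):
    return 1 / (1 - reduction)

print(round(speedup_from_latency_cut(0.45), 2))  # ResNet-50 inference: 1.82x
print(round(speedup_from_latency_cut(0.38), 2))  # BERT-base inference: 1.61x
print(round(3200 / 1200, 2))                     # training throughput: 2.67x
```

All three implied factors fall within the 2‑3× range Apple claims for core primitives, though real‑world gains will depend on how memory‑bound a given model is.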

Analysts have noted that Apple’s focus on on‑device AI aligns with broader industry trends toward privacy‑preserving computation. Wired’s coverage of Apple’s “Neural Engine” emphasizes that the firm has been incrementally expanding its AI hardware capabilities since the iPhone X, and the latest GPUs represent the most substantial leap yet. ZDNet’s recent piece on Apple’s AI comeback similarly points out that the company’s hardware roadmap now includes dedicated accelerators for both training and inference, a capability previously dominated by Nvidia and AMD in the desktop market.

Overall, the M5 and A19 GPUs signal Apple’s intent to make high‑performance machine learning a native part of its ecosystem, reducing developers’ reliance on external compute resources and tightening the integration between software frameworks and silicon. The announced speedups, if realized in real‑world applications, could make Apple devices more competitive for on‑device AI workloads ranging from computer‑vision apps to real‑time language translation.

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.
