Developer Demonstrates Neural Network Training on Apple Neural Engine Through Reverse-Engineered APIs
Key Takeaways
- A developer successfully trained neural networks on Apple's Neural Engine using reverse-engineered private APIs, bypassing Apple's official inference-only restriction
- The project demonstrates that the ANE hardware is capable of training operations, with limitations stemming from software support rather than hardware capability
- The proof-of-concept includes benchmarks and documentation but is explicitly not intended as a production framework or CoreML replacement
Summary
A developer has trained neural networks directly on Apple's Neural Engine (ANE) by reverse-engineering private APIs, bypassing Apple's restriction of the hardware to inference-only use through CoreML. The project, which has garnered over 3,800 stars on GitHub, demonstrates backpropagation running natively on the ANE without using CoreML training APIs, Metal, or GPU compute. The work suggests that the ANE silicon itself can perform training, and that the limitation is software support rather than hardware capability.
The developer, working under the username 'maderix,' created the proof-of-concept by accessing Apple's _ANEClient and _ANECompiler private APIs. The project includes benchmarks documenting ANE performance characteristics, including throughput, power consumption, and SRAM behavior during training. The repository provides implementation code in Objective-C and documents the reverse-engineering process.
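The article does not show how the private APIs are reached, but the general mechanism for calling unexported entry points is runtime symbol resolution. The sketch below is not the project's code: it demonstrates the standard dlopen/dlsym pattern against the portable math library, with the ANE-specific framework path left as a commented, hypothetical assumption.

```c
/*
 * Illustrative sketch only. A private-API client loads a framework binary at
 * runtime and resolves symbols by string name, since no public header
 * declares them. The macOS framework path in the comment below is an
 * assumption, not taken from the project; to keep this sketch portable it
 * resolves "cos" from the standard math library instead.
 */
#include <dlfcn.h>
#include <stddef.h>

typedef double (*unary_fn)(double);

/* Resolve a function by name from a shared library -- the same shape a
 * private-API call takes. Returns NULL if the library or symbol is missing. */
unary_fn load_cosine(void) {
    /* On macOS the target would be a private framework binary, e.g. (hypothetical):
     *   /System/Library/PrivateFrameworks/ANECompiler.framework/ANECompiler */
    void *handle = dlopen("libm.so.6", RTLD_NOW);
    if (handle == NULL)
        return NULL;
    /* dlsym looks the symbol up purely by name, which is what makes
     * unexported, undocumented entry points reachable at all. */
    return (unary_fn)dlsym(handle, "cos");
}
```

The mechanism is the easy half; the reverse-engineering effort described above lies in working out what argument structures the resolved private functions actually expect.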
The creator emphasizes that this is a research project rather than a production framework, intended to demonstrate what's possible when hardware restrictions are bypassed. While Apple officially limits the Neural Engine to inference tasks through CoreML, this work shows the silicon itself has no such technical limitation. The project has attracted significant attention from the AI development community interested in leveraging specialized neural processing units beyond their vendor-imposed constraints.
Editorial Opinion
This project highlights a recurring tension in AI hardware: the gap between what chips can theoretically do and what vendors allow them to do. Apple's Neural Engine is clearly capable of training operations, yet Apple restricts it to inference through CoreML, likely for reasons of thermal management, power consumption, or market positioning. While this reverse-engineering work won't become production software, it raises important questions about whether hardware vendors should artificially limit capable silicon, especially as on-device training becomes increasingly relevant for privacy-preserving AI applications.



