
AICore DX-M1M Module Provides 25 TOPS Edge AI Acceleration in M.2 Form Factor
Key Takeaways
- 25 TOPS AI acceleration in 2242 M.2 form factor
- Consumes ~3 W power, ideal for edge devices
- Supports Windows, Ubuntu, Docker, multiple AI frameworks
- Compatible with Raspberry Pi 5 and Radxa ROCK series
- Offers 1 GB LPDDR4X memory, PCIe Gen3 ×2 interface
Summary
Radxa and DEEPX have launched the AICore DX‑M1M, a 2242‑size M.2 AI accelerator delivering up to 25 TOPS while consuming roughly three watts. The module integrates a DeepX DX‑M1M NPU, 1 GB LPDDR4X memory and PCIe Gen3 ×2 connectivity, fitting compact platforms such as Raspberry Pi 5 and Radxa ROCK boards. Software support spans Windows, Ubuntu and Docker, with the DXNN SDK providing end‑to‑end model compilation and runtime. Priced at about $85, the DX‑M1M targets edge inference workloads like image classification and object detection.
Pulse Analysis
The rapid growth of AI at the edge is driving demand for tiny, power‑efficient accelerators that can be dropped into existing single‑board computers. Radxa’s new AICore DX‑M1M answers that call with a 2242‑size M.2 module that delivers up to 25 tera‑operations‑per‑second while drawing only about three watts. By shrinking the footprint from the earlier DX‑M1’s 2280 form factor, the DX‑M1M fits into tighter enclosures such as the Raspberry Pi 5 or Radxa ROCK series, expanding the pool of devices that can run sophisticated vision models locally.
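The efficiency claim above can be sanity-checked with quick arithmetic using only the figures quoted in the article (25 TOPS peak, roughly 3 W typical draw); sustained real-world throughput will of course vary with the model and precision used:

```python
# Rough power-efficiency estimate from the published specs.
# These are peak marketing figures, not measured sustained numbers.
peak_tops = 25.0      # claimed peak throughput
typical_watts = 3.0   # claimed typical power draw

tops_per_watt = peak_tops / typical_watts
print(f"~{tops_per_watt:.1f} TOPS/W")  # ~8.3 TOPS/W
```

At roughly 8 TOPS per watt, the module sits in the range where passive cooling in a small enclosure is plausible, which is consistent with its M.2 2242 target platforms.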
The module pairs the DeepX DX‑M1M SoC with 1 GB of LPDDR4X memory clocked at 4266 MT/s and a PCIe Gen3 ×2 link, which can be negotiated up to ×4 on the host side. This combination yields a balanced 25 TOPS throughput within a 3‑5 W envelope, making it suitable for image classification, object detection, segmentation and pose estimation tasks. Software support spans Windows 10/11, Ubuntu 20.04‑24.04 and Docker containers, while the DXNN SDK and DX‑All Suite provide end‑to‑end model compilation, runtime and GStreamer integration. The onboard QSPI flash provides fast boot storage, further reducing latency for time‑critical deployments.
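The practical difference between a ×2 and a ×4 link is easy to quantify. The sketch below computes the theoretical per-direction bandwidth of a PCIe Gen3 link from the standard's 8 GT/s lane rate and 128b/130b line encoding; it ignores TLP/DLLP protocol overhead, so real host-to-NPU transfer rates will be somewhat lower:

```python
# Theoretical PCIe Gen3 bandwidth per direction.
# Accounts only for 128b/130b line encoding, not packet overhead.
GT_PER_S = 8.0        # Gen3 raw signaling rate per lane (GT/s)
ENCODING = 128 / 130  # 128b/130b line-code efficiency

def gen3_gbytes_per_s(lanes: int) -> float:
    """Usable GB/s per direction for a Gen3 link with `lanes` lanes."""
    return GT_PER_S * ENCODING * lanes / 8  # divide by 8: bits -> bytes

print(f"x2: {gen3_gbytes_per_s(2):.2f} GB/s")  # ~1.97 GB/s
print(f"x4: {gen3_gbytes_per_s(4):.2f} GB/s")  # ~3.94 GB/s
```

Even at ×2, roughly 2 GB/s per direction is ample headroom for feeding video frames to a 25 TOPS inference engine, which is why the narrower link is not a practical bottleneck for the vision workloads the module targets.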
Priced at roughly $85, the DX‑M1M undercuts many competing edge AI cards that often exceed $150, positioning it as an attractive option for developers and OEMs targeting cost‑sensitive applications. Its broad OS compatibility and plug‑and‑play M.2 installation lower integration barriers, enabling rapid prototyping of smart cameras, drones, and industrial sensors. As more AI workloads migrate from cloud to the edge to reduce latency and bandwidth, modules like the AICore DX‑M1M could accelerate adoption of on‑device inference, prompting larger players to offer similarly compact, low‑power solutions. Early adopters are already testing the module in autonomous retail kiosks, where real‑time object recognition drives inventory management.