The Coral M.2 Accelerator is an M.2 module (either A+E or B+M key) that brings the Edge TPU ML accelerator to existing systems and products. The Edge TPU is a small ASIC designed by Google that accelerates TensorFlow Lite models in a power-efficient manner: it is capable of performing 4 trillion operations per second (4 TOPS) while using only 2 watts of power, which works out to 2 TOPS per watt. For example, one Edge TPU can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 frames per second. This on-device ML processing reduces latency, increases data privacy, and removes the need for a constant internet connection. The M.2 form factor lets you add local ML acceleration to products such as embedded platforms, mini-PCs, and industrial gateways that have a compatible M.2 card slot.
Performs high-speed ML inferencing
The on-board Edge TPU coprocessor is capable of performing 4 trillion operations per second (4 TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS in a power-efficient manner. See the Coral performance benchmarks for more details.
Works with Debian Linux and Windows
Integrates with any Debian-based Linux or Windows 10 system that has a compatible M.2 module slot.
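As a rough sketch of what that integration looks like in code, the snippet below picks the Edge TPU delegate library for the host operating system and builds a TensorFlow Lite interpreter around it. It assumes the Edge TPU runtime (libedgetpu) and the tflite_runtime Python package are already installed per the Coral getting-started guide; the helper name make_interpreter is purely illustrative.

```python
# Minimal sketch: loading the Edge TPU delegate on Debian Linux or Windows 10.
# Assumes the Edge TPU runtime (libedgetpu) and the tflite_runtime package
# are already installed.
import platform
import tflite_runtime.interpreter as tflite

# The shared-library name of the Edge TPU delegate differs by operating system.
# Only the two systems named on this page are covered here.
_EDGETPU_LIB = {
    'Linux': 'libedgetpu.so.1',
    'Windows': 'edgetpu.dll',
}[platform.system()]

def make_interpreter(model_path):
    """Create a TFLite interpreter that dispatches supported ops to the Edge TPU."""
    return tflite.Interpreter(
        model_path=model_path,
        experimental_delegates=[tflite.load_delegate(_EDGETPU_LIB)])
```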
Supports TensorFlow Lite
No need to build models from the ground up. Existing TensorFlow Lite models can be compiled with the Edge TPU Compiler to run on the Edge TPU.
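As a rough illustration of that workflow, the sketch below runs a single inference through the TensorFlow Lite interpreter with the Edge TPU delegate, using a model that has already been quantized and compiled for the Edge TPU. The model file name is a placeholder, and the snippet assumes the Edge TPU runtime and the tflite_runtime Python package are installed.

```python
# Minimal sketch: running an Edge TPU-compiled TensorFlow Lite model.
# Assumes a quantized model was compiled with the Edge TPU Compiler
# (e.g. `edgetpu_compiler model_quant.tflite`); the file name below is a placeholder.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path='model_quant_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])  # 'edgetpu.dll' on Windows
interpreter.allocate_tensors()

# Feed a dummy input with the shape and dtype the model expects.
input_details = interpreter.get_input_details()[0]
dummy = np.zeros(input_details['shape'], dtype=input_details['dtype'])
interpreter.set_tensor(input_details['index'], dummy)

# Run inference on the Edge TPU and read back the result.
interpreter.invoke()
output_details = interpreter.get_output_details()[0]
scores = interpreter.get_tensor(output_details['index'])
print(scores.shape)
```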
For more product and technical information, please visit the Coral M.2 Accelerator A+E key website.