01-06-2021 | Lattice Semiconductor Ltd | Design & Manufacture
Lattice Semiconductor Corporation has announced enhancements to its Lattice sensAI solution stack for accelerating AI/ML application development on low-power Lattice FPGAs. The update adds support for the Lattice Propel design environment for embedded processor-based development and for the TensorFlow Lite deep-learning framework for on-device inferencing. The new version also includes the Lattice sensAI Studio design environment for end-to-end ML model training, validation, and compilation. With sensAI 4.0, developers can use a simple drag-and-drop interface to build FPGA designs that combine a RISC-V processor with a CNN acceleration engine, simplifying the implementation of ML applications on power-constrained Edge devices.
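To illustrate the kind of workflow the TensorFlow Lite support targets, the sketch below shows a generic flow for preparing a small CNN for on-device inferencing: converting a Keras model to a fully integer-quantized TensorFlow Lite flatbuffer and sanity-checking it with the TFLite interpreter. This is not the Lattice sensAI or sensAI Studio tool flow itself; the model architecture, input shape, and quantization settings are illustrative assumptions only.

```python
import numpy as np
import tensorflow as tf

# A small placeholder CNN standing in for whatever model the application needs.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Representative data drives full-integer quantization, a common preparation
# step for resource-constrained edge targets. Random data is used here only
# as a stand-in for real calibration samples.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# Sanity-check the quantized model on the host with the TFLite interpreter
# before handing it to any downstream FPGA tool flow.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
sample = np.random.randint(-128, 127, size=input_details["shape"], dtype=np.int8)
interpreter.set_tensor(input_details["index"], sample)
interpreter.invoke()
print(interpreter.get_tensor(output_details["index"]))
```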
AI/ML models can be trained to support applications for devices that require low-power operation at the Edge, including security and surveillance cameras, industrial robots, and consumer robotics and toys. The solution stack enables developers to rapidly create AI/ML applications that run on flexible, low-power Lattice FPGAs.
“Lattice’s low-power FPGAs for embedded vision and sensAI solution stack for Edge AI/ML applications play a vital role in helping us bring leading-edge intelligent IoT devices to market quickly and efficiently,” said Hideto Kotani, unit executive, Canon Inc.
“With support for TensorFlow Lite and the new Lattice sensAI Studio, it’s now easier than ever for developers to leverage our sensAI stack to create AI/ML applications capable of running on battery-powered Edge devices,” said Hussein Osman, marketing director, Lattice.