


Today, we're happy to announce the developer preview of TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices! TensorFlow has always run on many platforms, from racks of servers to tiny IoT devices, but as the adoption of machine learning models has grown exponentially over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models.

TensorFlow Lite is designed with these goals in mind:

- Lightweight: enables inference of on-device machine learning models with a small binary size and fast initialization/startup.
- Cross-platform: a runtime designed to run on many different platforms.
- Fast: optimized for mobile devices, including dramatically improved model loading times, and support for hardware acceleration.

More and more mobile devices today incorporate purpose-built custom hardware to process machine learning workloads more efficiently. TensorFlow Lite supports the Android Neural Networks API to take advantage of these new accelerators as they come available. When accelerator hardware is not available, TensorFlow Lite falls back to optimized CPU execution, which ensures your models can still run fast on a large set of devices.
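To make the inference flow concrete, here is a minimal sketch using the TensorFlow Lite Python interpreter. The toy `double` function and its conversion are illustrative assumptions (in practice you would convert a trained model, and on Android you would typically drive the interpreter from Java or Kotlin):

```python
import numpy as np
import tensorflow as tf

# A trivial computation standing in for a trained model, converted to
# the TensorFlow Lite flatbuffer format.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def double(x):
    return x * 2.0

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]
)
tflite_model = converter.convert()

# Load the converted model and run low-latency inference, the same
# steps an app performs on-device.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

interpreter.set_tensor(input_index, np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(output_index)
print(result)  # [[2. 2. 2. 2.]]
```

The interpreter keeps startup cheap by allocating all tensors up front (`allocate_tensors`), after which repeated `invoke` calls reuse the same buffers.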
