This is a configurable IP core whose performance and size can be optimized for deep learning inference processing at the edge.
Features:

■ High Precision
The DV700 series arithmetic units support FP16 floating-point calculation as standard, allowing AI models trained on PCs or cloud servers to be used without retraining. Because high inference accuracy is maintained, the series is well suited as an AI processor IP for systems that require high reliability, such as autonomous driving and robotics.

■ Compatibility with Various DNN Models
The DV700 series has a hardware configuration optimized for deep learning inference and can run inference with a variety of DNN models, including object detection, semantic segmentation, pose estimation, and distance estimation.

■ Development Environment (SDK/Tools) that Facilitates AI Application Development
The DV700 series is bundled with a development environment (SDK/tools) for the IP core. The development environment supports standard AI frameworks (Caffe, Keras, TensorFlow), so customers who prepare models in these frameworks can easily run AI and deep learning inference on the DV700 series.
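The FP16 point above can be illustrated with a small, generic sketch (this is ordinary Python, not DV700 SDK code; the `struct` module's `'e'` format handles IEEE 754 half precision):

```python
import struct

def fp32_to_fp16(x: float) -> float:
    """Round a value to IEEE 754 half precision (FP16) and back.

    struct's '<e' format packs a Python float into a 2-byte
    half-precision value, modelling the precision an FP16
    arithmetic unit retains.
    """
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Typical trained-weight magnitudes survive the FP32 -> FP16 round
# trip with small relative error (at most ~2**-11, about 0.05%),
# which is why FP32-trained models can often run on FP16 hardware
# without retraining.
weights = [0.0312, -0.774, 1.5e-3, 2.25, -0.0009765625]
fp16_weights = [fp32_to_fp16(w) for w in weights]
rel_errors = [abs(a - b) / abs(b) for a, b in zip(fp16_weights, weights)]
print(max(rel_errors))
```

Note that FP16 keeps roughly three significant decimal digits; whether that suffices without retraining depends on the model, which is why the vendor highlights FP16 support rather than lower-precision formats.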
Basic Information
ZIA DV740 Overview

Specifications:
- Up to 1k MAC (2 TOPS @ 1 GHz)
- Processor replaced with an optimized controller
- High-bandwidth on-chip RAM (512 KB to 4 MB)
- 8-bit weight compression
- Supported frameworks:
  - Caffe 1.x, Keras 2.x (export)
  - TensorFlow 1.15 (native)
  - ONNX format support
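The "8-bit weight compression" line refers to storing weights in 8 bits rather than 32-bit floating point. A minimal sketch of the generic technique, symmetric per-tensor int8 quantization, is shown below (illustrative only; the DV700 toolchain's actual scheme is not documented here):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization.

    Maps float weights into the integer range [-127, 127] using a
    single scale factor derived from the largest magnitude.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.88, -0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(restored, weights))
# Storage drops from 4 bytes (FP32) to 1 byte per weight, a 4x
# compression, with reconstruction error bounded by scale / 2.
print(q, scale, max_err)
```

Quantizing weights this way shrinks the model footprint and on-chip RAM traffic, which is why weight compression appears alongside the RAM size in the specification list.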
Price Range
Delivery Time
Applications / Case Examples
Please contact us separately.
Company Information
Under the slogan "Visualize the Future," our company has developed and licensed proprietary GPU IP cores for embedded devices and related software, while also operating its own SoC business. As a result, cumulative shipments of products equipped with our GPU IP cores, including game consoles, cameras, and printers, have exceeded 100 million units.

Today, we focus not only on edge AI, which requires real-time inference processing of large amounts of data in fields such as autonomous driving and factory automation, but also on cloud AI, where the learning capability needed to improve inference accuracy at the edge is key. Since our founding in 2002, as one of the world's leading GPU companies, we have leveraged the expertise in miniaturization, low power consumption, and high performance developed for embedded GPUs to provide highly competitive edge AI inference processor IP, module products equipped with it, and software products and cloud services that integrate DMP's AI and image processing technologies, all through our proprietary AI platform, the "ZIA series."