The MERA software framework from EdgeCortix, installable from the public pip repository, enables seamless compilation and execution of standard or custom CNNs (Convolutional Neural Networks) built with industry-standard frameworks on heterogeneous platforms, including the EdgeCortix SAKURA AI coprocessor. Built on Apache TVM, MERA provides a simple API for graph compilation and inference of deep neural networks on SAKURA's DNA AI engine. It also includes profiling tools, code generators, and a runtime, so pre-trained deep neural networks can be deployed after a simple calibration and quantization step. MERA also supports models quantized directly in deep learning frameworks such as PyTorch and TensorFlow Lite.
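As a rough sketch of this compile-and-deploy flow, the example below loads a pre-quantized TensorFlow Lite model and runs it through MERA. The class and method names (`mera.Deployer`, `mera.ModelLoader`, `Platform`, `Target`) and the input shape are assumptions based on the publicly documented MERA workflow and may differ between MERA releases; consult the EdgeCortix documentation for the authoritative API.

```python
# Minimal sketch of the MERA compile-and-run flow described above.
# NOTE: class/method names are assumptions and may not match the installed
# MERA version; check the EdgeCortix documentation before use.
import numpy as np
import mera  # installed via: pip install mera

# Create a deployment workspace and load a pre-quantized TFLite model.
with mera.Deployer("./deploy_sakura", overwrite=True) as deployer:
    model = mera.ModelLoader(deployer).from_tflite("model_int8.tflite")

    # Compile the graph for the SAKURA DNA AI engine; a simulator target
    # is assumed here for runs without attached hardware.
    deployment = deployer.deploy(
        model,
        mera_platform=mera.Platform.SAKURA_I,
        target=mera.Target.Simulator,
    )

    # Run inference on a dummy input (shape is an assumption for a typical
    # 224x224 RGB image classifier).
    runner = deployment.get_runner()
    dummy_input = np.zeros((1, 224, 224, 3), dtype=np.float32)
    outputs = runner.set_input(dummy_input).run().get_outputs()
    print([o.shape for o in outputs])
```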
"EdgeCortix SAKURA" is an edge AI coprocessor SoC that delivers industry-leading computational efficiency and latency as a TSMC 12nm FinFET coprocessor (accelerator). SAKURA features our neural processing engine built around the Dynamic Neural Accelerator (DNA) IP, a single core capable of 40 TOPS with a reconfigurable data path connecting all computational engines. Please feel free to contact us with any inquiries.
【Applicable Industries】
■ Intelligent mobility / intelligent electric vehicles
■ Defense and security
■ 5G communication
■ VR/AR, etc.
*For more details, please download the PDF or contact us.
The "Dynamic Neural Accelerator (DNA)" is a flexible deep learning inference IP core characterized by high computational power, ultra-low latency, and a scalable inference engine. It offers excellent power efficiency compared to other standard processors while achieving ultra-low-latency inference on streaming data. Please feel free to contact us with any inquiries.
【Features】
■ Ultra-low-latency AI inference IP core
■ Robust open-source MERA software framework
■ Compatible with both FPGA and ASIC/SoC
(The photo and link below show an example of DNA implemented on the Bittware (Molex Japan) FPGA card IA420F.)
*For more details, please refer to the link below, download the PDF, or contact us.
Reference link: https://www.bittware.com/ja/ip-solutions/edgecortix-dynamic-neural-accelerator/