Myrtle.ai accelerates AI inference on FPGA cards deployed in cloud and on-premises data centres, delivering high throughput for latency-constrained workloads. This enables businesses to meet the low-latency requirements of real-time applications while reducing data centre costs and energy consumption compared to alternatives such as GPUs.
With the rapid growth in AI based on DNNs, including speech services and recommendation systems, businesses using these technologies to engage with their customers are struggling to scale up their IT resources and cope with the resulting increase in energy demand. To address this, cloud companies and enterprises running their own data centres are adopting FPGA accelerator cards such as the Programmable Acceleration Cards (PACs) from Intel or the Alveo cards from Xilinx. Myrtle.ai’s highly efficient MAU Accelerator, combined with its proprietary scalable architecture, enables it to build DNNs that are optimized for specific workloads and run on these FPGAs.
IQ Capital invested in Myrtle at Seed stage in 2017 and followed through to Series A.