Machine learning at the edge: AI chip company challenges Nvidia and Qualcomm

Today’s demand for real-time data analytics at the edge marks the dawn of a new era in machine learning (ML): edge intelligence. That need for time-sensitive data is in turn fueling a massive market for AI chips, as companies seek to deliver ML models to the edge with lower latency and greater power efficiency.

Conventional edge ML platforms consume a lot of power, limiting the operational efficiency of the smart devices that live on the edge. These devices are also hardware-centric, which constrains their computational capacity and leaves them unable to handle varying AI workloads. Built on GPU- or CPU-based architectures, they are also not optimized for edge-embedded applications with strict latency requirements.

Even though industry giants like Nvidia and Qualcomm offer a wide range of solutions, these mainly combine GPU-based or data center-based architectures and adapt them to the embedded edge, rather than creating a solution designed from scratch for it. Additionally, most of these solutions are configured for larger customers, which makes them prohibitively expensive for smaller businesses.

Essentially, the global trillion-dollar embedded edge market depends on legacy technology that limits the pace of innovation.

A new machine learning solution for the edge

ML company SiMa.ai seeks to fill these gaps with its machine learning system-on-chip (MLSoC), which enables deployment and scaling of ML at the edge. The California-based company, founded in 2018, today announced that it has begun shipping the MLSoC platform to its customers, with the initial goal of helping solve computer vision challenges in smart vision, robotics, Industry 4.0, drones, autonomous vehicles, healthcare and the government sector.

The platform uses a software-hardware codesign approach that emphasizes software capabilities to create edge-ML solutions that consume minimal power and can handle diverse ML workloads.

Built on 16nm technology, the MLSoC’s processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces and system management, all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capability, making it ideal as an edge-based standalone system controller, or as an ML offload accelerator for CPUs, ASICs and other devices.

The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), as well as novel compiler optimization techniques. This software architecture allows SiMa.ai to support a wide range of frameworks (e.g., TensorFlow, PyTorch, ONNX) and compile over 120 networks.
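SiMa.ai has not published its compiler internals, but a classic example of the kind of optimization such an intermediate representation enables is operator fusion: merging elementwise ops into the layer that produces their input, so both run in a single pass over the data with fewer memory round-trips. The sketch below is purely illustrative; the op names and fusion rule are hypothetical and are not SiMa.ai's actual API.

```python
# Toy IR-level fusion pass: fold each elementwise op into the op that
# precedes it in the pipeline, a common optimization in ML compilers.
ELEMENTWISE = {"relu", "add", "bias_add"}

def fuse_elementwise(pipeline):
    """Fuse elementwise ops into their producer so both execute together."""
    fused = []
    for op in pipeline:
        if fused and op in ELEMENTWISE:
            fused[-1] = fused[-1] + "+" + op   # merge into previous op
        else:
            fused.append(op)
    return fused

pipeline = ["conv2d", "bias_add", "relu", "maxpool", "conv2d", "relu"]
print(fuse_elementwise(pipeline))
# ['conv2d+bias_add+relu', 'maxpool', 'conv2d+relu']
```

Real compilers such as TVM perform this on a dataflow graph rather than a linear pipeline, but the payoff is the same: fewer kernel launches and fewer trips through memory.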

The MLSoC promise – a software-centric approach

Many ML startups focus on building pure ML accelerators rather than an SoC that combines a computer vision processor, application processors, CODECs and external memory interfaces, the combination that lets the MLSoC operate as a standalone solution without connecting to a host processor. Other solutions typically lack network flexibility, performance per watt and push-button efficiency – all of which are needed to make ML effortless for the embedded edge.

SiMa.ai’s MLSoC platform differs from existing solutions because it addresses all of these areas at once through its software-first approach.

The MLSoC platform is flexible enough to address any computer vision application, using any framework, model, network and sensor, at any resolution. “Our ML compiler leverages the open-source Tensor Virtual Machine (TVM) framework as a front-end, thus supporting the widest range of ML models and ML frameworks for computer vision in the industry,” Krishna Rangasayee, CEO and founder of SiMa.ai, told VentureBeat in an email interview.

From a performance perspective, SiMa.ai claims its MLSoC platform delivers 10x better performance than alternatives in key figures of merit such as frames per second per watt (FPS/W) and latency.

The company’s hardware architecture optimizes data movement and maximizes hardware performance by precisely planning all computations and data movements in advance, across both internal and external memory, to minimize latency.
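Ahead-of-time planning of this kind can be sketched in miniature: if per-op latencies and dependencies are known at compile time, every op can be assigned a fixed start cycle, leaving no scheduling decisions (and no unpredictable stalls) for runtime. The op names and cycle counts below are invented for illustration and do not describe SiMa.ai's actual scheduler.

```python
# Toy ahead-of-time scheduler: compute a fixed start cycle for every op
# on a single compute unit, honoring data dependencies known at compile
# time, so the runtime simply replays the precomputed schedule.
def static_schedule(latency, deps):
    """Return {op: start_cycle}; `latency` lists ops in topological order."""
    start, finish = {}, {}
    t = 0                                   # next free cycle on the unit
    for op in latency:
        ready = max((finish[d] for d in deps.get(op, [])), default=0)
        start[op] = max(t, ready)           # wait for inputs and the unit
        finish[op] = start[op] + latency[op]
        t = finish[op]
    return start

latency = {"load": 2, "conv": 5, "relu": 1, "store": 2}
deps = {"conv": ["load"], "relu": ["conv"], "store": ["relu"]}
print(static_schedule(latency, deps))
# {'load': 0, 'conv': 2, 'relu': 7, 'store': 8}
```

A production compiler would schedule many units and overlap DMA transfers with compute, but the principle is the same: resolve all timing before the first frame arrives.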

Achieving scalability and immediate results

SiMa.ai offers APIs to generate highly optimized MLSoC code blocks that are automatically programmed onto its heterogeneous compute subsystems. The company has created a suite of specialized and generalized optimization and scheduling algorithms for its back-end compiler that automatically convert an ML network into highly optimized assembly code that runs on the machine learning accelerator (MLA) block.
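Targeting heterogeneous subsystems typically begins with partitioning: deciding which ops run on the ML accelerator, which on the vision processors, and which fall back to the application CPUs. Here is a toy placement pass; the op-to-subsystem mapping is invented for illustration and is not SiMa.ai's actual placement logic.

```python
# Toy partitioning pass: assign each op in a compiled network to one of
# three compute subsystems. The mapping tables are hypothetical.
MLA_OPS = {"conv2d", "matmul", "depthwise_conv"}       # ML accelerator
VISION_OPS = {"resize", "normalize", "color_convert"}  # vision processors

def place(ops):
    """Return {op: subsystem} for every op in the network."""
    placement = {}
    for op in ops:
        if op in MLA_OPS:
            placement[op] = "mla"
        elif op in VISION_OPS:
            placement[op] = "vision_processor"
        else:
            placement[op] = "apu"   # fall back to the application CPUs
    return placement

print(place(["resize", "normalize", "conv2d", "matmul", "argmax"]))
# {'resize': 'vision_processor', 'normalize': 'vision_processor',
#  'conv2d': 'mla', 'matmul': 'mla', 'argmax': 'apu'}
```

In a real toolchain the placement would weigh data-transfer cost between subsystems, not just op type, before the back-end emits code for each target.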

For Rangasayee, SiMa.ai’s next phase of growth is focused on revenue and on scaling its engineering and sales teams globally. As it stands, SiMa.ai has raised $150 million in funding from top VCs such as Fidelity and Dell Technologies Capital. In a bid to transform the embedded edge market, the company has also announced partnerships with key industry players such as TSMC, Synopsys, Arm, Allegro, GUC and Arteris.
