Israel's Deci, Intel to jointly optimize deep learning inference on Intel's CPUs 

The deep learning company is to work with the tech giant to deploy innovative AI technologies to mutual customers


Deci, a deep learning company that says it is building the next generation of AI, announced Thursday a broad strategic business and technology collaboration with Intel to optimize deep learning inference on Intel Architecture (IA) CPUs. 

Deci was one of the first companies to join Intel Ignite, an accelerator program designed to support innovative startups in advancing new technologies in disruptive markets. The Tel Aviv-based company, founded in 2019, said it will now work with Intel to deploy innovative AI technologies to mutual customers.

The collaboration is a significant step toward enabling deep learning inference at scale on Intel CPUs, reducing costs and latency and opening up new inference applications. According to Deci, new deep learning tasks can run in real time on edge devices, and companies with large-scale inference workloads can dramatically cut cloud or datacenter costs simply by switching the inference hardware from GPUs to Intel CPUs.

Deci's deep learning platform is said to enable data scientists to transform their AI models into production-grade solutions on any hardware, crafting the next generation of AI for enterprises across the board. 

"By optimizing the AI models that run on Intel's hardware, Deci enables customers to get even more speed, allowing for cost-effective and more general deep learning use cases on Intel CPUs," said Deci CEO and co-founder Yonatan Geifman. "We are delighted to collaborate with Intel to deliver even greater value to our mutual customers and look forward to a successful partnership."

Deci and Intel's collaboration began with an MLPerf submission in which Deci's AutoNAC (Automated Neural Architecture Construction) technology accelerated inference for the well-known ResNet-50 neural network on several popular Intel CPUs, reducing the submitted models' latency by a factor of up to 11.8x and increasing throughput by up to 11x. According to Deci, AutoNAC uses machine learning to redesign any model and maximize its inference performance on any hardware, all while preserving its accuracy.
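For context on the metrics cited above, the following is a minimal, generic sketch of how inference latency and throughput are commonly measured for a ResNet-50 model on a CPU. It is not Deci's AutoNAC tool and contains no Intel-specific optimizations; the use of PyTorch/torchvision, the batch size, and the iteration counts are illustrative assumptions only.

    # Generic latency/throughput measurement sketch (not Deci's or Intel's code).
    import time
    import torch
    from torchvision.models import resnet50

    model = resnet50().eval()            # untrained weights; architecture only
    batch = torch.randn(8, 3, 224, 224)  # illustrative batch of 8 images

    with torch.no_grad():
        for _ in range(5):               # warm-up runs to stabilize timings
            model(batch)

        runs = 20
        start = time.perf_counter()
        for _ in range(runs):
            model(batch)
        elapsed = time.perf_counter() - start

    latency_ms = (elapsed / runs) * 1000             # average time per batch
    throughput = (runs * batch.shape[0]) / elapsed   # images processed per second
    print(f"latency: {latency_ms:.1f} ms/batch, throughput: {throughput:.1f} img/s")

Lower latency per batch and higher images-per-second throughput on the same CPU are the kinds of gains the MLPerf figures above describe.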

Monica Livingston, AI Solutions and Sales Director at Intel said, "Deci delivers optimized deep learning inference on Intel processors as highlighted in MLPerf. Optimizing advanced AI models on general purpose infrastructure based on Intel Xeon Scalable CPUs allows our customers to meet performance SLAs, reduce cost, decrease time to deployment, and gives them the ability to effectively scale."
