Shares of U.S. automotive AI voice technology company Cerence (CRNC.US) rose nearly 40% shortly after the U.S. market opened and are currently up over 20%, after soaring more than 140% in the previous trading session. The company announced on January 3, Eastern Time, that it would deepen its cooperation with NVIDIA to enhance the performance of the language models in its automotive systems.
Analysts point out that, with NVIDIA's technical support, Cerence is expected to reduce R&D costs and accelerate the launch of new products and features. For a company with a market cap of only about $334 million, an alliance with an industry giant like NVIDIA adds significant credibility and technological strength. Moreover, NVIDIA's AI infrastructure is industry-leading, and Cerence can draw on those resources directly to develop more powerful language models and automotive systems.
The cooperation between Cerence and NVIDIA is a significant step in automotive AI deployment. Cerence will leverage the computing capabilities of NVIDIA AI Enterprise software and the DRIVE AGX Orin hardware platform to further optimize the performance of CaLLM, its cloud-based language model, and CaLLM Edge, its in-vehicle edge-computing language model. NVIDIA's platforms are designed for high-performance AI computing and can run complex AI models efficiently.
Through NVIDIA's DRIVE AGX Orin automotive hardware, Cerence can also provide localized AI capabilities that run directly in the vehicle, which is especially useful in scenarios where a constant internet connection cannot be maintained.
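The cloud-plus-edge split described above amounts to a fallback pattern: prefer the larger cloud model when connected, and hand the request to the on-device model when the network drops. The sketch below illustrates that routing logic in minimal form; the class names and behavior are purely hypothetical stand-ins, not Cerence or NVIDIA APIs.

```python
# Hypothetical sketch of hybrid cloud/edge routing for an in-car assistant.
# CloudModel, EdgeModel, and Assistant are illustrative names, not real APIs.

class CloudModel:
    """Stands in for a cloud-hosted language model (a CaLLM-style service)."""
    def __init__(self, online: bool = True):
        self.online = online

    def answer(self, prompt: str) -> str:
        if not self.online:
            raise ConnectionError("no network connection")
        return f"[cloud] {prompt}"


class EdgeModel:
    """Stands in for a smaller on-device model (a CaLLM Edge-style fallback)."""
    def answer(self, prompt: str) -> str:
        return f"[edge] {prompt}"


class Assistant:
    """Prefer the cloud model; fall back to the local model when offline."""
    def __init__(self, cloud: CloudModel, edge: EdgeModel):
        self.cloud = cloud
        self.edge = edge

    def ask(self, prompt: str) -> str:
        try:
            return self.cloud.answer(prompt)
        except ConnectionError:
            # No connectivity (e.g. a tunnel): answer on-device instead.
            return self.edge.answer(prompt)


# With connectivity the cloud model answers; offline, the edge model takes over.
connected = Assistant(CloudModel(online=True), EdgeModel())
offline = Assistant(CloudModel(online=False), EdgeModel())
print(connected.ask("navigate home"))  # [cloud] navigate home
print(offline.ask("navigate home"))    # [edge] navigate home
```

The real systems are of course far more involved, but the design choice is the same: the driver-facing interface stays identical whether the answer comes from the cloud or from the vehicle itself.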
Additionally, NVIDIA provides Cerence with the TensorRT-LLM and NeMo frameworks, which help address three common challenges faced by automotive AI assistants.
The first is latency: automotive AI must respond quickly, as millisecond-level response times can be a matter of safety in driving scenarios. The second is limited performance: the computational resources inside a vehicle cannot be compared to those of a data center. TensorRT-LLM deeply optimizes models so they deliver maximum effectiveness under constrained hardware. The third is high resource consumption: through deep optimization of both hardware and models, power draw and compute usage are reduced, allowing the technology to run on resource-constrained automotive devices.
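One intuition for why such optimization matters on in-vehicle hardware is memory footprint: lowering the numeric precision of a model's weights (as quantization-style techniques in toolkits like TensorRT-LLM do) shrinks it roughly in proportion to the bytes per weight. The back-of-envelope sketch below uses an assumed 7-billion-parameter model and generic byte widths; these are not Cerence's or NVIDIA's actual figures.

```python
# Back-of-envelope sketch of how lower-precision weights shrink a model.
# The 7B parameter count and byte widths are generic assumptions for
# illustration, not numbers from Cerence or NVIDIA.

def model_size_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in gigabytes at a given precision."""
    return num_params * bytes_per_param / 1e9

params = 7e9  # an assumed 7-billion-parameter model

fp16 = model_size_gb(params, 2.0)  # 16-bit floats: 2 bytes per weight
int8 = model_size_gb(params, 1.0)  # 8-bit quantization: 1 byte per weight
int4 = model_size_gb(params, 0.5)  # 4-bit quantization: half a byte per weight

print(f"FP16: {fp16:.1f} GB, INT8: {int8:.1f} GB, INT4: {int4:.1f} GB")
# FP16: 14.0 GB, INT8: 7.0 GB, INT4: 3.5 GB
```

Halving the bytes per weight halves the memory the model occupies, which is exactly the kind of saving that makes a large language model feasible on an embedded automotive platform rather than only in a data center.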
Editor/lambor