Tag: on-device AI

  • Hyundai Motor Unveils AI-Powered Chip for DAL-e Delivery Robot

    Introduction to Hyundai Motor’s AI-Powered Chip

    Hyundai Motor Group’s Robotics Lab has developed an on-device AI chip, designed to run AI tasks locally on the device itself. The chip, developed in collaboration with the Korean AI semiconductor firm DEEPX, is now ready for mass production. The announcement was made at the inaugural CES Foundry, a conference and exhibition focused on AI, blockchain, and quantum technologies.

    Key Features of the AI-Powered Chip

    The processor operates on less than 5 watts of power and can detect, recognize, and make decisions in real time using the data available to it. Notably, it requires no cloud or network connection, making it suitable for environments with unstable or non-existent connectivity, such as underground parking lots and logistics centers. As an on-device chip, it offers faster and more secure performance than chips that depend on cloud computing.

    Implications and Future Plans

    According to Hyun Dong-jin, Hyundai Motor Group’s vice president and head of the Robotics Lab, the goal is to bring low-power, efficient, and smart robots to more people, making them genuinely useful to their users. The company aims to build a robust ‘physical AI’ infrastructure around the on-device AI chip, as stated in a press release. Previously, the lab co-developed an AI-powered controller used in the facial recognition system Facey and the DAL-e delivery robot.

    Expert Insights and Analysis

    Experts in the field view this development as a significant step towards enhancing the capabilities of robots and AI-powered devices. The ability to operate without a cloud or network connection expands the potential applications of such technology, particularly in areas where connectivity is a challenge. Furthermore, the emphasis on building a sustainable robot ecosystem underscores Hyundai Motor Group’s commitment to innovation and user benefit.

  • Revolutionizing On-Device AI: Liquid AI’s LFM2.5

    Introduction to LFM2.5

    Liquid AI has released LFM2.5, a family of tiny on-device foundation models designed to power reliable on-device agentic applications. As noted on Hugging Face, LFM2.5 builds on the success of LFM2, offering higher quality, lower latency, and broader modality support in the ~1B parameter class.

    Key Features of LFM2.5

    According to The Robot Report, LFM2 models are available under an open license based on Apache 2.0, allowing free use for academic and research purposes, as well as commercial use by smaller companies. The model’s hybrid architecture delivers decode and prefill performance up to twice as fast as Qwen3 on CPU, making it well suited to efficient AI agents.

    Technical Analysis

    LFM2-8B-A1B is a notable model in the LFM2 family, using 1.5B active parameters while delivering strong results across a range of benchmarks. As discussed on Medium, its edge-first design, minimal hybrid backbone, and optimized pre-training and post-training processes contribute to its efficiency and effectiveness.

    Market Impact and Future Implications

    The release of LFM2.5 is expected to have a significant impact on the AI industry, enabling the development of more efficient and effective on-device applications. As Liquid AI’s blog notes, the model’s ability to balance quality, latency, and memory for specific tasks and hardware requirements is critical for deploying best-in-class generative models on any device.
