Rebellions announced on Dec. 11 that it will unveil "Red Hat OpenShift AI based on Rebellions' NPU" in collaboration with Red Hat, a global leader in open-source solutions. The solution combines Red Hat OpenShift AI with Rebellions' neural processing units (NPUs) and vLLM, a high-performance LLM inference engine, to provide a verified full-stack enterprise AI platform.
Red Hat OpenShift AI based on Rebellions' NPU is a verified full-stack AI inference platform that integrates core components for AI inference optimization, covering everything needed for inference from hardware (NPU) to model serving (vLLM). Rebellions expects the platform to serve as an alternative for building stable and efficient inference infrastructure across a variety of environments.
The platform also integrates key components for optimizing customers' AI inference workloads. Rebellions' NPU is designed with an architecture optimized for AI inference, delivering up to 3.2 times higher energy efficiency than existing GPUs and effectively reducing data center construction and operating costs at the server and rack levels. In addition, through its full-stack software and support for major open-source AI frameworks, it provides a development environment as convenient as that of GPUs.
Built on this foundation, the offering provides a verified full-stack AI inference platform covering everything from hardware to model serving. Rebellions' software stack is optimized to run on Red Hat OpenShift AI, reducing overhead and accelerating deployment.
The platform's key features include: scalable enterprise-grade AI support (high-throughput, low-latency, high-efficiency inference operations through vLLM integration), enhanced security and regulatory compliance (on-premises data protection and the ability to meet regulatory requirements), simplified operations (an integrated management environment that lets NPUs be operated as easily as GPUs), and flexible scalability (infrastructure capable of linear scaling from core to edge).
Park Sung-hyun, CEO of Rebellions, said, "As AI serving and inference move into full-scale deployment, companies need practical infrastructure that satisfies performance, cost, and data sovereignty all at once." He added, "Through this collaboration, Red Hat and Rebellions are providing an integrated, verified inference platform built on open source, from hardware to model serving, in place of the current approach in which AI inference components were fragmented." He further explained, "This will help companies expand AI services more efficiently and securely, while becoming the first example of a new alternative for NPU-based inference infrastructure beyond GPU-centered environments."
Brian Stevens, senior vice president and AI CTO at Red Hat, explained, "The future of enterprise AI must offer the freedom to choose architectures beyond single-architecture proprietary stacks. The collaboration with Rebellions is significant in implementing Red Hat's 'any model, any accelerator, any cloud' strategy."