Jakarta, INTI – In the era of rapid artificial intelligence (AI) development, Red Hat has introduced significant updates to Red Hat OpenShift AI 2.18 and Red Hat Enterprise Linux AI (RHEL AI) 1.4. As a leading open-source solutions provider, Red Hat aims to enhance efficiency and security in AI deployment within hybrid cloud environments.
AI Integration for Model Lifecycle Management
Red Hat AI now combines OpenShift AI and RHEL AI into a unified platform that supports the entire AI model lifecycle, from training and fine-tuning to deployment and monitoring. Additionally, the platform supports both predictive AI and generative AI (GenAI), enabling organizations to develop smarter and more efficient models across various computing architectures.
Key Updates in Red Hat OpenShift AI 2.18
Red Hat OpenShift AI 2.18 introduces several enhancements, including:
- Distributed Serving with vLLM: Serves a single model across multiple GPUs, easing the memory load on any one server, increasing throughput, and improving resource utilization.
- End-to-End Model Tuning: This feature simplifies the fine-tuning process of large language models (LLMs) using InstructLab and OpenShift AI data pipelines, making them easier to manage and scale in production environments.
- AI Guardrails: Provides mechanisms to detect and mitigate harmful content, personally identifiable information (PII), and other sensitive data, helping ensure AI models behave in line with corporate policies.
- Model Evaluation: Using the lm-eval component, data scientists can benchmark LLM performance across a range of tasks, giving clearer insight into model quality and how it changes with tuning.
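Distributed serving of the kind vLLM performs rests on tensor parallelism: a large weight matrix is sharded across GPUs, each device computes a partial result, and the partials are combined. A toy CPU-only sketch of the idea (not vLLM's actual implementation; the helper names here are illustrative):

```python
# Toy illustration of tensor parallelism: split one matrix-vector product
# across several workers (standing in for GPUs) and combine partial results.
def shard_columns(matrix, n_shards):
    """Split each row's columns into n_shards contiguous chunks."""
    width = len(matrix[0])
    step = width // n_shards
    return [[row[i * step:(i + 1) * step] for row in matrix] for i in range(n_shards)]

def matvec(matrix, vector):
    """Plain matrix-vector product."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

W = [[1, 2, 3, 4], [5, 6, 7, 8]]
x = [1, 1, 1, 1]

shards = shard_columns(W, 2)          # two "GPUs", each holding half the columns
x_shards = [x[:2], x[2:]]             # matching halves of the input vector
partials = [matvec(s, xs) for s, xs in zip(shards, x_shards)]

# Each worker produces a partial output; summing them recovers the full result.
result = [sum(p) for p in zip(*partials)]
print(result)  # -> [10, 26]
```

Summing the per-device partials is exactly the all-reduce step that real tensor-parallel serving performs over a fast GPU interconnect.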
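The guardrail idea can be illustrated with a minimal sketch: a hypothetical output filter that scans generated text for PII before it reaches the user. The patterns and `redact` helper below are illustrative assumptions, not Red Hat's implementation, which uses far more robust detection:

```python
import re

# Illustrative PII patterns -- a real guardrail would cover many more categories.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII spans with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-123-4567."))
# -> Contact [REDACTED EMAIL] or [REDACTED PHONE].
```

In a serving pipeline, such a filter would sit between the model's raw output and the client, alongside checks for harmful content and policy violations.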
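Model evaluation along the lines of lm-eval can be pictured as a scoring loop: run the model on a set of task prompts and compare its answers to references. The `toy_model` and task set below are stand-ins for illustration; the real lm-eval component runs standardized benchmark suites:

```python
def toy_model(prompt: str) -> str:
    """Stand-in for an LLM: answers a couple of fixed questions."""
    canned = {"2+2=": "4", "Capital of France?": "Paris"}
    return canned.get(prompt, "unknown")

def evaluate(model, tasks):
    """Score a model by exact match against reference answers."""
    correct = sum(model(prompt) == answer for prompt, answer in tasks)
    return correct / len(tasks)

tasks = [
    ("2+2=", "4"),
    ("Capital of France?", "Paris"),
    ("Largest planet?", "Jupiter"),
]
print(evaluate(toy_model, tasks))  # -> 0.6666666666666666
```

Exact match is only one metric; real harnesses also compute log-likelihood, multiple-choice accuracy, and generation-quality scores per task.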
Updates in RHEL AI 1.4
As an integral part of the Red Hat AI portfolio, RHEL AI provides a reliable platform for developing, testing, and running enterprise-grade AI models. The latest version, RHEL AI 1.4, introduces key features such as:
- Granite 3.1 8B Model: Supports multilingual inference and taxonomy/knowledge customization with an extended 128k context window, improving summarization and retrieval-augmented generation (RAG) tasks.
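The retrieval-augmented generation workflow that a long context window supports can be sketched in miniature: retrieve the document chunks most relevant to a query, then pack them into the model's prompt. The bag-of-words scoring below is a deliberate simplification; production RAG systems use dense vector embeddings:

```python
def score(query: str, doc: str) -> int:
    """Count shared words between query and document (toy similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest word overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "RHEL AI bundles the Granite model family.",
    "OpenShift AI runs on Kubernetes clusters.",
    "The Granite 3.1 model supports a 128k context window.",
]

query = "What context window does Granite support?"
context = retrieve(query, docs)
# The retrieved chunks are concatenated into the prompt sent to the LLM.
prompt = "Answer using the context below.\n" + "\n".join(context) + "\nQ: " + query
```

A 128k-token context window matters here because it determines how many retrieved chunks fit in the prompt at once.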
Red Hat's Commitment to Enhancing AI Efficiency
Red Hat is committed to helping enterprises reduce AI deployment costs, integrate private data, and scale AI models across hybrid cloud environments. Joe Fernandes, Vice President and General Manager of Red Hat's AI Business Unit, emphasized the challenges of large-scale AI deployment and stated that Red Hat AI provides the flexibility to deploy models across on-premises, cloud, and edge computing environments.
Furthermore, Red Hat has introduced AI InstructLab on IBM Cloud to simplify AI model training and deployment while enhancing data security. According to Javier Olaizola Casin, Global Managing Partner of Hybrid Cloud and Data at IBM Consulting, the consistency, reliability, and speed of Red Hat AI are crucial for supporting AI adoption across various hybrid cloud scenarios.
Free Training and Industry Collaboration
To support broader AI adoption, Red Hat now offers free online AI Foundations training courses, designed for both business leaders and AI beginners. Anand Swamy, EVP and Global Head of Ecosystems at HCLTech, highlighted the importance of flexible infrastructure in AI development. By combining Red Hat AI technologies with HCLTech's expertise, enterprises can overcome common challenges such as data security, AI scalability, and infrastructure costs.
Conclusion
These innovations strengthen Red Hat's position as a reliable open-source AI solutions provider for enterprises looking to maximize AI's potential in the digital era. The updates to OpenShift AI and RHEL AI enable more efficient, secure, and flexible AI deployments across environments, while free training and industry collaborations help organizations optimize their AI strategies for better digital transformation outcomes.