Rise CAMP
AI Computing Power Scheduling Platform: Unified management and scheduling of heterogeneous computing resources, simplifying AI application development and deployment
Rise VAST
AI Computing Power Management Platform: Pooling and virtualization of heterogeneous GPU resources to improve resource utilization (HAMi Enterprise Edition)
Rise Model X
AI Model Management Platform: Integrated solution designed for enterprise AI computing scenarios, reducing the cost of AI application development and deployment
Rise MAX
AI Computing Power Management Appliance: Integrated solution designed for enterprise AI computing scenarios, reducing the cost of AI application development and deployment
RiseUnion pioneers AI-driven computing resource management, delivering efficiency, flexibility, and security to power digital transformation and intelligent enterprise growth.
DeepSeek R2 vs Qwen3: 2025 Chinese LLM Face-Off
HAMi Configuration Guide: GPU Resource Pool Management
A2A vs MCP: Core Protocols Enabling AI Team Collaboration
Iluvatar GPU Virtualization Guide: MR-V100/BI-V150 Best Practices
Ascend NPU Virtualization Guide: 910 Series & 310P Best Practices
HAMi Source Code Analysis: Device Management and Scheduling
QwQ-32B vs DeepSeek-R1: Which AI Excels for Your Use Case?
DeepSeek-V3/R1 671B Deployment Guide: Hardware Requirements
DeepSeek-V3/R1 671B Deployment Guide: GPU Requirements
Why DeepSeek-V3 and Qwen2.5-Max Choose MoE as Their Core Architecture
DeepSeek-V3 vs. R1: Model Comparison Guide
DeepSeek-R1 Model Series: From Lightweight Distilled Models to the Full-Scale Model
HAMi Dynamic MIG Design Principles
HAMi v2.5.0 Released: Dynamic MIG Support and Enhanced Stability
HAMi Introduces Dynamic MIG Support for NVIDIA GPUs
GPU Virtualization Deep Dive: User-space vs Kernel-space Solutions
Navigating the Compute Challenges in the AI Cloud-Native Era
NVIDIA Acquires Run:ai: Deep Integration of AI Infrastructure
Fine-Tuning Mainstream LLMs on the Ascend Platform with Torchtune
[Q&A] HAMi Frequently Asked Questions - Series 1
Break the Misconception! GPU Pooling for Accelerated AI Training
Full-model Fine-tuning vs. LoRA vs. RAG
Understanding the Role of RAG, Fine-Tuning, and LoRA in GenAI
HAMi: Open Source GPU Virtualization for AI Computing
How HAMi (GPU Virtualization Technology) Can Save You Money!
HAMi Community's First In-Person Meetup Successfully Held
HAMi vGPU Code Analysis Part 2: hami-webhook
HAMi vGPU Code Analysis Part 1: hami-device-plugin-nvidia
Open Source vGPU Solution HAMi: Core & Memory Isolation Test
Why K8s Cannot Meet AI Computing and Large Model Scheduling Needs
Complete Guide to PyTorch Distributed Training: From Basics to Mastery
Multi-GPU Deep Learning Guide: Model & Data Parallelism Explained
Why Top AI Companies Choose Kubernetes
HAMi 2.4.0 Major Update: Making AI Computing Management More Efficient