Rise MAX: AI Compute Appliance
Pre-integrated full stack, works out of the box, deploys in 15 minutes
Product Overview
- 15-minute deployment to production
- GPU utilization boost from 30% to 70%+
- 1000+ concurrent user support
- Domestic chip certifications
Core Features
Full-stack Pre-integrated
Ships with Rise VAST, Rise CAMP, K8s Dashboard, and distributed storage pre-installed. Integrated hardware-software delivery takes you from bare metal to a production-ready platform in 15 minutes.
Unified Heterogeneous Control
Manages NVIDIA, Ascend, Hygon, Cambricon, and other accelerators under one control plane. Built-in vGPU slicing and intelligent scheduling, with no vendor lock-in.
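The slicing idea can be illustrated with a toy first-fit scheduler, a minimal sketch only: the function name, pod names, and memory figures below are hypothetical and not Rise MAX's actual API.

```python
# Toy illustration of vGPU slicing: pack fractional GPU-memory requests
# onto physical GPUs using first-fit. Hypothetical sketch, not Rise MAX's API.

def schedule(requests_mib, gpus_mib):
    """Assign each request to the first GPU with enough free memory.

    requests_mib: list of (pod_name, requested memory in MiB)
    gpus_mib:     list of per-GPU memory capacities in MiB
    Returns {pod_name: gpu_index}; raises if a request cannot fit.
    """
    free = list(gpus_mib)
    placement = {}
    for pod, mem in requests_mib:
        for i, avail in enumerate(free):
            if avail >= mem:
                free[i] -= mem
                placement[pod] = i
                break
        else:
            raise RuntimeError(f"no GPU can fit {pod} ({mem} MiB)")
    return placement

# Two 24 GiB GPUs shared by four pods, each asking for a memory slice.
pods = [("train-a", 16384), ("infer-b", 8192),
        ("infer-c", 4096), ("notebook-d", 2048)]
print(schedule(pods, [24576, 24576]))
```

A production scheduler also weighs compute shares, topology, and priorities; the point here is only that slicing lets several pods share one physical GPU.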
One-stop K8s Operations
Built-in deployment and operations views for K8s workloads, networking, and storage. Significantly lowers the operations learning curve, with multiple deployment channels for diverse customer environments.
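Under the hood, a dashboard deployment form produces an ordinary Kubernetes manifest. The sketch below builds one as a plain dict; the image name is illustrative, and `nvidia.com/gpu` is the standard NVIDIA device-plugin resource key, while vGPU slices would use a vendor-specific key (an assumption, not a Rise MAX specific).

```python
# Sketch of a K8s Deployment manifest for a GPU workload, built as a plain
# dict (what a dashboard form ultimately produces). Image name and GPU
# resource key are illustrative assumptions, not Rise MAX specifics.
import json

def gpu_deployment(name, image, gpus=1, replicas=1):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        # Standard device-plugin resource; vGPU slices use
                        # vendor-specific resource names instead.
                        "resources": {"limits": {"nvidia.com/gpu": gpus}},
                    }]
                },
            },
        },
    }

manifest = gpu_deployment("llm-infer", "deepseek-serving:latest", gpus=2)
print(json.dumps(manifest, indent=2))
```

The same manifest could be applied with `kubectl apply -f` or the Kubernetes API once serialized to YAML or JSON.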
Elastic Smooth Scaling
Scales smoothly from a 3-node cluster to cross-datacenter deployments. Intelligent scheduling balances resources and self-heals, with cloud-edge collaboration and on-demand elastic scaling.
Key Benefits
Lower TCO
Built on standard servers, with no dedicated storage or networking hardware required. Powered by Rise VAST and Rise CAMP, GPU utilization rises from 30% to 70%+, significantly reducing the hardware investment needed for the same workload.
Ultra-fast Deployment
Integrated hardware-software design reduces deployment from weeks to 15 minutes. Pre-installed full-stack platform for rapid business launch.
Performance & Reliability
Distributed architecture with intelligent scheduling supports 1000+ concurrent users. Multi-tenant isolation with built-in monitoring and alerting for 24/7 stable operation.
Open Ecosystem, No Lock-in
A fully open architecture compatible with third-party security, backup, and DR solutions for hybrid cloud. Supports both CAMP and EDGE cloud-native deployments, plus a standalone option.
DeepSeek AI Compute Appliance
Rise MAX-DS is an industry-leading AI-native compute appliance with integrated resource pooling and dynamic scheduling via Rise CAMP. It reimagines AI architecture for intelligent, elastic, and efficient DeepSeek model deployment.
- Pre-installed with full DeepSeek model series (1.5B to 671B), ready to use
- Intelligent compute pooling for high-concurrency multi-task collaboration
- Automated resource scheduling, boosting GPU utilization by 30%+
- Cloud-edge collaboration with on-demand elastic scaling
Use Cases
LLM Training & Inference
Out-of-the-box LLM training and inference environment, with built-in intelligent scheduling and vGPU virtualization for multi-GPU training and dynamic allocation. One AI company deployed its LLM training cluster on Rise MAX, completing setup in 15 minutes and improving training efficiency by 50%.
Enterprise Private AI Deployment
A private AI platform with data sovereignty. DeepSeek, Qwen, and other major LLM images come pre-installed; inference services publish in one click with API access and multi-tenant isolation, meeting finance and government compliance requirements.
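A published inference service is then consumed over plain HTTP. The sketch below assembles such a request; the host, endpoint path, model name, and OpenAI-compatible payload shape are all assumptions for illustration, so consult the platform's own API reference for the real contract.

```python
# Sketch of calling a published inference service over HTTP. Host, path,
# model name, and payload shape are illustrative assumptions, not a
# documented Rise MAX API.
import json
import urllib.request

def build_chat_request(base_url, model, prompt, token):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            # Per-tenant API token, as issued by the platform's
            # multi-tenant access control (placeholder value here).
            "Authorization": f"Bearer {token}",
        },
    )

req = build_chat_request("http://rise-max.local:8000", "deepseek-r1-7b",
                         "Summarize last quarter's sales.", "TENANT_TOKEN")
print(req.full_url)
```

Once a service is actually published, `urllib.request.urlopen(req)` would send the request and return the model's JSON response.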
Shared AI R&D Platform
A unified dev/test environment for multiple R&D teams, with built-in Jupyter/VS Code and distributed training management. One research institute pooled its compute on Rise MAX, achieving multi-team sharing, raising utilization from 30% to 80%, and cutting hardware investment by 40%.
Domestic Compute Foundation
Certified compatible with domestic chips, with unified management of Ascend, Hygon, and other domestic accelerators. One state-owned enterprise built its domestic compute platform on Rise MAX, managing hundreds of heterogeneous servers and improving utilization by 60%+.