Rise CAMP

Enterprise-Grade AI Workload Orchestration Platform

Product Overview

Rise CAMP (Computing AI Management Platform) is an enterprise-grade AI workload orchestration platform based on Rise VAST (HAMi Enterprise Edition) that helps organizations maximize the utilization of compute resources. By unifying the management and scheduling of heterogeneous computing resources, CAMP enables resource pooling, compute virtualization, and fine-grained orchestration, improving resource utilization and reducing AI development and deployment costs. As the core scheduling engine of the Rise MAX appliance, Rise CAMP provides powerful computing support for large model deployment.

- 10X: GPU utilization improvement
- 70%: resource cost reduction
- 5 min: quick workload deployment
- 99%: hardware compatibility

Key Benefits

Optimized Utilization of Computational Resources

Assists enterprises in intelligently scheduling and optimizing AI resources, enhancing hardware utilization, reducing idle resources, and lowering costs.

Streamlined Workload Management

Automation and intelligent scheduling reduce manual intervention, enabling AI teams to focus more on model development and innovation.

Scalable Expansion

Supports the integration of various hardware platforms and cloud environments, allowing for flexible scaling of computational resources to meet the demands of large-scale AI computing tasks.

Enhanced Development Efficiency

Provides developers with convenient tools and integrations that simplify resource configuration and management, so teams can move from development to production deployment more quickly.

Core Features

Unified Resource Management

Centrally manage and schedule diverse AI accelerators including NVIDIA GPUs, domestic GPUs, and NPUs, creating a flexible heterogeneous compute resource pool.

GPU Virtualization & Pooling

Support GPU resource fractionalization, multiple pool definitions, memory overcommitment, and various isolation strategies for flexible resource management.
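Since Rise CAMP is based on HAMi, one way to picture GPU fractionalization is the open-source HAMi convention of extended Kubernetes resource limits. This is an illustrative sketch using resource names from the open-source HAMi project (`nvidia.com/gpumem`, `nvidia.com/gpucores`); the enterprise edition may expose different names or manage this through its UI.

```yaml
# Illustrative pod spec using open-source HAMi extended resources.
# Resource names come from the HAMi project; Rise VAST / Rise CAMP
# may expose different names or a management console instead.
apiVersion: v1
kind: Pod
metadata:
  name: fractional-gpu-demo
spec:
  containers:
    - name: inference
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["sleep", "infinity"]
      resources:
        limits:
          nvidia.com/gpu: 1        # number of virtual GPUs requested
          nvidia.com/gpumem: 3000  # device memory cap, in MiB
          nvidia.com/gpucores: 30  # compute cap, percent of one GPU
```

With limits like these, several pods can share a single physical GPU, each confined to its memory and compute slice.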

Smart Workload Orchestration

Offer multiple scheduling policies including fixed quota, priority-based, load-aware scheduling, and minimum guarantees for efficient resource utilization.
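CAMP's policy engine itself is proprietary, but as a rough sketch of how priority-based scheduling is commonly expressed on the underlying Kubernetes layer, a PriorityClass ranks workloads so higher-priority pods are scheduled (and can preempt) first. The class and pod names below are hypothetical.

```yaml
# Hypothetical PriorityClass ranking online inference above batch jobs.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: online-inference   # hypothetical class name
value: 100000              # higher value = scheduled and preempts first
globalDefault: false
description: "Latency-sensitive inference workloads"
---
# A workload opts in by referencing the class:
apiVersion: v1
kind: Pod
metadata:
  name: inference-pod
spec:
  priorityClassName: online-inference
  containers:
    - name: app
      image: nginx:1.25
```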

Inference Optimization

Enhance inference performance through compute multiplexing and QoS-based resource scheduling to optimize resource efficiency.

Simplified Operations

Provide a unified operations platform for standardized, visualized compute management, supporting host maintenance and workload migration.

Security & Compliance

Support multi-tenancy with comprehensive security mechanisms including user management, access control, and audit capabilities to ensure data security and privacy.

Enterprise Professional Services & Support

Dedicated Technical Support

Our expert technical team provides dedicated support with a deep understanding of your AI computing needs and business scenarios, delivering optimization recommendations to ensure optimal performance of AI workloads.

24/7 Operational Support

Enterprise-grade SLA with 24/7 professional support services, real-time monitoring of computing resources, and rapid response to platform issues, ensuring continuous and stable availability of computing resources.

Computing Scheduling Optimization

Deliver tailored best practices for computing resource scheduling based on your business scenarios, including resource pool segmentation, scheduling strategy configuration, and priority management for optimal resource allocation.

Continuous Platform Enhancement

Regular platform updates and performance optimizations, including new scheduling algorithms, resource management strategies, and security patches, keep your AI computing management platform at peak efficiency and security.

Professional Training Programs

Comprehensive training courses for managers, developers, and operators to master computing resource management and scheduling techniques, enhancing AI development and operational efficiency.

Ecosystem Integration

Seamless integration with mainstream AI frameworks, container platforms, and cloud services, providing unified multi-cloud management interfaces for integrated resource scheduling across hybrid cloud environments.

Use Cases

Heterogeneous Resource Pool

Help enterprises build heterogeneous compute resource pools to manage GPUs, NPUs, and other accelerators. A state-owned bank leveraged Rise CAMP to build a resource pool managing over 600 servers, improving resource utilization by 50%.

AI Training & Inference

Provide efficient compute support for AI model training and inference. A telecom operator built a software-defined GPU pool using Rise CAMP, managing over 100 servers and deploying more than 500 model services and AI applications.

Multi-tenant Inference Platform

Enable multiple business teams to share GPU resources with on-demand allocation. A financial institution deployed AI applications for risk control, marketing, and customer service, supporting stable operation of hundreds of model services.

Hybrid Cloud Management

Unified management of on-premises and cloud compute resources, enabling flexible scheduling in hybrid environments. A manufacturing company achieved elastic scaling and 60% higher resource utilization through Rise CAMP.