Rise Union: Making Your AI Infrastructure More Efficient

Overall Architecture

Rise Union provides full-stack AI infrastructure solutions, enabling end-to-end intelligent support from underlying computing power to upper-layer applications

  • Rise ModelX: Model Management Platform
  • 3rd Party AI Ecosystem
  • Rise CAMP: AI Computing Scheduling Platform
  • Rise VAST: AI Computing Management Platform
  • Rise MAX: AI Computing Management Appliance

Rise VAST

A software-defined heterogeneous computing management platform that delivers efficient resource scheduling and intelligent operations, helping enterprises accelerate AI innovation

Intelligent Scheduling Engine

Topology-aware scheduling algorithms provide optimal resource allocation strategies across the AI lifecycle
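
For illustration, the sketch below shows one way a topology-aware placement decision can be made: candidate GPU sets are scored by their interconnect class and the best-connected set wins. The topology data, link classes, and scores are invented for this example and do not describe Rise VAST's actual algorithm.

```python
# Illustrative topology-aware placement: score candidate GPU sets by their
# interconnect class and pick the best-connected set. All names and scores
# here are hypothetical, not Rise VAST's actual scheduler.
from itertools import combinations

LINK_SCORE = {"nvlink": 10, "pcie_switch": 5, "cross_numa": 1}

def pair_score(topology, a, b):
    """Interconnect score between two GPUs on the same node."""
    return LINK_SCORE[topology.get((min(a, b), max(a, b)), "cross_numa")]

def best_placement(free_gpus, topology, count):
    """Choose the `count` free GPUs with the highest pairwise connectivity."""
    def score(gpus):
        return sum(pair_score(topology, a, b) for a, b in combinations(gpus, 2))
    return max(combinations(sorted(free_gpus), count), key=score)

# Example node: GPUs 0-1 and 2-3 are NVLink pairs, other pairs only share PCIe.
topo = {(0, 1): "nvlink", (2, 3): "nvlink", (0, 2): "pcie_switch",
        (0, 3): "pcie_switch", (1, 2): "pcie_switch", (1, 3): "pcie_switch"}
print(best_placement({0, 1, 2, 3}, topo, 2))   # -> (0, 1)
```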

Computing Virtualization

vGPU technology enables fine-grained computing resource slicing, supporting multi-tenant sharing and improving utilization
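
As an illustration of fine-grained slicing, here is a minimal bookkeeping sketch in which tenants reserve fractions of one physical GPU's memory and compute; the class and numbers are hypothetical and stand in for whatever accounting the real vGPU layer performs.

```python
# Illustrative bookkeeping for fractional GPU ("vGPU") slices: each tenant
# reserves part of one card's memory and compute budget, and requests that
# would oversubscribe the card are rejected. Hypothetical model only.
from dataclasses import dataclass, field

@dataclass
class PhysicalGPU:
    mem_mib: int
    free_cores: int = 100                      # compute budget in percent
    tenants: dict = field(default_factory=dict)

    def __post_init__(self):
        self.free_mem = self.mem_mib

    def allocate(self, tenant, mem_mib, core_pct):
        """Grant a slice only if both memory and compute budgets allow it."""
        if mem_mib > self.free_mem or core_pct > self.free_cores:
            return False
        self.free_mem -= mem_mib
        self.free_cores -= core_pct
        self.tenants[tenant] = (mem_mib, core_pct)
        return True

gpu = PhysicalGPU(mem_mib=81920)                            # e.g. one 80 GB card
print(gpu.allocate("team-a", mem_mib=20480, core_pct=25))   # True
print(gpu.allocate("team-b", mem_mib=40960, core_pct=50))   # True
print(gpu.allocate("team-c", mem_mib=40960, core_pct=50))   # False: oversubscribed
```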

Heterogeneous Resource Pooling

Unified management of various domestic/international AI accelerators, enabling resource pooling and flexible scheduling
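
The sketch below illustrates the general idea of pooling heterogeneous accelerators behind a vendor-neutral abstraction so workloads are matched by capability rather than vendor-specific APIs; the class and vendor names are invented and do not describe Rise VAST's internal data model.

```python
# Illustrative vendor-neutral accelerator abstraction: one pool holds devices
# from different vendors and schedules by capability. Names are invented.
from abc import ABC, abstractmethod

class Accelerator(ABC):
    @abstractmethod
    def vendor(self) -> str: ...
    @abstractmethod
    def memory_gib(self) -> int: ...

class GenericGPU(Accelerator):
    def __init__(self, vendor: str, memory_gib: int):
        self._vendor, self._mem = vendor, memory_gib
    def vendor(self) -> str:
        return self._vendor
    def memory_gib(self) -> int:
        return self._mem

class ResourcePool:
    """Unified pool over heterogeneous devices."""
    def __init__(self, devices):
        self.devices = list(devices)
    def find(self, min_mem_gib: int):
        return [d for d in self.devices if d.memory_gib() >= min_mem_gib]

pool = ResourcePool([GenericGPU("vendor_a", 80), GenericGPU("vendor_b", 32)])
print([d.vendor() for d in pool.find(min_mem_gib=40)])   # ['vendor_a']
```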

Intelligent Operations

Visual monitoring and automated operations capabilities reduce management costs and improve operational efficiency

Rise CAMP

AI computing scheduling platform for enterprises, providing a unified development environment and resource management

Ready to Use

Complete AI development environment supporting mainstream deep learning frameworks for quick project startup

Multi-tenant Management

Support multi-team collaborative development with resource isolation and permission management to ensure development environment security
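
A minimal sketch of one ingredient of multi-tenant isolation, per-team quota admission: jobs that exceed a team's remaining budget are rejected instead of contending with other teams' workloads. The team names, quota numbers, and function are hypothetical.

```python
# Illustrative per-team admission check; all names and numbers are made up.
QUOTAS = {"nlp-team": {"gpu": 8, "cpu": 64}, "cv-team": {"gpu": 4, "cpu": 32}}
USAGE  = {"nlp-team": {"gpu": 6, "cpu": 40}, "cv-team": {"gpu": 0, "cpu": 0}}

def can_admit(team: str, gpu: int, cpu: int) -> bool:
    """Admit a job only if it fits inside the team's remaining quota."""
    quota, used = QUOTAS[team], USAGE[team]
    return used["gpu"] + gpu <= quota["gpu"] and used["cpu"] + cpu <= quota["cpu"]

print(can_admit("nlp-team", gpu=4, cpu=16))   # False: only 2 GPUs remain
print(can_admit("cv-team", gpu=4, cpu=16))    # True
```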

Version Control

Built-in code and model version management supporting team collaboration and ensuring project traceability

Task Management

Unified management of AI training and inference tasks with workflow orchestration and monitoring to improve development efficiency
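
To illustrate workflow orchestration, the sketch below runs a toy train-and-deploy pipeline in dependency order using Python's standard `graphlib`; the task names are made up, and a real platform would add retries, monitoring, and distributed execution.

```python
# Illustrative workflow orchestration: tasks declare their dependencies and
# are executed in topological order. Task names are hypothetical.
from graphlib import TopologicalSorter   # standard library, Python 3.9+

workflow = {
    "preprocess": [],
    "train":      ["preprocess"],
    "evaluate":   ["train"],
    "deploy":     ["evaluate"],
}

for task in TopologicalSorter(workflow).static_order():
    print(f"running {task}")   # preprocess -> train -> evaluate -> deploy
```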

Rise ModelX

Comprehensive model lifecycle management platform helping enterprises efficiently manage and optimize AI models

Model Marketplace

Provides unified management of and access to foundation models in the marketplace; users can also integrate their own fine-tuned models

Model Evaluation

Model evaluation is a key step in model validation. Model optimization teams can customize evaluation datasets to verify the effectiveness of fine-tuned models
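
As a simple illustration of evaluating a fine-tuned model against a custom dataset, the sketch below computes exact-match accuracy; `generate_answer` is a hypothetical stand-in for whatever inference call the deployed model exposes.

```python
# Illustrative evaluation on a custom dataset using exact-match accuracy.
def exact_match_accuracy(dataset, generate_answer):
    """dataset: iterable of (prompt, reference) pairs."""
    hits, total = 0, 0
    for prompt, reference in dataset:
        prediction = generate_answer(prompt)
        hits += int(prediction.strip() == reference.strip())
        total += 1
    return hits / max(total, 1)

# Toy usage with a fake "model" backed by a lookup table.
toy_data = [("2+2=", "4"), ("capital of France?", "Paris")]
fake_model = {"2+2=": "4", "capital of France?": "Paris"}.get
print(exact_match_accuracy(toy_data, fake_model))   # 1.0
```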

Model Deployment

One-click deployment of model services based on fine-tuned model files, with quick testing and validation of deployed model services
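
A hedged sketch of what a "one-click" deployment call might look like over a REST API; the endpoint path, payload fields, and response format are invented for illustration and are not the documented Rise ModelX API.

```python
# Hypothetical deployment call over REST; route and fields are invented.
import requests

def deploy_model(base_url: str, model_file: str, replicas: int = 1) -> str:
    """Request a serving endpoint for a fine-tuned model artifact."""
    resp = requests.post(
        f"{base_url}/v1/deployments",                      # hypothetical route
        json={"model_file": model_file, "replicas": replicas},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["endpoint"]   # URL to send test prompts to

# endpoint = deploy_model("https://modelx.example.com", "my-finetuned-model.safetensors")
```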

Model Fine-tuning

Support creating model fine-tuning tasks, enabling SFT, LoRA, and full-parameter fine-tuning of foundation models based on uploaded training datasets
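
For context on LoRA fine-tuning, here is one common setup using the open-source Hugging Face `transformers` and `peft` libraries; the model name and hyperparameters are example values, and Rise ModelX's own training stack is not shown here.

```python
# One common LoRA setup with `transformers` + `peft`; values are examples only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")  # example model
lora_cfg = LoraConfig(
    r=16,                                  # rank of the low-rank update
    lora_alpha=32,                         # scaling applied to the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()         # only a small fraction is trainable
```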

Rise MAX

All-in-one solution designed for enterprise AI computing scenarios, offering out-of-the-box deployment and reduced AI application development costs

Rapid Deployment

Complete deployment and configuration in 15 minutes, ready for immediate use

Resource Management

Support management of various heterogeneous devices with unified scheduling and monitoring

Performance Optimization

70% improvement in resource utilization, significantly reducing operational costs

High Concurrency

Stable support for 1000+ concurrent users, meeting enterprise-level demands

Core Advantages

Leading Technical Innovation

  • Self-developed vGPU virtualization technology
  • Core contributor to HAMi open source community
  • Multiple core patents in computing power management and scheduling

Complete Ecosystem Compatibility

  • Support for multiple AI accelerators
  • Multi-GPU vendor compatibility certification
  • Seamless upstream and downstream industry chain integration

Rich Industry Experience

  • Serving leading customers in finance, telecom, and energy
  • Extensive experience in large-scale cluster management
  • Deep expertise in computing power management and scheduling solutions

Industry Recognition

  • Pioneer in AI Infrastructure Orchestration
  • Multiple industry awards for AI innovation excellence
  • Strategic partnerships with leading cloud and hardware vendors

Use Cases

Large-Scale Model Training

Support distributed training clusters of hundreds of nodes for efficient large model training. A leading internet company using our solution achieved 40% improvement in training efficiency with 85% resource utilization.

Inference Service Platform

Provide high-performance, low-latency model inference services. A financial institution deployed 100+ model services supporting thousands of concurrent requests with over 50% cost reduction.

R&D Testing Environment

Provide unified development environment for AI R&D teams. A tech company reduced environment deployment time from 3 days to 30 minutes, significantly improving development efficiency.

Computing Resource Pool

Build enterprise-level computing resource pools for resource sharing and unified management. A research institute supporting multiple research groups improved average resource utilization from 30% to 80%.
