Rise VAST (Virtualized AI Computing Scalability Technology) is jointly developed by RiseUnion and 4Paradigm as the "HAMi Enterprise Edition". Through software-defined pooling of heterogeneous computing resources, it delivers efficient resource scheduling and intelligent operations. The platform supports compute virtualization and priority management, improving resource utilization and lowering AI infrastructure costs so that enterprises can accelerate AI innovation and application deployment. Rise VAST also serves as the foundation for Rise CAMP and provides the computing management layer for the Rise MAX appliance.
Rise VAST builds on the open-source HAMi edition, adding numerous enterprise-grade features: compute and memory over-provisioning, resource expansion and preemption, compute specification definitions, NVLink topology awareness, differentiated scheduling strategies, enterprise-level isolation, resource quota control, multi-cluster management, audit logging, high-availability assurance, and detailed operational analytics. As the foundation of RiseUnion's product ecosystem, Rise VAST provides the underlying support for Rise CAMP and Rise MAX.
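Because Rise VAST builds on open-source HAMi, the flavor of GPU sharing it extends can be sketched with a HAMi-style Kubernetes pod spec. The resource names below (`nvidia.com/gpucores`, `nvidia.com/gpumem`) follow the open-source HAMi convention for slicing one card by compute percentage and device memory; the exact resource names, units, and enterprise extensions in Rise VAST may differ.

```python
# Sketch: build a Kubernetes pod manifest that requests a fractional GPU
# slice using HAMi-style extended resource names. Illustrative only --
# Rise VAST's exact resource names and units may differ.

def fractional_gpu_pod(name: str, image: str,
                       gpu_cores_pct: int, gpu_mem_mb: int) -> dict:
    """Pod spec asking for a share of one GPU.

    gpu_cores_pct -- percentage of the card's compute (0-100)
    gpu_mem_mb    -- device-memory slice in MiB
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": {
                    "nvidia.com/gpu": 1,                   # one shared card
                    "nvidia.com/gpucores": gpu_cores_pct,  # compute slice
                    "nvidia.com/gpumem": gpu_mem_mb,       # memory slice
                }},
            }],
        },
    }

# Example: an inference worker that needs 30% of a GPU and 4 GiB of VRAM.
pod = fractional_gpu_pod("infer-worker", "my-registry/infer:latest", 30, 4096)
```

Several such pods can then be bound to the same physical card, which is how over-provisioning of compute and memory raises utilization for bursty inference workloads.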
RiseUnion and 4Paradigm have formed a strategic partnership, jointly launching Rise VAST (HAMi Enterprise Edition), an enterprise-grade AI computing resource pooling platform, and will deepen cooperation in AI computing resource management, model training optimization, and related areas. By combining HAMi's compute scheduling capabilities with 4Paradigm's AI platform strengths, the partnership provides end-to-end AI infrastructure solutions for enterprises.
Support for building shared and dedicated computing pools
Support for GPU sharing along both compute and memory dimensions to improve resource utilization
Support for mixed deployment and unified scheduling of domestic and international computing resources
Improved inference performance through resource sharing and quality-of-service control
Supply of multiple GPU types to enable a smooth transition to domestic hardware
Unified, standardized, and visualized operations and maintenance across the entire estate
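For the shared-pool and quota-control capabilities above, the standard Kubernetes mechanism a pooling platform can build on is a namespace-level `ResourceQuota` that caps GPU requests per team. The sketch below uses the stock Kubernetes API (extended resources are quota'd with the `requests.` prefix); Rise VAST's own quota objects and pool definitions are assumptions here and may take a different form.

```python
# Sketch: a namespace-scoped quota capping total GPU requests, the stock
# Kubernetes building block for per-team quota control in a shared pool.
# Illustrative only -- Rise VAST's own quota model may differ.

def gpu_quota(namespace: str, max_gpus: int) -> dict:
    """ResourceQuota limiting how many GPUs a namespace may request."""
    return {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "gpu-quota", "namespace": namespace},
        "spec": {"hard": {
            # Extended resources use the "requests." prefix in quotas;
            # quota values are strings in the Kubernetes API.
            "requests.nvidia.com/gpu": str(max_gpus),
        }},
    }

# Example: team-a's shared-pool namespace may request at most 8 GPUs.
quota = gpu_quota("team-a", 8)
```

A dedicated pool can then be modeled as a namespace whose quota matches the capacity reserved for it, while shared pools carry looser quotas and rely on priority and preemption to arbitrate contention.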
Our dedicated technical support team deeply understands your business scenarios and technical architecture, ensuring optimal AI infrastructure performance and helping maximize your ROI
Enterprise-grade SLA with 24/7 professional support services, ensuring rapid response and resolution for platform issues to maintain continuous stability of your AI workloads
Best-practice guidance for compute resource configuration based on your business scenarios, including GPU pool segmentation, elastic scaling strategies, and priority queue management for optimal resource allocation
Regular feature updates and performance optimizations with real-time security patches, keeping your AI infrastructure current and secure while following industry best practices
Professional training programs tailored for managers, developers, and operators to master platform capabilities, enhance AI development efficiency, and accelerate innovation deployment
Seamless integration with mainstream CI/CD tools, container orchestration platforms, and cloud services, providing unified multi-cloud management interfaces for centralized GPU resource orchestration across hybrid cloud environments