Introduction to the New Server

Puget Systems is set to unveil its latest on-premises Generative AI and Machine Learning solution at SIGGRAPH 2024. The server combines AMD and NVIDIA technologies and will be shown generating high-quality imagery in real time at smooth frame rates.

Cutting-Edge AI Training and Inference Server

At the heart of Puget Systems’ showcase is the new custom Large Language Model (LLM) AI Training and Inference server. This server supports up to eight NVIDIA RTX Ada generation GPUs, NVIDIA L40S GPUs, or NVIDIA H100 Tensor Core GPUs, and incorporates NVIDIA networking platforms. Optimized for maximum GPU performance, this 4U server is well suited to on-premises data center deployments.

Advanced Hardware Specifications

The server features AMD’s EPYC line of processors, which are engineered for demanding server workloads. These processors offer up to 128 cores, support up to 1.5TB of DDR5 ECC RAM, and provide 128 PCIe Gen 5 lanes. The server pairs two of these CPUs to accommodate up to eight dual-width GPUs in a 4U rack mount chassis. It also includes several hot-swap drive bays and a pair of USB ports on the front panel for easy access.
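
As a quick, hypothetical illustration (not part of the announcement), a few lines of PyTorch are enough to confirm that every GPU in a chassis like this is visible to a workload before training or inference starts:

    import torch

    # Minimal sanity check for a multi-GPU server: list every CUDA device
    # the process can see along with its total VRAM.
    if not torch.cuda.is_available():
        raise SystemExit("No CUDA devices detected")

    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB VRAM")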

Live Demonstration at SIGGRAPH 2024

To highlight the capabilities of its specialized AI Training and Inference servers, Puget Systems will conduct a live demonstration at SIGGRAPH 2024 in booth #536. The demonstration will showcase a GPU-intensive, real-time AI image generation solution using a StreamDiffusion pipeline, running locally on the Puget Systems server to illustrate how these complex, hardware-intensive workflows can be managed efficiently.

Real-Time AI Image Generation

Attendees will see how the Puget Systems Generative AI and Machine Learning Server processes a source image through a StreamDiffusion operator in TouchDesigner. The GUI lets users control parameters such as the Acceleration Library, Resolution, Steps, and Prompt. The system supports multiple input sources, including webcams, visual noise, Adobe Photoshop for live AI painting, and a mouse pointer driving a physics-based fluid simulation. The incoming image is then converted into an AI-generated output image in real time at high frame rates.
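
For readers who want to experiment with a similar workflow outside of TouchDesigner, the open-source StreamDiffusion project exposes a comparable image-to-image pipeline in Python. The sketch below follows the pattern shown in that project's documentation rather than Puget Systems' demo; the model checkpoint, prompt, denoising step indices, and the frame_source() input generator are placeholders:

    import torch
    from diffusers import AutoencoderTiny, StableDiffusionPipeline
    from streamdiffusion import StreamDiffusion
    from streamdiffusion.image_utils import postprocess_image

    # Load a Stable Diffusion checkpoint (placeholder model ID) and wrap it
    # with StreamDiffusion for low-latency image-to-image generation.
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(
        device=torch.device("cuda"), dtype=torch.float16
    )
    stream = StreamDiffusion(pipe, t_index_list=[32, 45], torch_dtype=torch.float16)

    # Optional speedups used in real-time demos: an LCM-LoRA for few-step
    # sampling and a tiny VAE for fast decoding.
    stream.load_lcm_lora()
    stream.fuse_lora()
    stream.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd").to(
        device=pipe.device, dtype=pipe.dtype
    )

    stream.prepare(prompt="a watercolor painting of a city at night")

    # In a live setup, each frame would come from a webcam, Photoshop canvas,
    # or noise source; every call returns the next AI-generated image.
    for frame in frame_source():  # hypothetical generator of PIL images
        output = postprocess_image(stream(frame), output_type="pil")[0]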

Ideal for GPU-Intensive Workflows

The Puget Systems AI Training and Inference Servers are specifically designed for GPU-intensive generative AI workflows, such as the StreamDiffusion demo. With support for up to eight NVIDIA GPUs and up to 752GB of total VRAM, these servers are well suited to machine learning, AI, and rendering workloads. The NVIDIA L40S and NVIDIA H100 NVL Tensor Core GPUs are particularly strong choices for large language model (LLM) inference thanks to their high compute density, memory bandwidth, and energy efficiency, and NVIDIA NVLink further boosts multi-GPU performance for AI, data analytics, and high-performance computing (HPC) applications. Select Puget Systems servers also support the NVIDIA BlueField networking platform, including the NVIDIA BlueField-3 DPU.
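
As a rough sketch of the kind of multi-GPU LLM inference workload these servers target (an illustration, not Puget Systems' own configuration), Hugging Face Transformers can shard a large model across all visible GPUs with device_map="auto"; the model ID below is a placeholder:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-70b-hf"  # placeholder: any large causal LM

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" lets Accelerate spread the model's layers across all
    # available GPUs, pooling their VRAM instead of relying on a single card.
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        device_map="auto",
    )

    prompt = "On-premises AI servers are useful because"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))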

Comprehensive AI and Machine Learning Solutions

Puget Systems’ custom AI Training and Inference servers are designed to meet a wide range of generative AI applications. These servers provide robust solutions for enterprises and data centers that require powerful hardware to manage complex AI and machine learning tasks.

Availability and Support

For those interested in exploring Puget Systems’ AI Training and Inference servers, configurations are available to suit a variety of generative AI needs, and more information can be found through the company’s official channels. Puget Systems also offers consulting and sales support for customers in Canada, providing localized service.

Conclusion

Puget Systems’ new Generative AI and Machine Learning server, showcased at SIGGRAPH 2024, represents a significant advancement in on-premises AI solutions. By combining the strengths of AMD and NVIDIA technologies, the server offers unparalleled performance for demanding AI and machine learning applications. Attendees at SIGGRAPH will have the opportunity to see firsthand how this powerful hardware can transform AI workflows, delivering high-quality, real-time results.

Author

  • Kathy Brownell is a dedicated writer and photography enthusiast. With a keen eye for detail and a love of storytelling, she creates content that delves into technology, video production, and industry trends.
