Presentation
One Stop Systems: AI on the Fly®: Bringing Datacenter HPC/AI Performance to the Edge to Turn Raw Data into Actionable Intelligence
Session: Exhibitor Forum
Speaker
Event Type: Exhibitor Forum
Pre-Recorded
Time: Wednesday, June 24th, 7:26pm - 7:41pm
Location: Digital
Description: Today, a new computing paradigm is emerging that puts computing and storage resources for AI and HPC workflows at the edge, near the data sources, rather than in the datacenter. Applications for this new paradigm are emerging in diverse areas, including autonomous vehicles, precision medicine, battlefield command and control, industrial automation, and media and entertainment. The common elements of these solutions are high-data-rate acquisition; high-speed, low-latency storage; and efficient, high-performance compute analytics, all configured to meet the unique environmental conditions of edge deployments.
OSS has established leadership in this paradigm with its AI on the Fly® initiative. Its AI on the Fly building blocks include high-slot-count PCI Express expansion systems capable of acquiring hundreds of gigabytes per second of data; NVMe storage nodes providing up to a petabyte of high-speed, low-latency solid-state storage; and platforms capable of housing up to 16 of the latest NVIDIA GPUs or other HPC/AI accelerators for high-end compute requirements. All of these building blocks are connected seamlessly with a memory-mapped PCI Express interconnect, configured and customized as appropriate to meet the specific environmental, form factor, or ruggedization requirements of in-the-field installations.
AI on the Fly® platforms and building blocks are being used today by OEMs to build and deliver solutions addressing ever bigger and more complex problems at the edge.