As AI deployments accelerate, a revolution is taking place in how networks are built within and across datacenters. This session provides deeper insight into the challenging networking demands of AI workloads. It then covers the architectural implications, i.e. how to build reliable server and datacenter interconnectivity that delivers the best possible performance for every AI training and inference task. It also touches on the operational aspects of deploying GPU server infrastructure, specifically how to control deployments reliably while still ensuring a fast time to market. Finally, it concludes with a few pointers on what Nokia offers in this space.
Download the slide deck here.