Challenges of Scale in Deep Learning, Adam Grzywaczewski from NVIDIA

Adam Grzywaczewski from NVIDIA will present a seminar on Wednesday 18 October 2017 at 1pm, entitled:

Challenges of Scale in Deep Learning

Abstract
Deep learning algorithms require substantial computational resources and were made possible largely by the exponential growth described by Moore's law. Even though this is common knowledge, very few people understand just how much compute is required for real-life problems, such as those involved in the development of a self-driving car. This compute requirement frequently exceeds not only the capability of a single GPU but also of a single multi-GPU system, leading to training times of months if not years. As a consequence, it is frequently critical to scale to tens if not hundreds of GPUs in order to achieve reasonable training times.
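The scaling argument above can be made concrete with a back-of-envelope estimate. The sketch below is purely illustrative and not from the talk: the function, its parameters, and the example numbers are all hypothetical, and the constant `scaling_efficiency` is a crude stand-in for the communication and synchronisation overheads that real multi-GPU training incurs.

```python
def days_to_train(single_gpu_days, n_gpus, scaling_efficiency=0.9):
    """Estimate wall-clock training time in days.

    single_gpu_days   -- hypothetical time the job would take on one GPU
    n_gpus            -- number of GPUs used for distributed training
    scaling_efficiency -- fraction of linear speed-up retained (models
                          communication/synchronisation overhead; real
                          efficiency varies with hardware and algorithm)
    """
    return single_gpu_days / (n_gpus * scaling_efficiency)

# A job that would take roughly a year on one GPU becomes tractable
# only at scale: on 64 GPUs at 90% scaling efficiency it drops to
# about a week.
print(days_to_train(360, 1, 1.0))    # 360.0 days on a single GPU
print(days_to_train(360, 64, 0.9))   # 6.25 days on 64 GPUs
```

This is exactly why the talk's distinction between hardware, software, and algorithmic challenges matters: the `scaling_efficiency` term hides all the hard work of keeping many GPUs busy.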

This talk will provide an overview of the hardware, software, and algorithmic challenges of achieving this required scale, and ways of addressing them.

About the speaker
Adam Grzywaczewski is a deep learning solution architect at NVIDIA, where his primary responsibility is to support a wide range of customers in the delivery of their deep learning solutions. Adam is an applied research scientist specialising in machine learning, with a background in deep learning and system architecture. Previously, he was responsible for building up the UK government's machine-learning capabilities while at Capgemini, and he worked in the Jaguar Land Rover Research Centre, where he was responsible for a variety of internal and external projects and contributed to the self-learning car portfolio.