GPU Technology Conference Europe 2018
October 9 - 11, 2018
Munich, Germany

NVIDIA’s GPU Technology Conference (GTC) Europe is part of the largest global series of events focused on Artificial Intelligence and its applications across many important fields.
Join the Conference in Munich and discover the latest breakthroughs in autonomous vehicles, high performance computing, healthcare, big data, and more.
Sessions: AI and Deep Learning
Choose from over a hundred sessions at GTC Europe 2018 where rising AI startups, innovative researchers, and leading enterprises present critical breakthroughs, industry-changing technologies, and successful implementations.
Sessions
Telecommunications: AI and ML Inference in the 5G Era
October 11, 2018 | 12:25 - 1:10 PM
Speakers:
- Stephen Jones, NVIDIA
- Soma Velayutham, NVIDIA
- Keith Morris, NVIDIA
- Tero Rissa, Nokia
- Alexander Keller, NVIDIA
- Slawomir Stanczak, Fraunhofer HHI
Join the interactive panel discussion on AI inference in the 5G era. Learn about NVIDIA GPUs, NVIDIA software stacks, inference use cases, challenges in 5G, and where innovation is needed. Technical experts will hold an open conversation on these challenges and on why the 5G era is a great opportunity to apply AI and inference to the complexities of this evolving infrastructure. The panel will also discuss possible solutions, the progress made by front-runners, and how forward-looking telco leaders are contributing to the reinvention of the industry.
Machine Learning Enabled 5G Wireless Networks - GPU Convex Feasibility Solvers
October 11, 2018 | 1:30 - 2:15 PM
Speaker:
Prof. Dr.-Ing. habil. Slawomir Stanczak, Head of Wireless Communications and Networks Department, Fraunhofer Heinrich Hertz Institute
In current wireless networks, most algorithms are iterative and might not be able to meet the requirements of some 5G technologies, such as ultra-reliable low-latency communication, within a very tight latency budget. For instance, with a required end-to-end latency below 1 ms, many signal processing tasks must be completed within microseconds. Therefore, only a strictly limited number of iterations can be performed, which may lead to uncontrollable, excessive errors. We argue in favor of formulating the underlying optimization problems as convex feasibility problems in order to enable massively parallel processing on GPUs for online learning with fast and robust tracking. Moreover, convex feasibility solvers allow for an efficient incorporation of context information and expert knowledge, and can provide robust results based on relatively small data sets. Our approach has numerous applications, including channel estimation, peak-to-average power ratio (PAPR) reduction in Orthogonal Frequency Division Multiplexing (OFDM) systems, radio map reconstruction, beamforming, localization, and interference reduction. We show that these applications can greatly benefit from the parallel architecture of GPUs.
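To give a flavor of why convex feasibility formulations parallelize so well, the following minimal NumPy sketch solves a toy feasibility problem over half-space constraints with a simultaneous-projection method: every constraint's projection is computed independently before the moves are averaged, which is the structure that maps naturally onto a GPU. The function names, the half-space example, and all parameters are illustrative assumptions, not the implementation presented in the talk.

import numpy as np

def simultaneous_projection_step(x, A, b, relaxation=1.0):
    """One step of the simultaneous (averaged) projection method for A @ x <= b."""
    # Amount by which each half-space constraint a_i @ x <= b_i is violated.
    violation = np.maximum(A @ x - b, 0.0)
    # Scaled violation: distance moved along each a_i to project onto its half-space.
    scale = violation / np.sum(A * A, axis=1)
    # Average of all per-constraint projection moves; each term is independent,
    # which is the part that could run in parallel on a GPU.
    correction = (A.T @ scale) / len(b)
    return x - relaxation * correction

def solve_feasibility(A, b, x0, iterations=50, relaxation=1.9):
    """Run a fixed, small number of projection steps, as a tight latency budget dictates."""
    x = x0.copy()
    for _ in range(iterations):
        x = simultaneous_projection_step(x, A, b, relaxation)
    return x

# Toy usage: a randomly generated, feasible system of half-space constraints.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 64))        # 1000 constraints, 64 unknowns
b = A @ rng.standard_normal(64) + 0.1      # feasible by construction
x = solve_feasibility(A, b, np.zeros(64))
print("max constraint violation:", np.maximum(A @ x - b, 0.0).max())

In this sketch the number of iterations is fixed in advance, mirroring the talk's point that only a strictly limited number of iterations fits within a microsecond-scale budget; the per-constraint projections inside each step are the work that a GPU would execute in parallel.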
More information can be found here.