Accelerating AI Compute for Space, Planetary Science, and Aeronautics at Wafer-Scale

Andy Hock, PhD

VP of Product, Cerebras Systems

Dr. Andy Hock is VP of Product at Cerebras. Andy came to Cerebras from Google, where he led Data and Analytics Product for the Terra Bella project, using deep learning and AI to turn satellite imagery into useful data for maps and enterprise applications. Computation was a challenging constraint for this work; in Cerebras, Andy saw an opportunity to help deliver the right compute solution to accelerate deep learning and AI research by orders of magnitude and advance the field. Before Google, Andy was a Senior Scientist and Senior Technical Program Manager at Arete Associates, where he led research for signal processing algorithm development. He holds a PhD in Geophysics and Space Physics from UCLA and a BA in Astronomy-Physics and Molecular Biology from Colgate University. Specific to this forum, Andy is also a proud alum of the NASA Academy and was a NASA Graduate Student Researchers Program (GSRP) fellow.

Contact information: andy@cerebras.net


Description
Artificial Intelligence (AI) has great potential for NASA research, but our ability to test new ideas and develop new applications is compute-limited today. State-of-the-art deep learning models take too long to train and run on real-world datasets. Moreover, significant opportunities exist at the intersection of AI and traditional HPC, e.g., where AI can be used to augment or accelerate conventional modeling and simulation. However, the models and datasets in this space are often massive and sparse, and legacy general-purpose processors such as graphics processing units (GPUs) were not built for this work. A new compute solution is needed to accelerate AI and HPC in this domain.

Cerebras Systems has developed such a system, the Cerebras CS-1. The CS-1 is the fastest AI computer available today, powered by the largest chip ever made: the Cerebras Wafer-Scale Engine (WSE). The WSE is a 400,000-core, fully programmable processor optimized for sparse linear algebra workloads such as neural network training and inference. It is 56 times larger than the largest GPU and delivers the deep learning compute performance of a cluster in a single system, with the programming simplicity of a single node. Cerebras has multiple successful CS-1 deployments for scientific AI and HPC, demonstrating orders-of-magnitude acceleration over customers' existing GPU cluster solutions.
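As context for the sparse linear algebra workloads mentioned above, the short Python sketch below shows a generic sparse matrix-vector product, the basic pattern in neural network layers whose weights are mostly zero. It uses NumPy and SciPy only as familiar reference points; it is not Cerebras code, and the matrix size, density, and variable names are illustrative assumptions.

# Illustrative sketch only (not Cerebras code): a sparse matrix-vector
# product, the core pattern behind sparse neural network training and inference.
import numpy as np
from scipy.sparse import random as sparse_random

# Hypothetical layer: a 4096 x 4096 weight matrix with roughly 10% nonzero entries.
W = sparse_random(4096, 4096, density=0.10, format="csr", random_state=0)
x = np.random.default_rng(0).standard_normal(4096)   # input activations
y = W @ x                                             # sparse matrix-vector multiply
print(f"nonzeros: {W.nnz}, output shape: {y.shape}")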

In this talk, we will introduce Cerebras and the CS-1, and discuss applications of AI research and development for aeronautics, space, and planetary science.

Date/Time
Wednesday, April 7, 2021, 11am-12pm EDT
This seminar can be viewed remotely via Microsoft Teams: Join here

Recorded session is available through the Goddard Library

IS&T Colloquium Committee Host: Matt Dosberg