Model Parallelism: Building and Deploying Large Neural Networks (MPBDLNN)


Course Overview

Very large deep neural networks (DNNs), whether applied to natural language processing (e.g., GPT-3), computer vision (e.g., huge Vision Transformers), or speech AI (e.g., wav2vec 2.0), have properties that set them apart from their smaller counterparts. As DNNs grow larger and are trained on progressively larger datasets, they can adapt to new tasks with just a handful of training examples, accelerating the route toward artificial general intelligence. Training models that contain tens to hundreds of billions of parameters on vast datasets isn't trivial and requires a unique combination of AI, high-performance computing (HPC), and systems knowledge.
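When a model no longer fits on a single device, its layers can be split across devices and activations moved between them during the forward pass. The following is a minimal, hypothetical sketch of this idea in PyTorch (not course material); the two-stage split and layer sizes are illustrative, and the code falls back to CPU when two GPUs are not available.

```python
import torch
import torch.nn as nn

# Pick two devices if two GPUs exist; otherwise both stages share the CPU.
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 2 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

class TwoStageModel(nn.Module):
    """Toy model split into two stages, each placed on its own device."""
    def __init__(self):
        super().__init__()
        self.stage0 = nn.Sequential(nn.Linear(128, 256), nn.ReLU()).to(dev0)
        self.stage1 = nn.Linear(256, 10).to(dev1)

    def forward(self, x):
        x = self.stage0(x.to(dev0))
        # The activation tensor is transferred between devices here.
        return self.stage1(x.to(dev1))

model = TwoStageModel()
out = model(torch.randn(4, 128))
print(out.shape)  # torch.Size([4, 10])
```

In practice this kind of manual split is only the starting point; the course's subject matter (pipeline and tensor parallelism) overlaps computation and communication across many devices rather than running stages one after another.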

Prerequisites

  • A good understanding of PyTorch
  • A good understanding of deep learning and data-parallel training concepts
  • Hands-on practice with deep learning and data-parallel training is useful, but optional

Pricing & Delivery Methods

Online Training

Duration
1 day

Price
  • Inquire about price and availability
Classroom Training

Duration
1 day

Price
  • Inquire about price and availability

Click the button next to the city name or "Online Training" to book. Schedule

Instructor-led Online Training:   This is an instructor-led online course. If you have any questions about our online courses, feel free to contact us by phone or email at any time.

United States of America

Online Training, 07:30 Pacific Daylight Time (PDT). This course will be presented by a partner. Registration