  • NVBTBNLP-QA
  • Price on request

Transformer architectures underpin modern natural language processing and large language models. We believe organisations that master AI, Cloud, and Data technologies gain a decisive advantage in building scalable, intelligent applications. This hands-on workshop equips learners with the skills to construct, fine-tune, and deploy Transformer-based deep learning models for real-world NLP tasks. Over eight hours, participants will build a Transformer neural network in PyTorch, develop a named-entity recognition application using BERT, and deploy the solution with ONNX and NVIDIA TensorRT to an NVIDIA Triton Inference Server. By the end of the course, learners will be proficient in applying task-agnostic Transformer-based models to text classification, named-entity recognition, and question answering, and in deploying them into production-ready inference environments.
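For a flavour of the first exercise, the sketch below implements single-head scaled dot-product self-attention, the core operation of a Transformer block, in PyTorch. The class name, dimensions, and toy input are illustrative assumptions, not the course's own code.

    import math
    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        """Single-head scaled dot-product self-attention (illustrative sketch)."""

        def __init__(self, embed_dim: int):
            super().__init__()
            # Learned projections producing queries, keys, and values.
            self.q_proj = nn.Linear(embed_dim, embed_dim)
            self.k_proj = nn.Linear(embed_dim, embed_dim)
            self.v_proj = nn.Linear(embed_dim, embed_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, embed_dim)
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
            # softmax(QK^T / sqrt(d)) gives each token a weight
            # distribution over every token in the sequence.
            scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
            weights = torch.softmax(scores, dim=-1)
            return weights @ v  # weighted sum of values, same shape as x

    x = torch.randn(2, 8, 64)          # toy batch: 2 sequences of 8 tokens, 64-dim embeddings
    print(SelfAttention(64)(x).shape)  # torch.Size([2, 8, 64])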

  • Explain how Transformer architectures function as foundational building blocks of modern large language models for NLP
  • Describe how self-supervision enhances Transformer-based models such as BERT and other LLM variants to deliver superior NLP results
  • Construct a Transformer neural network in PyTorch, including implementation of self-attention mechanisms
  • Leverage pretrained Transformer models to solve NLP tasks such as text classification, named-entity recognition, and question answering
  • Build and fine-tune a named-entity recognition application using BERT (see the first sketch after this list)
  • Manage inference challenges associated with NLP workloads, including optimisation and deployment constraints
  • Prepare, optimise, and deploy refined models using ONNX, NVIDIA TensorRT, and NVIDIA Triton Inference Server (see the second sketch after this list)
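
To illustrate the pretrained-model workflow in the objectives above, here is a minimal sketch using the Hugging Face transformers token-classification pipeline for NER. The dslim/bert-base-NER checkpoint is one public example and is not necessarily the model used in the workshop.

    from transformers import pipeline

    # Token-classification pipeline backed by a BERT checkpoint fine-tuned
    # for NER; "dslim/bert-base-NER" is a public example checkpoint.
    ner = pipeline(
        "token-classification",
        model="dslim/bert-base-NER",
        aggregation_strategy="simple",  # merge word-piece tokens into whole entities
    )

    for entity in ner("NVIDIA develops TensorRT in Santa Clara."):
        print(entity["word"], entity["entity_group"], round(entity["score"], 3))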
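
And to illustrate the deployment step, a hedged sketch of exporting a fine-tuned model to ONNX with torch.onnx.export. The checkpoint, output file name, and axis labels here are assumptions standing in for the lab's own artefacts.

    import torch
    from transformers import AutoModelForTokenClassification, AutoTokenizer

    name = "dslim/bert-base-NER"  # stand-in for the fine-tuned model from the lab
    model = AutoModelForTokenClassification.from_pretrained(name).eval()
    model.config.return_dict = False  # export a plain tuple of outputs
    tokenizer = AutoTokenizer.from_pretrained(name)
    dummy = tokenizer("an example input", return_tensors="pt")

    torch.onnx.export(
        model,
        (dummy["input_ids"], dummy["attention_mask"]),
        "model.onnx",
        input_names=["input_ids", "attention_mask"],
        output_names=["logits"],
        dynamic_axes={  # let batch size and sequence length vary at inference time
            "input_ids": {0: "batch", 1: "sequence"},
            "attention_mask": {0: "batch", 1: "sequence"},
            "logits": {0: "batch", 1: "sequence"},
        },
    )

The resulting model.onnx can then be optimised (for example with TensorRT's trtexec tool) and placed in a Triton model repository for serving; those environment-specific steps are covered in the workshop itself.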
