
Ágnes Vathy-Fogarassy, PhD; Tibor Dulai, PhD


Prerequisites:


Topics covered:

Students will learn about the following topics, taking into account their individual training plans and interests:

T1. The transformer architecture and its variants:

  • description of the Generative Pre-trained Transformer (GPT), XLNet, and Text-to-Text Transfer Transformer (T5) architectures and their properties
  • detailed explanation of the transformer architecture (the multi-head self-attention mechanism, positional encoding, and the training process); see the sketch after this list
  • transfer learning and fine-tuning for applying pre-trained transformers to new domains
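
To make the attention mechanism named above concrete, here is a minimal NumPy sketch of single-head scaled dot-product self-attention with sinusoidal positional encoding. Multi-head attention runs several such heads in parallel and concatenates their outputs. All sizes, weight matrices, and names below are illustrative assumptions, not material taken from the course.

```python
# Minimal sketch of scaled dot-product self-attention with sinusoidal
# positional encoding. Dimensions and random weights are illustrative only.
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encoding from 'Attention Is All You Need'."""
    pos = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    i = np.arange(d_model)[None, :]                   # (1, d_model)
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles[:, 0::2])            # even dimensions: sine
    enc[:, 1::2] = np.cos(angles[:, 1::2])            # odd dimensions: cosine
    return enc

def self_attention(x: np.ndarray, wq, wk, wv) -> np.ndarray:
    """Scaled dot-product self-attention for a single head."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ v                                # weighted sum of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)            # (4, 8)
```

A full transformer block wraps this core in residual connections, layer normalization, and a position-wise feed-forward network; fine-tuning for a new domain typically keeps the pre-trained weights of that stack and continues training on target-domain data.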

T2. Generative Adversarial Networks (GANs) and transformer-based generative models:

  • the structure and training of GANs (the tasks of the generator and the discriminator); see the sketch after this list
  • the structure and operation of transformer-based generative models
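
To make the generator/discriminator interplay concrete, the sketch below trains a tiny GAN on toy one-dimensional data, assuming PyTorch. The architectures, data distribution, and hyperparameters are arbitrary illustrative choices, not the course's reference implementation.

```python
# Minimal adversarial training loop on 1-D toy data; all sizes are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0            # "real" data: N(2, 0.25)
    fake = G(torch.randn(64, 8))                     # generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: update G so that D labels its samples as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())       # should approach N(2, 0.25)
```

The two losses pull against each other: training succeeds when the discriminator can no longer separate generated samples from real ones. Transformer-based generative models take a different route, modelling the data distribution autoregressively instead of playing this minimax game.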

T3. Embeddings and vector databases. Applications of generative models and their ethical implications:

  • the concept of embeddings, possible input formats, and how they are mapped to vectors
  • the special features and characteristics of vector databases; see the sketch after this list
  • possible applications of generative models with their ethical implications (e.g. content generation, style transfer, data correction)
  • responsible AI
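
To illustrate the core operation behind vector databases, the sketch below stores normalized embedding vectors and answers queries by exhaustive cosine-similarity search. `ToyVectorStore` and its random vectors are hypothetical stand-ins for a real embedding model and a real database.

```python
# Minimal sketch of what a vector database does at its core: store embeddings
# and answer nearest-neighbour queries by cosine similarity. Toy data only.
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

class ToyVectorStore:
    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim))   # one row per stored embedding
        self.payloads: list[str] = []       # item associated with each row

    def add(self, vector: np.ndarray, payload: str) -> None:
        self.vectors = np.vstack([self.vectors, normalize(vector)])
        self.payloads.append(payload)

    def query(self, vector: np.ndarray, k: int = 3) -> list[tuple[str, float]]:
        sims = self.vectors @ normalize(vector)       # cosine similarities
        top = np.argsort(sims)[::-1][:k]              # indices of k best matches
        return [(self.payloads[i], float(sims[i])) for i in top]

rng = np.random.default_rng(0)
store = ToyVectorStore(dim=4)
for word in ["cat", "dog", "car"]:
    store.add(rng.normal(size=4), payload=word)       # stand-ins for embeddings
print(store.query(rng.normal(size=4), k=2))
```

Production vector databases replace the exhaustive scan with approximate nearest-neighbour indexes (such as HNSW graphs) so that queries stay fast over millions of vectors.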


Literature:

1. David Foster: Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play. O’Reilly Media, 2019.

2. Ian Goodfellow, Yoshua Bengio, Aaron Courville: Deep Learning. MIT Press, 2016.