Welcome to the First


Summer School on Large Language Models

Yerevan, Armenia


July 1-7, 2024


About the summer school

Welcome to the LLM Armenian Summer School, an immersive educational experience designed to explore the dynamic field of Large Language Models (LLMs). This program is tailored for enthusiasts, scholars, and professionals keen to delve into LLM generative approaches, multi-hop reasoning, explainability, interpretability, risk assessment, fairness and transparency, and retrieval-based approaches in AI-driven content generation. Our summer school offers a comprehensive curriculum, hands-on projects, and expert-led lectures, providing participants with both theoretical knowledge and practical skills.

Objective and Scope The summer school will offer a series of workshops, lectures, and hands-on sessions, aiming to bridge the knowledge gap between academia and industry in the fast-developing research area of foundational large language models (LLMs). The curriculum will cover a range of both core and specialized topics in LLMs.

Topics
  • Transformer architectures
  • Pretraining
  • Model alignment, parameter-efficient tuning
  • In-context learning and reasoning
  • Multimodality and Vision LLMs
  • LLM Interpretability, Safety & Privacy
  • Knowledge Graphs
  • Retrieval Augmented Generation (RAG)

Format The school agenda will consist of lectures followed by hands-on practice sessions in which students will work through exercises and mini-projects.

Expected Outcomes The participants are expected to gain an understanding of the research topics, run relevant baseline models, and conduct supervised research. We also envision that some of the participants will engage in collaborative research projects beyond the summer school.

Venue The workshop will take place at the Small Auditorium, Main Building, American University of Armenia (40 M. Baghramyan Avenue, Yerevan).

Target Audience The school is designed for senior undergraduate and graduate students in Armenia, the Diaspora, and abroad, pre-doctoral students, and researchers from both academia and industry.

Registration

The school will be open to a limited number of qualified and motivated candidates. Students (Bachelor's, Master's, Ph.D.), researchers (both academic and industrial), and academic/industrial professionals are encouraged to apply.
At the end of the application period, all applicants will be informed whether their application has been accepted.

Applications to attend the summer school should be received before May 3, 2024.
Applicants will receive notification of acceptance by May 31, 2024. Accepted applicants will then be invited to pay the registration fee to finalize their registration to the summer school.


Application Form.

Please submit your application by filling out the Application Form.
Once we receive your application, we will send you a test assessing your Python programming skills. This test is mandatory, and all selected participants should have satisfactory results.
Note that submitting an application does not guarantee attendance -- you must receive a notification of acceptance before you can attend.


Fees.

The registration fee is 300 USD or 120,000 AMD.
The fee covers access to GPUs for the duration of the summer school, which you will need for the projects.


Financial Assistance.

All applicants are eligible for financial assistance. Students enrolled at Armenian universities automatically qualify for a reduced fee of 10,000 AMD.
If you would like to be considered for registration fee reduction, please fill out the Financial Assistance Form.


Important Dates.
  • Application deadline: May 3, 2024
  • Notification of acceptance: May 31, 2024
  • Summer School dates: July 1 - 7, 2024
Hosts

The summer school is co-hosted by the American University of Armenia and YerevaNN. The lectures will take place at the AUA campus.

Program

  • Module 1: Intro & Pretraining (Armen Aghajanyan)
    • July 1
      • 9:00-10:30 Sequence Modeling as a unified framework for ML
      • 10:30-10:45 Break
      • 10:45-12:00 Transformer architecture & self-attention mechanism, multi-head attention
      • 12:00-13:00 Lunch break
      • 13:00-17:00 Module 1 Practice Session
    • July 2
      • 9:00-10:30 Pre-training, masked language models, fine-tuning
      • 10:30-10:45 Break
      • 10:45-12:00 Scaling Laws: Model performance vs model and data size, compute time
      • 12:00-13:00 Lunch break
      • 13:00-17:00 Module 1 Practice Session
  • Module 2: Alignment
    • July 3 (Morning session: Jiao Sun, Afternoon session: Karen Hambardzumyan)
      • 9:00-10:30 Instruction tuning, LIMA
      • 10:30-10:45 Coffee break
      • 10:45-12:00 LLM evaluation
      • 12:00-13:00 Lunch break
      • 13:00-14:00 Efficient fine-tuning approaches: Adapters, PET, Prefix Tuning, Prompt Tuning, LoRA
      • 14:00-14:30 API-only learning approaches: Prompts and Instructions, In-Context Learning, Chain-of-Thought
      • 14:30-14:45 Break
      • 14:45-17:00 Practice session: Efficient model loading from the Hugging Face Hub, text generation with model.generate(), using automatic mixed precision, dataset loading, in-context learning to parse free-text listings, chain-of-thought on GSM8K
    • July 4 (Jon May)
      • 9:00-10:30 Rewards and Reinforcement Learning
      • 10:30-10:45 Coffee break
      • 10:45-12:00 Proximal Policy Optimization (PPO), RLHF
      • 12:00-13:00 Lunch break
      • 13:00-14:00 Direct/Contrastive Preference Optimization (DPO/CPO)
      • 14:00-14:30 DPO+MBR (Minimum Bayes Risk), ASTRAPOP/STAMP
      • 14:30-14:45 Break
      • 14:45-17:00 Practice session: PPO sentiment tuning example from the TRL library, PPO vs. DPO vs. CPO (ASTRAPOP)
  • Module 3: Vision LLMs (Alex Andonian, Armen Aghajanyan)
    • July 5
      • 9:00-10:45 Intro to Vision Language modeling, Late-fusion approaches
      • 10:45-11:00 Break
      • 11:00-12:00 Early fusion and future vision language models
      • 12:00-13:00 Lunch break
      • 13:00-17:00 Module 3 Practice Session
  • Module 4: LLM Interpretability, Safety & Privacy
    • July 6 (Morning session: Narine Kokhlikyan, Afternoon session: Eugene Bagdasaryan)
      • 9:00-10:30 Introduction to Attribution algorithms, Captum Library
      • 10:30-10:45 Coffee break
      • 10:45-12:00 Practice session: Prompt Explainability Applications with Captum, Exercises on Prompt Explainability
      • 12:00-13:00 Lunch break
      • 13:00-14:00 LLM Privacy: training and inference privacy, contextual integrity
      • 14:00-15:00 LLM Security: adversarial attacks, prompt injections, poisoning
      • 15:00-15:15 Break
      • 15:15-17:00 Practice session: Poisoning and adversarial examples on LLMs and multimodal LLMs, injecting and detecting bias and propaganda in LLMs
  • Module 5: RAG & KG-enhanced LLMs
    • July 7 (Morning session: Erik Arakelyan, Afternoon session: Hrayr Harutyunyan)
      • 9:00-10:30 Introduction to Knowledge Graphs and Complex Reasoning Over Them
      • 10:30-10:45 Coffee break
      • 10:45-12:00 Practice session on Knowledge Graphs
      • 12:00-13:00 Lunch break
      • 13:00-14:00 Efficient Indexing for Retrieval
      • 14:00-15:00 Retrieval-augmented generation
      • 15:00-15:15 Coffee break
      • 15:15-17:00 Common retrieval techniques, RAG architectures: kNN-LM, REALM, RETRO

Our supporters

We thank Vahe Kuzoyan and Picsart for their generous financial contributions in support of the summer school. We also thank Nebius AI for their generous support with the compute infrastructure for the school.

Organizers: Aram Galstyan, Perouz Taslakian, Erik Arakelyan, Armen Aghajanyan, Hrant Khachatrian.
For further questions, please email us at armeniallm@gmail.com.