(This is the equivalent of ENAI606 for Spring 2026 only.)
This course introduces graduate engineering students to the foundations and frontiers of Large Language Models (LLMs), with an emphasis on both conceptual understanding and practical skills. We begin with core topics in natural language processing, tokenization, and language representation, then explore the transformer architecture, attention mechanisms, and the full LLM pipeline, from training to deployment. Students will examine key developments such as in-context learning, emergent capabilities, and prompt engineering, as well as multimodal extensions such as Vision-Language Models (VLMs). Later weeks will address ethical considerations and recent trends in unified multimodal models. Through a mix of lectures, discussions, and hands-on assignments, students will be prepared to engage with cutting-edge research and real-world applications in generative AI.