Quinnipiac University

CSC 375/575 Generative AI

Course Syllabus

Fall 2025 | August 25 - December 13, 2025

Finals Week: December 8-13, 2025

Instructor Information

Instructor: Ron (Rongyu) Lin

Email: rongyu.lin@quinnipiac.edu

Office Hours:

  • Monday: 1:30 PM - 3:15 PM (In-person)
  • Tuesday: 4:00 PM - 6:00 PM (In-person)
  • Friday: 2:00 PM - 3:00 PM (Virtual) - Zoom Link
  • By appointment

Class Schedule: Mondays & Wednesdays, 3:30 PM - 4:45 PM

Location: In-person | Tator Hall, Room 130

Semester Dates: August 25, 2025 - December 13, 2025

Course Description

This course provides a comprehensive, hands-on approach to understanding and implementing Generative AI systems, with a focus on Large Language Models (LLMs). Students will build complete generative models from fundamental principles, covering transformer architecture, attention mechanisms, advanced prompting strategies, alignment methods, and inference optimization. The course emphasizes both theoretical understanding and practical implementation, with significant focus on modern techniques like chain-of-thought reasoning, instruction fine-tuning, human feedback alignment (RLHF), and efficient inference methods. Students will create their own functional generative AI applications incorporating state-of-the-art prompting and alignment techniques.

Course Objectives

By the end of this course, students will be able to:

  • Implement generative AI models from scratch, understanding transformer architecture and modern generative systems
  • Master attention mechanisms including self-attention, multi-head attention, and causal masking for sequence generation
  • Build complete training pipelines with proper optimization, evaluation metrics, and model persistence for generative tasks
  • Apply fine-tuning techniques for both classification and instruction-following in generative AI applications
  • Understand scaling principles and the relationship between model architecture, training data, and generative performance
  • Critically evaluate modern generative AI capabilities, limitations, ethical considerations, and societal impact

Textbooks/Materials

  • Build a Large Language Model (From Scratch) by Sebastian Raschka
  • Foundations of Large Language Models by Tong Xiao and Jingbo Zhu (NiuTrans)

Course Policies

  • Attendance & Participation: This course meets in regularly scheduled sessions each week, and your consistent presence is essential. In-class activities and discussions count toward your grade. If you miss a class, email the instructor in advance to arrange make-up work.
  • Late Work: Assignments are due before class starts on the specified due date. Late work will incur a 10% penalty for each day it is late (days 1-5). After 5 days late, the maximum possible score is 50%. No late work accepted without prior approval.
  • Academic Integrity: Students are expected to maintain the highest standards of academic integrity. Cheating, plagiarism, and any form of academic dishonesty, including unauthorized use of ChatGPT or other AI tools, are strictly prohibited. Violations will result in disciplinary actions, which may include failing the course. Use of AI tools is permitted only when explicitly authorized.
  • Accommodations: Students who require accommodations for a disability should contact the Office of Student Accessibility as soon as possible. The instructor will work with you to ensure that all necessary accommodations are made to support your learning needs. Please provide your accommodation letter early in the semester.
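The late-work arithmetic above can be illustrated with a small sketch (a hypothetical helper for clarity, not an official grade calculator):

```python
def late_score(raw_score: float, max_points: float, days_late: int) -> float:
    """Apply the syllabus late policy:
    - on time: no penalty
    - 1-5 days late: 10% penalty per day
    - more than 5 days late: score capped at 50% of max_points
    """
    if days_late <= 0:
        return raw_score
    if days_late <= 5:
        return raw_score * (1 - 0.10 * days_late)
    return min(raw_score, 0.50 * max_points)

# Example: a 60-point project earning full marks, submitted 2 days late
print(late_score(60, 60, 2))  # 48.0
```

Note that late work is only accepted at all with prior approval, per the policy above.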

Course Schedule

Lecture | Date | Topics | Assignments Due | Notes
Lecture 1 | Mon Aug 25 | Course Overview: Introduction to Generative AI and Course Roadmap
Lecture 2 | Wed Aug 27 | LLM Foundations: Understanding Large Language Models and Pre-training
- | Mon Sep 1 | Labor Day - No Class | | Holiday
Lecture 3 | Wed Sep 3 | Text Data Processing: Tokenization and Data Preparation
Lecture 4 | Mon Sep 8 | Attention Mechanisms: Understanding Self-Attention and Transformer Basics
Lecture 5 | Wed Sep 10 | Building GPT Architecture: Implementing Core Model Components
Lecture 6 | Mon Sep 15 | Model Training Pipeline: Pre-training Large Language Models from Scratch
Lecture 7 | Wed Sep 17 | Fine-tuning Fundamentals: Supervised Fine-tuning for Text Classification | Project 1 Due
Lecture 8 | Mon Sep 22 | Transformer Deep Dive: Multi-layer Architecture and Parameter Optimization
Lecture 9 | Wed Sep 24 | Instruction Fine-tuning: Aligning Models with Human Instructions
Lecture 10 | Mon Sep 29 | Advanced Training Techniques: Learning Rate Scheduling and Regularization
Lecture 11 | Wed Oct 1 | Generative Model Architecture: Decoder-Only Models and Text Generation | Project 2 Due
Guest Lecture | Mon Oct 6 | Industry Applications of Generative AI | | Midterm Week
Guest Lecture | Wed Oct 8 | Current Research Frontiers in LLMs | | Midterm Week
Lecture 12 | Mon Oct 13 | Advanced Self-Attention: Causal Masking and Sequence Dependencies
Lecture 13 | Wed Oct 15 | Multi-Head Attention: Parallel Attention Mechanisms and Implementation
Lecture 14 | Mon Oct 20 | Advanced Attention Patterns: Sparse Attention and Efficient Transformers
Lecture 15 | Wed Oct 22 | Training at Scale: Distributed Training and Memory Optimization | Project 3 Due
Lecture 16 | Mon Oct 27 | Long Sequence Modeling: Position Embeddings and Context Length
Lecture 17 | Wed Oct 29 | Optimization Strategies: Advanced Optimizers and Training Stability
Lecture 18 | Mon Nov 3 | Prompting Fundamentals: Chain-of-Thought and Few-Shot Learning
Lecture 19 | Wed Nov 5 | Advanced Prompting: Template Design and Context Engineering
Lecture 20 | Mon Nov 10 | Automatic Prompt Optimization: Learning-Based Prompt Generation
Lecture 21 | Wed Nov 12 | In-Context Learning: Few-Shot and Zero-Shot Capabilities | Project 4 Due
Lecture 22 | Mon Nov 17 | Human Feedback Alignment: RLHF and Preference Learning
Lecture 23 | Wed Nov 19 | Advanced RLHF: Constitutional AI and Safety Alignment
- | Nov 24-29 | Thanksgiving Break - No Classes | | Holiday Week
Lecture 24 | Mon Dec 1 | Inference Optimization: Decoding Strategies and Sampling Methods
Presentations | Wed Dec 3 | Final Project Presentations and Course Wrap-up | Project 5 Due
Finals Week | Dec 8-13 | Final Projects Due | Final Project Due Dec 13 | No Classes

Grading Breakdown

Item | Total Possible Points | Percent of Grade
Assignments | 300 | 60%
  • Project 1: Pre-training Foundations and LLM Architecture Implementation | 60
  • Project 2: Embeddings & Advanced Tokenization | 60
  • Project 3: Generative Models and Advanced Self-Attention Systems | 60
  • Project 4: Multi-Head Attention and Advanced Prompting Strategies Implementation | 60
  • Project 5: Complete GPT Model Training with Instruction Fine-tuning and Human Feedback Alignment | 60
Team Final Project | 130 | 26%
Attendance and Participation | 70 | 14%
  • 14 weeks × 5 points/week
Bonus Points (Optional) | | Up to 3%
  • Computer Science lectures/seminars attendance, course improvement suggestions, and other eligible activities (announced in advance): 0.5% each