Course Syllabus
CSC 375/575 | Generative AI
Fall 2025 | August 25 - December 13, 2025
Finals Week: December 8-13, 2025
Instructor Information
Instructor: Ron (Rongyu) Lin
Email: rongyu.lin@quinnipiac.edu
Office Hours:
- Monday: 1:30 PM - 3:15 PM (In-person)
- Tuesday: 4:00 PM - 6:00 PM (In-person)
- Friday: 2:00 PM - 3:00 PM (Virtual) - Zoom Link
- By appointment
Class Meetings: Mondays & Wednesdays, 3:30 PM - 4:45 PM (in person)
Classroom: Tator Hall, Room 130
Semester Dates: August 25 - December 13, 2025
Course Description
This course provides a comprehensive, hands-on approach to understanding and implementing Generative AI systems, with a focus on Large Language Models (LLMs). Students will build complete generative models from fundamental principles, covering transformer architecture, attention mechanisms, advanced prompting strategies, alignment methods, and inference optimization. The course emphasizes both theoretical understanding and practical implementation, with significant focus on modern techniques like chain-of-thought reasoning, instruction fine-tuning, human feedback alignment (RLHF), and efficient inference methods. Students will create their own functional generative AI applications incorporating state-of-the-art prompting and alignment techniques.
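As a small preview of the decoding and inference-optimization topics near the end of the schedule, the sketch below contrasts greedy decoding with temperature sampling over a single step of token logits. It is an illustrative example only (the logits and temperature value are made up), not course-provided code.

```python
# Illustrative sketch only (hypothetical values): one decoding step,
# of the kind covered under "inference optimization" in this course.
import torch

logits = torch.tensor([2.0, 1.0, 0.1])   # unnormalized scores for 3 candidate tokens

# Greedy decoding: always pick the highest-scoring token (deterministic)
greedy_id = torch.argmax(logits).item()

# Temperature sampling: divide logits by T before softmax; higher T
# flattens the distribution and makes generation more diverse
T = 0.8
probs = torch.softmax(logits / T, dim=-1)
sampled_id = torch.multinomial(probs, num_samples=1).item()

print(greedy_id, sampled_id, probs.tolist())
```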
Course Objectives
By the end of this course, students will be able to:
- Implement generative AI models from scratch, understanding transformer architecture and modern generative systems
- Master attention mechanisms including self-attention, multi-head attention, and causal masking for sequence generation (a minimal code sketch follows this list)
- Build complete training pipelines with proper optimization, evaluation metrics, and model persistence for generative tasks
- Apply fine-tuning techniques for both classification and instruction-following in generative AI applications
- Understand scaling principles and the relationship between model architecture, training data, and generative performance
- Critically evaluate modern generative AI capabilities, limitations, ethical considerations, and societal impact
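To give a concrete sense of the attention objectives above, here is a minimal single-head causal self-attention layer in PyTorch. This is an illustrative sketch under assumed conventions (the class and variable names are hypothetical), not the course's reference implementation; the multi-head and efficient-attention variants covered in lecture build on this same core.

```python
# Illustrative sketch (not course-provided code): a minimal single-head
# causal self-attention layer of the kind built in this course.
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    def __init__(self, d_model: int, context_len: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)   # joint Q, K, V projection
        self.out = nn.Linear(d_model, d_model)
        # upper-triangular mask blocks attention to future tokens
        mask = torch.triu(torch.ones(context_len, context_len), diagonal=1).bool()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5   # scaled dot-product
        scores = scores.masked_fill(self.mask[:t, :t], float("-inf"))
        weights = torch.softmax(scores, dim=-1)       # each row sums to 1
        return self.out(weights @ v)

# Quick check: batch of 2 sequences, 8 tokens, 32-dim embeddings
attn = CausalSelfAttention(d_model=32, context_len=128)
print(attn(torch.randn(2, 8, 32)).shape)  # torch.Size([2, 8, 32])
```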
Textbooks/Materials
- Build a Large Language Model (From Scratch) by Sebastian Raschka
- Foundations of Large Language Models by Tong Xiao and Jingbo Zhu, NiuTrans
Course Policies
- Attendance & Participation: This course meets twice weekly, and consistent attendance is essential. In-class activities and discussions count toward your grade. If you must miss a class, email the instructor in advance to arrange make-up work.
- Late Work: Assignments are due before class starts on the specified due date. Late work incurs a 10% penalty for each day it is late (days 1-5); for example, a submission two days late loses 20%. After 5 days late, the maximum possible score is 50%. No late work is accepted without prior approval.
- Academic Integrity: Students are expected to maintain the highest standards of academic integrity. Cheating, plagiarism, and any form of academic dishonesty, including unauthorized use of ChatGPT or other AI tools, are strictly prohibited. Violations will result in disciplinary actions, which may include failing the course. Use of AI tools is permitted only when explicitly authorized.
- Accommodations: Students who require accommodations for a disability should contact the Office of Student Accessibility as soon as possible. The instructor will work with you to ensure that all necessary accommodations are made to support your learning needs. Please provide your accommodation letter early in the semester.
Course Schedule
Lecture | Date | Topics | Assignments Due | Notes |
---|---|---|---|---|
Lecture 1 | Mon Aug 25 | Course Overview: Introduction to Generative AI and Course Roadmap | ||
Lecture 2 | Wed Aug 27 | LLM Foundations: Understanding Large Language Models and Pre-training | ||
- | Mon Sep 1 | Labor Day - No Class | | Holiday |
Lecture 3 | Wed Sep 3 | Text Data Processing: Tokenization and Data Preparation | ||
Lecture 4 | Mon Sep 8 | Attention Mechanisms: Understanding Self-Attention and Transformer Basics | ||
Lecture 5 | Wed Sep 10 | Building GPT Architecture: Implementing Core Model Components | ||
Lecture 6 | Mon Sep 15 | Model Training Pipeline: Pre-training Large Language Models from Scratch | ||
Lecture 7 | Wed Sep 17 | Fine-tuning Fundamentals: Supervised Fine-tuning for Text Classification | Project 1 Due | |
Lecture 8 | Mon Sep 22 | Transformer Deep Dive: Multi-layer Architecture and Parameter Optimization | ||
Lecture 9 | Wed Sep 24 | Instruction Fine-tuning: Aligning Models with Human Instructions | ||
Lecture 10 | Mon Sep 29 | Advanced Training Techniques: Learning Rate Scheduling and Regularization | ||
Lecture 11 | Wed Oct 1 | Generative Model Architecture: Decoder-Only Models and Text Generation | Project 2 Due | |
Guest Lecture | Mon Oct 6 | Industry Applications of Generative AI | | Midterm Week |
Guest Lecture | Wed Oct 8 | Current Research Frontiers in LLMs | | Midterm Week |
Lecture 12 | Mon Oct 13 | Advanced Self-Attention: Causal Masking and Sequence Dependencies | ||
Lecture 13 | Wed Oct 15 | Multi-Head Attention: Parallel Attention Mechanisms and Implementation | ||
Lecture 14 | Mon Oct 20 | Advanced Attention Patterns: Sparse Attention and Efficient Transformers | ||
Lecture 15 | Wed Oct 22 | Training at Scale: Distributed Training and Memory Optimization | Project 3 Due | |
Lecture 16 | Mon Oct 27 | Long Sequence Modeling: Position Embeddings and Context Length | ||
Lecture 17 | Wed Oct 29 | Optimization Strategies: Advanced Optimizers and Training Stability | ||
Lecture 18 | Mon Nov 3 | Prompting Fundamentals: Chain-of-Thought and Few-Shot Learning | ||
Lecture 19 | Wed Nov 5 | Advanced Prompting: Template Design and Context Engineering | ||
Lecture 20 | Mon Nov 10 | Automatic Prompt Optimization: Learning-Based Prompt Generation | ||
Lecture 21 | Wed Nov 12 | In-Context Learning: Few-Shot and Zero-Shot Capabilities | Project 4 Due | |
Lecture 22 | Mon Nov 17 | Human Feedback Alignment: RLHF and Preference Learning | ||
Lecture 23 | Wed Nov 19 | Advanced RLHF: Constitutional AI and Safety Alignment | ||
- | Nov 24-29 | Thanksgiving Break - No Classes | | Holiday Week |
Lecture 24 | Mon Dec 1 | Inference Optimization: Decoding Strategies and Sampling Methods | ||
Presentations | Wed Dec 3 | Final Project Presentations and Course Wrap-up | Project 5 Due | |
Finals Week | Dec 8-13 | Final Projects Due | Final Project Due Dec 13 | No Classes |
Grading Breakdown
Item | Total Possible Points | Percent of Grade |
---|---|---|
Assignments (Projects 1-5) | 300 | 60% |
Project 1: Pre-training Foundations and LLM Architecture Implementation | 60 | 12% |
Project 2: Embeddings & Advanced Tokenization | 60 | 12% |
Project 3: Generative Models and Advanced Self-Attention Systems | 60 | 12% |
Project 4: Multi-Head Attention and Advanced Prompting Strategies Implementation | 60 | 12% |
Project 5: Complete GPT Model Training with Instruction Fine-tuning and Human Feedback Alignment | 60 | 12% |
Team Final Project | 130 | 26% |
Attendance and Participation (14 weeks, 5 points/week) | 70 | 14% |
Bonus Points (Optional) | | Up to 3% |
Computer Science lecture/seminar attendance, course improvement suggestions, and other eligible activities (announced in advance) | | 0.5% each |