Course Syllabus
CSC 375/575 | Generative AI
Fall 2025 | August 25 - December 13, 2025
Finals Week: December 8-13, 2025
Instructor Information
Instructor: Ron (Rongyu) Lin
Email: rongyu.lin@quinnipiac.edu
Office Hours:
- Monday: 1:30 PM - 3:15 PM (In-person)
- Tuesday: 4:00 PM - 6:00 PM (In-person)
- Friday: 2:00 PM - 3:00 PM (Virtual) - Zoom Link
- By appointment
Class Format: In-person
Classroom: Tator Hall, Room 130
Schedule: Mondays & Wednesdays, 3:30 PM - 4:45 PM
Semester Dates: August 25, 2025 - December 13, 2025
Course Description
This course provides a comprehensive, hands-on approach to understanding and implementing Generative AI systems, with a focus on Large Language Models (LLMs). The course follows a progressive learning structure organized into the following major phases:
Foundations Phase covers core concepts using Sebastian Raschka's "Build a Large Language Model (From Scratch)", establishing fundamental understanding of transformer architecture, attention mechanisms, and basic model training.
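To give a sense of the kind of code built in this phase, the sketch below shows a minimal causal self-attention head in PyTorch. It is illustrative only: the class and variable names are simplifications for this syllabus, not the textbook's implementation.

```python
# Illustrative sketch of a single causal self-attention head (not the textbook's code).
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        # Causal mask: each position may attend only to itself and earlier positions.
        mask = torch.triu(torch.ones(x.size(1), x.size(1), dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        return weights @ v  # (batch, seq_len, d_model)

# Quick check with random data
out = CausalSelfAttention(d_model=32)(torch.randn(2, 10, 32))
print(out.shape)  # torch.Size([2, 10, 32])
```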
Large-Scale Training Systems Phase transitions to advanced topics from Xiao & Zhu's "Foundations of Large Language Models", focusing on distributed training, scaling laws, data processing, and efficient architectures for production-scale systems.
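The sketch below illustrates the core idea behind data-parallel training, simulated on a single machine with plain PyTorch. It is a simplified illustration: real systems distribute the batch shards across devices and combine gradients with collective operations (e.g., all-reduce) rather than a Python loop.

```python
# Illustrative sketch of data parallelism: split a batch into shards, compute
# per-shard gradients, then average them (here simulated sequentially on one machine).
import torch
import torch.nn.functional as F

model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

grads = []
for shard_x, shard_y in zip(x.chunk(4), y.chunk(4)):   # 4 "workers", 2 examples each
    model.zero_grad()
    loss = F.mse_loss(model(shard_x), shard_y)
    loss.backward()
    grads.append([p.grad.clone() for p in model.parameters()])

# Averaging equal-sized per-shard gradients reproduces the full-batch gradient.
avg = [torch.stack(gs).mean(dim=0) for gs in zip(*grads)]
print([g.shape for g in avg])
```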
Prompting & Tool Integration Phase explores systematic prompt design, chain-of-thought reasoning, retrieval-augmented generation (RAG), and automatic prompt optimization techniques.
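As a rough preview, the sketch below shows the skeleton of a RAG pipeline. It is deliberately simplified: retrieval here is plain word overlap and the generation step is omitted, whereas a real system would use embedding search and an LLM call.

```python
# Illustrative RAG skeleton: retrieve relevant documents, then build an augmented prompt.
def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"

docs = [
    "The transformer architecture relies on self-attention.",
    "RAG augments a prompt with retrieved documents.",
    "Scaling laws relate model loss to compute and data.",
]
print(build_prompt("What does RAG add to a prompt?", docs))
```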
Alignment & Inference Optimization Phase covers reinforcement learning from human feedback (RLHF), constitutional AI, safety alignment, and efficient inference strategies including inference-time scaling.
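The sketch below previews two of the decoding controls studied in this phase, temperature and top-k sampling. It is illustrative only and assumes a vector of raw next-token logits is already available.

```python
# Illustrative sketch of temperature + top-k sampling over next-token logits.
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 0.8, top_k: int = 50) -> int:
    logits = logits / temperature                      # flatten or sharpen the distribution
    top_vals, top_idx = torch.topk(logits, top_k)      # keep only the k most likely tokens
    probs = torch.softmax(top_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)   # sample within the top-k set
    return int(top_idx[choice])

print(sample_next_token(torch.randn(1000)))
```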
Students will build complete generative models from fundamental principles, master state-of-the-art techniques, and create functional AI applications incorporating modern prompting, alignment, and optimization methods.
Course Objectives
By the end of this course, students will be able to:
- Implement generative AI models from scratch, mastering transformer architecture and modern attention mechanisms
- Design and execute large-scale training systems using distributed training, scaling laws, and efficient architectures
- Develop sophisticated prompting strategies including chain-of-thought reasoning, RAG integration, and automatic optimization
- Apply advanced alignment techniques including RLHF, constitutional AI, and human preference learning for safe AI systems
- Optimize inference performance through efficient decoding strategies, caching mechanisms, and inference-time scaling
- Integrate theoretical understanding with practical implementation across the complete AI development lifecycle
Textbooks/Materials
Build a Large Language Model (From Scratch) by Sebastian Raschka
Used in Foundations Phase (Lectures 1-7) - Core transformer implementation and basic training
Foundations of Large Language Models by Tong Xiao and Jingbo Zhu (NiuTrans)
Used in Advanced Phases (Lectures 8-27) - Large-scale training, prompting, alignment, and inference optimization
Course Policies
- Attendance & Participation: This course meets in regularly scheduled sessions each week, and your consistent presence is essential. In-class activities and discussions count toward your grade. If you miss a class, email the instructor in advance to arrange make-up work.
- Late Work: Assignments are due before class starts on the specified due date. Late work incurs a 10% penalty for each day it is late (days 1-5); after 5 days, the maximum possible score is 50%. No late work will be accepted without prior approval.
- Academic Integrity: Students are expected to maintain the highest standards of academic integrity. Cheating, plagiarism, and any form of academic dishonesty, including unauthorized use of ChatGPT or other AI tools, are strictly prohibited. Violations will result in disciplinary actions, which may include failing the course. Use of AI tools is permitted only when explicitly authorized.
- Accommodations: Students who require accommodations for a disability should contact the Office of Student Accessibility as soon as possible. The instructor will work with you to ensure that all necessary accommodations are made to support your learning needs. Please provide your accommodation letter early in the semester.
Course Structure & Schedule
Note: This schedule provides a general framework. Specific pacing and assignment deadlines will be announced in class and via the course website to maintain flexibility.
| Phase | Topics | Readings |
|---|---|---|
| Phase 1: LLM Foundations & Architecture | Introduction to generative AI and historical perspective • Understanding large language models and pre-training • Text data processing and tokenization • Attention mechanisms and transformer basics • Building GPT architecture components | Raschka Ch. 1-4 |
| Phase 2: Model Training & Fine-tuning | Pre-training pipeline from scratch • Classification fine-tuning techniques • Instruction fine-tuning and alignment • Scaling laws and model behavior | Raschka Ch. 5-7; Xiao & Zhu Ch. 2 |
| Phase 3: Large-Scale Training Systems | Distributed training architecture and design • Data parallel vs. model parallel techniques • Large-scale data processing pipelines • Training infrastructure and optimization strategies | Xiao & Zhu Ch. 2 |
| Phase 4: Efficiency & Long Context | Long context processing and HPC optimization • Efficient attention variants (sparse, linear) • Memory optimization and caching strategies • Advanced position encoding and extrapolation | Xiao & Zhu Ch. 2 |
| Phase 5: Prompt Engineering & RAG | Systematic prompt design and best practices • Chain-of-thought reasoning techniques • Retrieval-augmented generation (RAG) systems • Tool integration and automatic optimization • RLHF and human feedback integration | Xiao & Zhu Ch. 3-4 |
| Phase 6: AI Safety & Alignment | Constitutional AI and value alignment • Safety considerations and ethical implications • Human preference learning and alignment • Advanced RLHF techniques | Xiao & Zhu Ch. 4 |
| Phase 7: Inference & Future Directions | Efficient inference and decoding strategies • Inference-time scaling and optimization • Current trends and research directions • Final project presentations and discussions | Xiao & Zhu Ch. 5 |
Grading Breakdown
| Component | Weight | Description |
|---|---|---|
| Assignments (4 total) | 48% | Assignment 1: GPT Architecture Implementation • Assignment 2: Text Generation & Advanced Decoding • Assignment 3: Classification & Instruction Fine-tuning • Assignment 4: Prompt Engineering & RAG Systems |
| Final Project | 29% | Team project demonstrating comprehensive application of course concepts |
| Attendance & Participation | 23% | Active engagement in lectures and in-class activities |
| Bonus Opportunities | Up to 3% | Research seminars, course improvement suggestions, and other activities |