Get a head start on program admission

Preview this course in the non-credit experience today! 
Start working toward program admission and requirements right away. Work you complete in the non-credit experience will transfer to the for-credit experience when you upgrade and pay tuition. See How It Works for details.

Cross-listed with DTSA 5510

  • Course Type: Breadth
  • Specialization: Machine Learning: Theory & Hands-On Practice with Python
  • Instructor: Dr. Geena Kim, Adjunct Professor of Computer Science
  • Prior knowledge needed: TBD

View on Coursera

Learning Outcomes

  • Explain what unsupervised learning is, and list methods used in unsupervised learning.

  • List and explain algorithms for various matrix factorization methods, and describe what each is used for.

Course Content

Duration: 8 hours

Now that you have a solid foundation in Supervised Learning, we shift our attention to uncovering hidden structure in unlabeled data. We will start with an introduction to Unsupervised Learning. In this course, the models no longer have labels to learn from; they need to make sense of the data from the observations themselves. This week we are diving into Principal Component Analysis (PCA), a foundational dimension reduction technique. When you first start learning this topic, it might not seem easy, and there is undoubtedly some math involved. However, PCA can be grasped conceptually more readily than you might anticipate. In the Supervised Learning course, we struggled with the Curse of Dimensionality; this week, we will see how PCA can reduce the number of dimensions and improve classification/regression tasks. You will have reading, a quiz, and a Jupyter lab/Peer Review in which you implement the PCA algorithm. It's only the first week of the course, but keep in mind that in Week 5 you will turn in a final Unsupervised Learning project on a topic of your choice. If you are joining us from the Supervised Learning course, the project follows a rubric and workflow similar to that course's final project. Since this course moves fast, it is a good idea to look at the final project rubric (and upcoming course topics) this week and spend some time choosing a dataset and project topic.
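
As a preview of the kind of implementation the lab asks for, here is a minimal from-scratch PCA sketch in NumPy; the function name, interface, and toy data are illustrative assumptions, not the lab's actual specification.

    # Illustrative PCA via eigen-decomposition of the covariance matrix
    import numpy as np

    def pca(X, n_components):
        """Project X (n_samples x n_features) onto its top principal components."""
        # Center each feature at zero mean
        X_centered = X - X.mean(axis=0)
        # Covariance matrix of the features
        cov = np.cov(X_centered, rowvar=False)
        # eigh is used because the covariance matrix is symmetric
        eigenvalues, eigenvectors = np.linalg.eigh(cov)
        # Sort eigenvectors by descending eigenvalue (explained variance)
        order = np.argsort(eigenvalues)[::-1]
        components = eigenvectors[:, order[:n_components]]
        # Project the centered data onto the principal components
        return X_centered @ components

    # Example: reduce 4-dimensional data to 2 dimensions
    X = np.random.rand(100, 4)
    X_reduced = pca(X, n_components=2)
    print(X_reduced.shape)  # (100, 2)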

Duration: 7 hours

This week, we are working with clustering, one of the most popular unsupervised learning methods. Last week, we used PCA to find a low-dimensional representation of data; clustering, on the other hand, finds subgroups among observations. We can use it to gain meaningful intuition about the data structure or in a procedure like cluster-then-predict. Clustering has applications ranging from customer segmentation and advertising in marketing, to identifying similar movies and music, to genomics research and the discovery of disease subtypes. We will focus mainly on K-means clustering and hierarchical clustering, considering the benefits and disadvantages of each and the choice of metrics such as distance and linkage. We have reading, a quiz, and a Jupyter notebook lab/Peer Review this week. Make sure that you are working on your final project. To stay on track, finalize your project topic and complete any EDA and preprocessing so that next week you can focus on the central part of the project: your unsupervised learning models.
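
For orientation, the following short sketch contrasts K-means and hierarchical (agglomerative) clustering using scikit-learn; the synthetic dataset and parameter choices are placeholders, not the course's lab setup.

    # Compare K-means and agglomerative clustering on synthetic blob data
    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans, AgglomerativeClustering

    X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

    # K-means: the number of clusters is chosen up front
    kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    # Agglomerative clustering: the linkage criterion ("ward", "average",
    # "complete", "single") changes how clusters are merged
    hier_labels = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(X)

    print(kmeans_labels[:10], hier_labels[:10])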

Duration: 7 hours

This week we are working with Recommender Systems. Websites like Netflix, Amazon, and YouTube surface personalized recommendations for movies, items, or videos. We explore the strategies recommendation engines use to predict users' preferences, considering popularity-based, content-based, and collaborative filtering approaches and the choice of similarity metrics. Recommendation systems come with challenges, such as the time complexity of operations and sparse data. This week is relatively math dense. You will have a quiz in which you work through different similarity metric calculations. Give yourself time for this week's Jupyter notebook lab and consider performant implementations. The Peer Review section this week is short. Since this course is dense, please make sure that you are working on your final project to turn in during Week 5.
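
As a rough illustration of the similarity-metric idea, the sketch below computes item-item cosine similarities over a tiny toy ratings matrix; the data and scoring scheme are assumptions for demonstration only.

    # Toy item-item collaborative filtering with cosine similarity
    import numpy as np
    from scipy.sparse import csr_matrix
    from sklearn.metrics.pairwise import cosine_similarity

    # Rows = users, columns = items; 0 means "not rated" (sparse in practice)
    ratings = csr_matrix(np.array([
        [5, 3, 0, 1],
        [4, 0, 0, 1],
        [1, 1, 0, 5],
        [0, 0, 5, 4],
    ]))

    # Item-item similarity: compare columns of the ratings matrix
    item_similarity = cosine_similarity(ratings.T)

    # Scores for user 0: weight each item by its similarity to the items
    # the user has already rated
    user_ratings = ratings[0].toarray().ravel()
    scores = item_similarity @ user_ratings
    print(scores)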

Duration: 13 hours

We are already at the last week of course material! Get ready for another math-dense week. Last week, we learned about Recommendation Systems and used a neighborhood method of collaborative filtering based on similarity measures. Latent factor models, including the popular Matrix Factorization (MF), can also be used for collaborative filtering. A 1999 publication in Nature made Non-negative Matrix Factorization extremely popular. MF has many applications, including image analysis, text mining/topic modeling, recommender systems, audio signal separation, analytical chemistry, and gene expression analysis. This week, we focus on Singular Value Decomposition, Non-negative Matrix Factorization, and approximation methods. We have reading, a quiz, and a Kaggle mini-project that uses matrix factorization to categorize news articles. Your final course project is due next week. Keep running experiments and working on the primary analysis for your final project; ideally, finish experimenting and iterating with your models this week so that next week you can focus on preparing your final project deliverables.
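
To show how matrix factorization can categorize text, here is a hedged NMF topic-modeling sketch with scikit-learn; the example documents and the number of topics are stand-ins, not the mini-project's data or grading setup.

    # Topic modeling of short documents via Non-negative Matrix Factorization
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    docs = [
        "the team won the championship game last night",
        "new vaccine trial shows promising results",
        "stock markets rallied after the earnings report",
        "the striker scored twice in the final match",
    ]

    # Term-document matrix (TF-IDF weighted, non-negative by construction)
    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(docs)

    # Factor X into W (document-topic weights) and H (topic-term weights)
    nmf = NMF(n_components=2, init="nndsvd", random_state=0)
    W = nmf.fit_transform(X)
    H = nmf.components_

    # Assign each document to its highest-weight topic
    print(W.argmax(axis=1))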

Duration: 6.25 hours

This module contains materials for the final exam. If you've upgraded to the for-credit version of this course, please make sure you review the additional for-credit materials in the introductory module and anywhere else they may be found.

Notes

  • Cross-listed Courses: Courses offered under two or more programs. They are considered equivalent when evaluating progress toward degree requirements, and you may not earn credit for more than one version of a cross-listed course.
  • Page Updates: This page is periodically updated. Course information on the Coursera platform supersedes the information on this page. Click the View on Coursera button above for the most up-to-date information.