This course covers practical algorithms and theory for machine learning from a variety of perspectives. Topics include supervised learning (generative and discriminative learning, parametric and non-parametric methods, deep neural networks, support vector machines) and unsupervised learning (clustering, dimensionality reduction, kernel methods). The course also discusses recent applications of machine learning, such as computer vision, data mining, natural language processing, speech recognition, and robotics. Students will implement selected machine learning algorithms in Python and PyTorch.



Skills you'll gain
- Feature Engineering
- Unsupervised Learning
- Machine Learning
- Dimensionality Reduction
- Linear Algebra
- Reinforcement Learning
- Statistical Methods
- Artificial Neural Networks
- Artificial Intelligence and Machine Learning (AI/ML)
- Data Science
- Classification and Regression Trees (CART)
- Supervised Learning
- Deep Learning
- PyTorch (Machine Learning Library)
- Statistical Machine Learning
Details to know

Add to your LinkedIn profile
August 2025
6 assignments

There are 7 modules in this course
This week covers key techniques in machine learning, beginning with the kernel trick, which adds model flexibility without explicitly computing high-dimensional feature maps. We will also explore decision trees for both regression and classification tasks, learning to formulate Gini impurity and entropy as measures of impurity within tree splits (see the short sketch after this module summary). Practical exercises focus on tuning tree depth, an essential step in balancing model accuracy against overfitting. Additionally, we will introduce ensemble models, demonstrating how combining multiple trees can improve predictive power and robustness. These exercises will give you experience optimizing decision trees and ensemble methods.
Included
3 videos, 6 readings, 2 assignments, 1 discussion prompt
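To make the impurity measures concrete, here is a minimal NumPy sketch that computes Gini impurity and entropy and scores a candidate split; the toy data, feature index, and threshold are illustrative assumptions, not course materials.

```python
import numpy as np

def gini(labels):
    """Gini impurity: 1 - sum_k p_k^2 over class proportions p_k."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    """Entropy: -sum_k p_k log2 p_k over class proportions p_k."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def split_impurity(X, y, feature, threshold, measure=gini):
    """Weighted impurity of the two children produced by X[:, feature] <= threshold."""
    left = y[X[:, feature] <= threshold]
    right = y[X[:, feature] > threshold]
    n = len(y)
    return (len(left) / n) * measure(left) + (len(right) / n) * measure(right)

# Toy data: a lower weighted impurity after the split means a more useful threshold.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([0, 0, 1, 1])
print(gini(y), split_impurity(X, y, feature=0, threshold=2.5))
```

A tree learner would loop this scoring over candidate features and thresholds and pick the split with the lowest weighted impurity.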
This week’s module explores foundational concepts in classification by comparing discriminative and generative models. You will analyze the mathematical theory behind generative models, gaining insight into how they capture the underlying data distribution to make predictions. Key focus areas include formulating the Gaussian Discriminant Analysis (GDA) model and deriving mathematical expressions for the Naive Bayes classifier (a small worked sketch follows this module summary). Through detailed derivations and examples, you will see how each model works and the types of data it best serves. By the end of this module, you will be able to apply both GDA and Naive Bayes, choosing the appropriate model based on data characteristics and classification requirements.
Included
2 videos, 3 readings, 2 assignments
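As a rough illustration of the generative viewpoint, the sketch below fits a Gaussian Naive Bayes classifier from scratch (per-class priors plus per-feature means and variances) and classifies a new point by its log posterior. The data and variable names are illustrative assumptions.

```python
import numpy as np

def fit_gaussian_nb(X, y, eps=1e-9):
    """Estimate class priors and per-feature Gaussian parameters for each class."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = {
            "prior": len(Xc) / len(X),
            "mean": Xc.mean(axis=0),
            "var": Xc.var(axis=0) + eps,   # eps avoids division by zero
        }
    return params

def predict_gaussian_nb(params, x):
    """Pick the class maximizing log p(x | c) + log p(c) under the naive independence assumption."""
    best_class, best_score = None, -np.inf
    for c, p in params.items():
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * p["var"])
                                + (x - p["mean"]) ** 2 / p["var"])
        score = log_lik + np.log(p["prior"])
        if score > best_score:
            best_class, best_score = c, score
    return best_class

X = np.array([[1.0, 2.0], [1.2, 1.8], [4.0, 5.0], [4.2, 5.1]])
y = np.array([0, 0, 1, 1])
model = fit_gaussian_nb(X, y)
print(predict_gaussian_nb(model, np.array([4.1, 5.0])))  # expected: class 1
```

GDA follows the same recipe but fits class means with a shared covariance matrix instead of assuming independent features.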
This week’s module introduces neural networks, starting with how to implement linear and logistic regression models. You will explore how neural networks extend beyond linear decision boundaries to represent complex nonlinear relationships, making them highly adaptable to various data types. Key topics this week include conducting a forward pass through a neural network to understand how data flows and predictions are generated. The week also introduces backpropagation, the mechanism by which neural networks propagate errors backward to adjust weights and improve accuracy (a minimal sketch follows this module summary). Hands-on exercises in Python will let you implement forward and backward passes, solidifying your understanding of neural network operations and preparing you for more advanced deep learning applications.
Included
1 video, 3 readings, 1 assignment
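Here is a minimal NumPy sketch of one forward and backward pass for a single-hidden-layer network trained with squared error; the layer sizes, activation, and learning rate are illustrative choices, not the course's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 3))                     # 16 examples, 3 features
y = rng.normal(size=(16, 1))                     # regression targets

# Parameters of a 3 -> 4 -> 1 network
W1, b1 = rng.normal(size=(3, 4)) * 0.1, np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)) * 0.1, np.zeros(1)
lr = 0.1

# Forward pass
z1 = X @ W1 + b1
a1 = np.tanh(z1)                                 # hidden activation
y_hat = a1 @ W2 + b2                             # linear output layer
loss = np.mean((y_hat - y) ** 2)

# Backward pass (chain rule through the mean squared error)
d_yhat = 2 * (y_hat - y) / len(X)
dW2, db2 = a1.T @ d_yhat, d_yhat.sum(axis=0)
d_a1 = d_yhat @ W2.T
d_z1 = d_a1 * (1 - a1 ** 2)                      # tanh'(z) = 1 - tanh(z)^2
dW1, db1 = X.T @ d_z1, d_z1.sum(axis=0)

# Gradient descent update
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
print("loss before update:", loss)
```

Repeating the forward pass, backward pass, and update in a loop is exactly the training procedure the module builds toward.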
This week’s module focuses on deep neural networks (DNNs) and their practical applications in machine learning. We will begin by describing the structure and functionality of a deep neural network, exploring how multiple layers enable the model to learn complex patterns. The module includes hands-on exercises implementing full forward and backward passes on DNNs, reinforcing the process of training and error correction. We will also analyze convolutional neural networks (CNNs) and their role in image processing and feature extraction. By the end of the module, you will be able to implement and train neural networks in PyTorch (a small example follows this module summary), preparing you to work with deep learning models in real-world applications.
Included
2 videos, 3 readings
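A rough PyTorch sketch of the kind of model this module works toward: a small convolutional network and one training step on random data. The architecture, input size, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# A small CNN for 28x28 single-channel images and 10 classes (illustrative sizes)
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # (N, 8, 28, 28)
    nn.ReLU(),
    nn.MaxPool2d(2),                             # (N, 8, 14, 14)
    nn.Flatten(),                                # (N, 8 * 14 * 14)
    nn.Linear(8 * 14 * 14, 10),
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on a random batch
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))

optimizer.zero_grad()
loss = criterion(model(images), labels)  # forward pass
loss.backward()                          # backward pass via autograd
optimizer.step()                         # parameter update
print(loss.item())
```

Note that PyTorch's autograd computes the backward pass automatically, in contrast to the hand-derived gradients of the previous module.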
This week’s module explores advanced clustering and estimation techniques, starting with expectation maximization (EM), a widely used algorithm for parameter estimation in statistical models. You will formulate the theoretical foundations of k-means clustering, learning how it partitions data into distinct groups based on similarity. We also cover Gaussian mixture models (GMMs), explaining how they model data as a mixture of Gaussian distributions. Additionally, you will derive the convergence properties of the EM algorithm, understanding how it iteratively improves its estimates (a compact sketch follows this module summary). Through practical exercises you will implement these algorithms, allowing you to apply clustering and estimation techniques to complex datasets in machine learning tasks.
Included
2 videos, 5 readings
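For intuition about the E- and M-steps, here is a compact NumPy sketch of EM for a two-component, one-dimensional Gaussian mixture; the synthetic data, initial values, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data drawn from two Gaussians
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Initial guesses for mixing weights, means, and variances
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibilities r[i, k] = p(component k | x_i)
    r = np.stack([pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)], axis=1)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from the responsibility-weighted data
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(pi, mu, var)  # should approach the true weights, means, and variances
```

Replacing the soft responsibilities with hard nearest-centroid assignments recovers the k-means algorithm as a limiting case.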
This week we introduce dimensionality reduction techniques, which are essential for simplifying complex data while preserving key features. You will learn to formulate these techniques mathematically via eigenvalue decomposition, gaining insight into how principal components are derived (a short sketch follows this module summary). We will compare three key methods: Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Factor Analysis, highlighting their differences and applications. You will also explore spectral clustering, a powerful method for grouping data based on graph theory. Autoencoders will be demonstrated as a deep learning approach to reducing dimensionality and learning efficient data representations. Hands-on coding exercises will let you implement these techniques, providing practical skills for tackling high-dimensional datasets in machine learning and data analysis.
Included
1 video, 4 readings
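As one concrete instance of the eigenvalue view, the sketch below implements PCA in NumPy by eigendecomposing the sample covariance matrix and projecting onto the top components; the data dimensions are illustrative.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components via eigendecomposition."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)             # d x d sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]   # indices of the largest eigenvalues
    components = eigvecs[:, order]                      # d x k projection matrix
    return X_centered @ components, eigvals[order]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))  # correlated features
Z, explained = pca(X, n_components=2)
print(Z.shape, explained)  # (100, 2) and the two largest eigenvalues
```

ICA, Factor Analysis, and autoencoders pursue the same goal of compact representations but with different statistical assumptions and objectives.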
In this final week of the course, we introduce Markov Decision Processes (MDPs), a foundational framework for decision-making in uncertain environments. You will learn to use MDPs to model problems where outcomes depend on both current states and actions. This week’s module will guide you through developing a mathematical framework for MDPs, including key components such as states, actions, and rewards. You will also learn to implement learning processes with techniques such as value iteration and policy iteration, which are crucial for finding optimal decision strategies (a toy value-iteration sketch follows this module summary). Practical exercises will help you apply these concepts to real-world problems in reinforcement learning and optimal decision-making.
Included
2 readings, 1 assignment
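To ground the value-iteration idea, here is a minimal Python sketch on a made-up two-state MDP; the transition table, rewards, and discount factor are invented for illustration.

```python
# Hypothetical MDP: transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 5.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 1.0)],
           "go":   [(1.0, "s0", 0.0)]},
}
gamma = 0.9  # discount factor

# Value iteration: V(s) <- max_a sum_{s'} p(s' | s, a) * [r + gamma * V(s')]
V = {s: 0.0 for s in transitions}
for _ in range(100):
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }

# Greedy policy extracted from the converged value function
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in actions[a]))
    for s, actions in transitions.items()
}
print(V, policy)
```

Policy iteration reaches the same optimal policy by alternating full policy evaluation with greedy policy improvement instead of sweeping values directly.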
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Instructor

Offered by