A Markov chain can be used to model the evolution of a sequence of random events where the probabilities for each event depend solely on the previous one. Once a state in the sequence is observed, previous values are no longer relevant for predicting future values. Markov chains have many applications for modeling real-world phenomena in a myriad of disciplines, including physics, biology, chemistry, queueing theory, and information theory. More recently, they have been recognized as important tools in the world of artificial intelligence (AI), where algorithms are designed to make intelligent decisions based on context and without human input. Markov chains can be particularly useful for natural language processing and generative AI algorithms, where the respective goals are to make predictions and to create new data in the form of, for example, new text or images. In this course, we will explore examples of both. While generative AI models are generally far more complex than Markov chains, the study of the latter provides an important foundation for the former. Additionally, Markov chains provide the basis for a powerful class of so-called Markov chain Monte Carlo (MCMC) algorithms that can be used to sample values from complex probability distributions used in AI and beyond.
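To make the Markov property concrete, here is a minimal sketch (not part of the course materials) of simulating a Markov chain in Python; the weather states and transition probabilities below are made up for illustration, and each step is generated using only the current state.

```python
# A minimal sketch (illustrative only): simulating a two-state Markov chain.
# The state labels and transition matrix are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

states = ["sunny", "rainy"]
P = np.array([[0.9, 0.1],   # P[i, j] = probability of moving from state i to state j
              [0.5, 0.5]])

def simulate(n_steps, start=0):
    """Generate a path; each step depends only on the current state (the Markov property)."""
    path = [start]
    for _ in range(n_steps):
        current = path[-1]
        path.append(rng.choice(len(states), p=P[current]))
    return [states[i] for i in path]

print(simulate(10))
```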



Discrete-Time Markov Chains and Monte Carlo Methods
This course is part of the Foundations of Probability and Statistics Specialization

Instructor: Jem Corcoran
What you will learn
Analyze long-term behavior of Markov processes for the purposes of both prediction and understanding equilibrium in dynamic stochastic systems
Apply Markov decision processes to solve problems involving uncertainty and sequential decision-making
Simulate data from complex probability distributions using Markov chain Monte Carlo algorithms
Skills you will gain
- Artificial Intelligence
- Machine Learning Algorithms
- Generative AI
- Statistical Modeling
- Mathematical Modeling
Details to know

Add to your LinkedIn profile
August 2025
15 assignments


There are 6 modules in this course
Welcome to the course! This module contains logistical information to get you started!
Included
7 readings, 4 ungraded labs
In this module we will review definitions and basic computations of conditional probabilities. We will then define a Markov chain and its associated transition probability matrix and learn how to do many basic calculations. We will then tackle more advanced calculations involving absorbing states and techniques for putting a longer history into a Markov framework!
Included
12 videos, 5 assignments, 2 programming assignments
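As a preview of the matrix calculations described in this module, the sketch below (an illustration with made-up numbers, not the course's own code) computes n-step transition probabilities by raising a transition matrix to a power; state 2 is absorbing.

```python
# Illustrative sketch: n-step transition probabilities via matrix powers.
# The 3-state chain below is hypothetical; state 2 is absorbing (P[2, 2] = 1).
import numpy as np

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.0, 0.0, 1.0]])

# The (i, j) entry of P^n is P(X_n = j | X_0 = i).
P10 = np.linalg.matrix_power(P, 10)
print("P(X_10 = 2 | X_0 = 0) =", P10[0, 2])   # probability of having been absorbed by time 10, starting from state 0
```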
What happens if you run a Markov chain out for a "very long time"? In many cases, it turns out that the chain will settle into a sort of "equilibrium" or "limiting distribution" where you will find it in various states with various fixed probabilities. In this Module, we will define communication classes, recurrence, and periodicity properties for Markov chains with the ultimate goal of being able to answer existence and uniqueness questions about limiting distributions!
Included
9 videos, 3 assignments, 2 programming assignments
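The following sketch (illustrative only; the chain is hypothetical) shows the limiting behavior described in this module: for an irreducible, aperiodic chain, the rows of P^n settle toward a common limiting distribution as n grows.

```python
# Illustrative sketch: the rows of P^n approach a common limiting distribution
# for an irreducible, aperiodic chain. The matrix below is made up.
import numpy as np

P = np.array([[0.2, 0.8],
              [0.6, 0.4]])

for n in (1, 5, 50):
    print(f"P^{n}:\n{np.linalg.matrix_power(P, n)}\n")
# Both rows of P^50 are (approximately) the limiting distribution, about (0.43, 0.57).
```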
In this Module, we will define what is meant by a "stationary" distribution for a Markov chain. You will learn how it relates to the limiting distribution discussed in the previous Module. You will also spend time learning about the very powerful "first-step analysis" technique for solving many, otherwise intractable, problems of interest surrounding Markov chains. We will discuss rates of convergence for a Markov chain to settle into its "stationary mode", and just maybe we'll give a monkey a keyboard and hope for the best!
Included
11 videos, 3 assignments, 2 programming assignments
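As a small illustration of first-step analysis (with a made-up transition matrix, not taken from the course), the sketch below conditions on the first step to compute the expected number of steps until absorption.

```python
# Illustrative sketch of first-step analysis: expected number of steps until
# absorption in state 2, for a hypothetical chain with one absorbing state.
import numpy as np

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.0, 0.0, 1.0]])   # state 2 is absorbing

# Conditioning on the first step gives h(i) = 1 + sum_j Q[i, j] h(j) for
# transient states i, i.e. (I - Q) h = 1, where Q restricts P to the transient states.
Q = P[:2, :2]
h = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print("expected steps to absorption from states 0 and 1:", h)
```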
In this Module we explore several options for simulating values from discrete and continuous distributions. Several of the algorithms we consider will involve creating a Markov chain with a stationary or limiting distribution that is equivalent to the "target" distribution of interest. This Module includes the inverse cdf method, the accept-reject algorithm, the Metropolis-Hastings algorithm, the Gibbs sampler, and a brief introduction to "perfect sampling".
Included
13 videos, 2 assignments, 2 programming assignments, 4 ungraded labs
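To give a flavor of the Metropolis-Hastings algorithm covered in this module, here is a minimal random-walk sampler targeting an unnormalized standard normal density; the target and proposal scale are assumptions chosen purely for illustration.

```python
# Illustrative sketch of a random-walk Metropolis-Hastings sampler targeting a
# standard normal density (unnormalized); not the course's own code.
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    # Unnormalized density of N(0, 1); the normalizing constant is not needed.
    return np.exp(-0.5 * x**2)

def metropolis_hastings(n_samples, step=1.0, x0=0.0):
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()        # symmetric proposal
        if rng.random() < min(1.0, target(proposal) / target(x)):
            x = proposal                                    # accept; otherwise keep the current value
        samples[i] = x
    return samples

draws = metropolis_hastings(10_000)
print("sample mean ~ 0:", draws.mean(), " sample sd ~ 1:", draws.std())
```

Because the proposal is symmetric, the acceptance probability reduces to a ratio of target densities, which is why only an unnormalized target is required.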
In reinforcement learning, an "agent" learns to make decisions in an environment by receiving rewards or punishments for taking various actions. A Markov decision process (MDP) is a reinforcement learning setting in which, given the current state of the environment and the agent's current action, the past states and actions that brought the agent to that point are irrelevant. In this Module, we learn about the famous "Bellman equation", which is used to recursively assign rewards to various states, and how to use it to find an optimal strategy for the agent!
Included
5 videos, 2 assignments, 2 programming assignments, 4 ungraded labs
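As a tiny illustration of the Bellman equation in action (all numbers below are hypothetical, not from the course), the sketch applies value iteration to a two-state, two-action MDP.

```python
# Illustrative sketch of value iteration on a tiny, made-up MDP with 2 states
# and 2 actions, using the Bellman optimality equation
#   V(s) = max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V(s') ].
import numpy as np

gamma = 0.9
# P[a][s, s'] = transition probability under action a; R[s, a] = immediate reward.
P = np.array([[[0.8, 0.2],
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(200):
    # Q[s, a] = R[s, a] + gamma * expected value of the next state under action a
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V = Q.max(axis=1)

print("optimal values:", V, " optimal policy:", Q.argmax(axis=1))
```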