[MINI] Markov Decision Processes
Published January 26, 2018
20 min

    Formally, an MDP is defined as a tuple (S, A, T, R): the set of states, the set of actions, the transition function, and the reward function. This episode examines each of these components and presents them in the context of simple examples. Although MDPs suffer from the curse of dimensionality, they are a useful formalism and a basic concept we will expand on in future episodes.
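To make the tuple concrete, here is a minimal sketch of a tiny MDP in Python. The states, actions, probabilities, and rewards below are invented for illustration (they are not from the episode); the value-iteration routine shows one standard way the (S, A, T, R) tuple gets used.

```python
# Hypothetical two-state MDP: a machine that can run "slow" or "fast",
# and is either "cool" or "hot". All numbers are illustrative.

S = ["cool", "hot"]          # states
A = ["slow", "fast"]         # actions

# Transition function T: (state, action) -> list of (next_state, probability)
T = {
    ("cool", "slow"): [("cool", 1.0)],
    ("cool", "fast"): [("cool", 0.5), ("hot", 0.5)],
    ("hot", "slow"):  [("cool", 0.5), ("hot", 0.5)],
    ("hot", "fast"):  [("hot", 1.0)],
}

# Reward function R: (state, action) -> immediate reward
R = {
    ("cool", "slow"): 1.0,
    ("cool", "fast"): 2.0,
    ("hot", "slow"):  1.0,
    ("hot", "fast"): -10.0,   # running fast while hot is penalized
}

def value_iteration(gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in S}
    while True:
        V_new = {
            s: max(
                R[(s, a)] + gamma * sum(p * V[s2] for s2, p in T[(s, a)])
                for a in A
            )
            for s in S
        }
        if max(abs(V_new[s] - V[s]) for s in S) < tol:
            return V_new
        V = V_new

V = value_iteration()
```

Because the "hot" state only offers a costly fast action or a risky slow one, its optimal value comes out lower than that of "cool". Note that the tables above grow with |S| x |A|, which is exactly where the curse of dimensionality mentioned in the show notes bites.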
