Informatics and Machine Learning: From Martingales to Metaheuristics


There are two ways to perform estimation in a conditional problem. The first is to seek the maximal probability over the choice of outcome, with the conditioning held fixed (maximum a posteriori [MAP]); the second is to seek the maximal probability (referred to as a "likelihood" in this context) over the choice of conditioning, with the outcome held fixed (maximum likelihood [ML]).

MAP Estimate:

$$\hat{x}_{\mathrm{MAP}} = \arg\max_{x} \, P(x \mid y)$$

ML Estimate:

$$\hat{y}_{\mathrm{ML}} = \arg\max_{y} \, P(x \mid y)$$
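As a concrete sketch of the distinction, consider a small discrete joint distribution. The distribution values and variable names below are illustrative assumptions, not taken from the text; the point is only that MAP maximizes over the outcome with the conditioning fixed, while ML maximizes over the conditioning with the outcome fixed.

```python
# Hypothetical joint distribution P(x, y); all values here are made up
# for illustration.
joint = {
    ("x0", "y0"): 0.10, ("x0", "y1"): 0.30,
    ("x1", "y0"): 0.35, ("x1", "y1"): 0.25,
}
xs = ["x0", "x1"]
ys = ["y0", "y1"]

def conditional(x, y):
    """P(x | y) = P(x, y) / P(y)."""
    p_y = sum(p for (xi, yi), p in joint.items() if yi == y)
    return joint[(x, y)] / p_y

# MAP estimate: conditioning y is observed; choose the outcome x that
# maximizes the posterior P(x | y).
y_obs = "y1"
x_map = max(xs, key=lambda x: conditional(x, y_obs))

# ML estimate: outcome x is observed; choose the conditioning y that
# maximizes the likelihood P(x | y) viewed as a function of y.
x_obs = "x0"
y_ml = max(ys, key=lambda y: conditional(x_obs, y))

print("MAP outcome given", y_obs, "->", x_map)
print("ML conditioning given", x_obs, "->", y_ml)
```

Note that both estimates maximize the same conditional probability table; they differ only in which argument is searched over and which is held fixed.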


2.6 Emergent Distributions and Series

In this section we consider a r.v., X, focusing on examples where its outcomes are fully enumerated (such as the 0 or 1 outcomes of a coin flip). We examine a series of observations of the r.v., X, to arrive at the Law of Large Numbers (LLN). The emergent structure describing a r.v. from a series of observations is often expressed in terms of probability distributions, the most famous being the Gaussian distribution (a.k.a. the Normal distribution, or Bell curve).
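The emergence of the Gaussian from enumerated 0/1 outcomes can be seen numerically. The following sketch, assuming a fair coin and an arbitrary seed (both are choices made here, not specified in the text), tallies the number of heads in repeated runs of n flips and compares the empirical frequencies against the Gaussian density with the binomial's mean and variance:

```python
import math
import random

random.seed(0)
n, trials = 100, 10000  # illustrative sizes

# Empirical distribution of heads counts over many runs of n fair flips.
counts = {}
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(n))
    counts[heads] = counts.get(heads, 0) + 1

# Gaussian approximation to Binomial(n, 1/2): mean n/2, variance n/4.
mu, sigma = n / 2, math.sqrt(n / 4)

def gauss(k):
    return math.exp(-(k - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

for k in range(45, 56):
    emp = counts.get(k, 0) / trials
    print(f"{k:3d}  empirical={emp:.4f}  gaussian={gauss(k):.4f}")
```

The two columns agree closely near the mean, which is the Bell-curve structure the text refers to as "emergent" from a series of enumerated observations.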

2.6.1 The Law of Large Numbers (LLN)

The LLN will now be derived in the classic "weak" form. The "strong" form is derived in the modern mathematical context of Martingales in Section 1.1.
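Before the derivation, the weak LLN can be illustrated empirically. This is only a simulation sketch, not the derivation itself; the fair-coin probability p = 0.5 and the seed are arbitrary choices made here. It tracks how far the sample mean of n Bernoulli trials sits from p as n grows:

```python
import random

random.seed(42)
p = 0.5  # fair-coin success probability (illustrative choice)

deviations = []
for n in (10, 100, 1000, 10000, 100000):
    sample_mean = sum(random.random() < p for _ in range(n)) / n
    deviations.append(abs(sample_mean - p))
    print(f"n={n:6d}  sample mean={sample_mean:.5f}  |mean - p|={deviations[-1]:.5f}")
```

The deviation of the sample mean from p shrinks as n grows, which is exactly the convergence-in-probability statement the weak LLN makes precise (classically via a Chebyshev-type bound on the variance of the sample mean).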