Consider an event observed k times in N trials, with frequency k/N. As N → ∞ we get the LLN (weak): k/N → p, i.e. the observed frequency converges (in probability) to the underlying event probability p.
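As a minimal sketch (not from the book; the Bernoulli event probability p = 0.8 is assumed to match the running example below), this convergence can be checked empirically with just the standard library:

import random

p = 0.8  # assumed event probability
for N in (10, 100, 1000, 10000, 100000):
    k = sum(1 for _ in range(N) if random.random() < p)  # event count in N tries
    print(N, k / N)  # the frequency k/N settles toward p as N grows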
2.6.2 Distributions
2.6.2.1 The Geometric Distribution (Emergent Via Maxent)
Here, we talk of the probability of seeing something for the first time after k tries, when the probability of seeing that event at each try is “p.” If we see the event for the first time on try k, that means the first (k − 1) tries were nonevents (with probability (1 − p) for each try), and the final observation then occurs with probability p, giving rise to the classic formula for the geometric distribution:

P(X = k) = (1 − p)^(k−1) p
Figure: The geometric distribution, P(X = k) = (1 − p)^(k−1) p, with p = 0.8.
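A short sketch (ours, not the book's code; p = 0.8 as in the figure) that checks this formula against simulated first-occurrence times:

import random

def geom_pmf(k, p):
    # P(X = k) = (1 - p)**(k - 1) * p: (k - 1) nonevents, then the event
    return (1 - p) ** (k - 1) * p

def first_success(p):
    # draw tries until the event occurs; return the try index k
    k = 1
    while random.random() >= p:
        k += 1
    return k

p = 0.8
draws = [first_success(p) for _ in range(100_000)]
for k in range(1, 6):
    print(k, geom_pmf(k, p), draws.count(k) / len(draws))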
As far as normalization, i.e. whether all outcomes sum to one, we have:

Σ_{k=1}^∞ (1 − p)^(k−1) p = p Σ_{j=0}^∞ (1 − p)^j = p/(1 − (1 − p)) = 1,

using the geometric series Σ_{j=0}^∞ x^j = 1/(1 − x) with x = (1 − p).
So the total probability already sums to one, with no further normalization needed. The figure above shows a geometric distribution for the case where p = 0.8; a quick numerical confirmation follows.
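A minimal check (our sketch, again with p = 0.8) of the partial sum against the closed form:

p = 0.8
partial = sum((1 - p) ** (k - 1) * p for k in range(1, 51))
print(partial)            # partial sum through k = 50, already ~1.0
print(p / (1 - (1 - p)))  # closed form from the geometric series: exactly 1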
2.6.2.2 The Gaussian (aka Normal) Distribution (Emergent Via LLN Relation and Maxent)
For the Normal distribution the normalization is easiest to get via complex integration (so we'll skip that). With mean zero and variance equal to one we get:

f(x) = (1/√(2π)) e^(−x²/2)
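As a rough numerical cross-check (our sketch, not the book's; the integration range [−8, 8] is an assumption that captures essentially all of the mass), a simple Riemann sum shows this density integrates to about one:

import math

def std_normal_pdf(x):
    # standard normal density: mean 0, variance 1
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# Riemann-sum check that the density integrates to ~1 over [-8, 8]
h = 0.001
area = h * sum(std_normal_pdf(-8.0 + i * h) for i in range(16001))
print(area)  # ~1.0 (tails beyond |x| = 8 are negligible)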