
The Shannon entropy of a discrete probability distribution is a measure of its amount of randomness, with the uniform probability distribution having the greatest randomness (i.e. it is most lacking in any statistical “structure” or “information”). Shannon entropy is the sum of each outcome probability times its log probability, with an overall negative sign placed in front so that the resulting value is positive. Further details on the mathematical formalism will be given in a later section, but for now we can implement this in our first Python program:
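In symbols (a standard formulation; the choice of log base only sets the units, with the natural log giving nats and base 2 giving bits):

H(p) = -∑_{i=1}^{n} p_i log(p_i)

For the uniform distribution, p_i = 1/n for every outcome, this attains its maximum value H = log(n).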

-------------------------- prog1.py -------------------------
#!/usr/bin/python
import numpy as np
import math

# Outcome probabilities: five outcomes at 1/10 and one at 1/2 (sums to 1.0)
arr = np.array([1.0/10, 1.0/10, 1.0/10, 1.0/10, 1.0/10, 1.0/2])
# print(arr[0])
shannon_entropy = 0
numterms = len(arr)
print(numterms)
# Accumulate the sum of p*log(p) over all outcomes
for index in range(0, numterms):
    shannon_entropy += arr[index]*math.log(arr[index])
# Negate the sum so the entropy comes out positive
shannon_entropy = -shannon_entropy
print(shannon_entropy)
----------------------- end prog1.py ------------------------
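As a quick cross-check (a sketch of my own, not part of the book; the filename check1.py is made up), the same quantity can be computed in a single vectorized NumPy expression. Since math.log and np.log are both natural logs, the result is in nats: about 1.4979 for this distribution, below the uniform-distribution maximum ln(6) ≈ 1.7918, as the paragraph above predicts.

-------------------------- check1.py -------------------------
#!/usr/bin/python
import numpy as np

# Same distribution as prog1.py: five outcomes at 1/10 and one at 1/2
p = np.array([1.0/10]*5 + [1.0/2])

# Vectorized Shannon entropy in nats (natural log)
entropy = -np.sum(p * np.log(p))
print(entropy)          # ~1.4979
print(np.log(len(p)))   # uniform maximum: ln(6) ~ 1.7918
----------------------- end check1.py ------------------------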
