Informatics and Machine Learning: From Martingales to Metaheuristics
All of the core methods described thus far (FSA, HMM, SVM) require some amount of parameter "tuning" for good performance. In essence, tuning is a search through the method's parameter space for the best performance according to a chosen set of metrics. Tuning the acquisition parameters of an FSA, the choice of states in an HMM, or the SVM kernels and kernel parameters is often not terribly complicated, allowing a "brute‐force" search over a set of candidate parameters and choosing the best from that set. On occasion, however, a more elaborate, fully automated search optimization is needed (or a search problem must be solved in general). For more complex search tasks it helps to know the modern search methodologies and what they are capable of, so these are described in ssss1.
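The brute-force tuning described above can be sketched as a simple grid search. This is a minimal illustration, not the book's own tooling: the parameter names (`kernel`, `C`) and the scoring function are hypothetical stand-ins, and in practice `evaluate()` would train and validate the actual FSA/HMM/SVM on held-out data.

```python
from itertools import product

# Hypothetical stand-in for a validation score at a given parameter
# setting; a real version would train and evaluate the model itself.
def evaluate(kernel, c):
    # toy score: peaks at the rbf kernel with C near 1.0
    base = {"linear": 0.80, "rbf": 0.85, "poly": 0.78}[kernel]
    return base - 0.01 * abs(c - 1.0)

# the finite set of candidate parameters to search over
param_grid = {"kernel": ["linear", "rbf", "poly"],
              "C": [0.1, 1.0, 10.0]}

# brute force: enumerate every combination and keep the best scorer
combos = [dict(zip(param_grid, values))
          for values in product(*param_grid.values())]
best = max(combos, key=lambda p: evaluate(p["kernel"], p["C"]))
print(best)
```

The grid grows multiplicatively with each added parameter, which is exactly why the more elaborate search metaheuristics mentioned above become attractive for complex tuning tasks.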
1.9 Stochastic Sequential Analysis (SSA) Protocol (Deep Learning Without NNs)
The SSA protocol is shown in ssss1 (from prior publications and patent work, see [1–3]). It is a general signal‐processing flow topology and database schema (Left Panel), with specialized variants for CCC (Center) and for kinetic feature extraction based on blockade‐level duration observations (Right). The SSA protocol allows the discovery, characterization, and classification of localizable, approximately stationary, statistical signal structures in channel current data, genomic data, or sequential data in general. The core signal‐processing stage in ssss1 is usually the feature extraction stage, where a generalized HMM is central to the protocol. The SSA protocol also has a built‐in recovery protocol for weak‐signal handling, outlined next, in which the HMM methods are complemented by the strengths of other ML methods.
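The flow just described, acquisition, HMM-based feature extraction, classification, with a fallback branch for weak signals, can be caricatured as a pipeline skeleton. Everything here is a hypothetical placeholder: the stage names, the "mean blockade level" and "duration" features, and the threshold-based weak-signal recovery are illustrative assumptions, not the published SSA implementation.

```python
# Hypothetical sketch of the SSA flow topology (not the actual protocol).
def acquire(raw):
    # stand-in for FSA-based signal acquisition: drop missing samples
    return [x for x in raw if x is not None]

def hmm_features(signal):
    # stand-in for generalized-HMM feature extraction: here just the
    # mean blockade level and the dwell duration (sample count)
    return {"mean_level": sum(signal) / len(signal),
            "duration": len(signal)}

def classify(features, threshold=0.5):
    # stand-in classifier over the extracted features
    return "strong" if features["mean_level"] >= threshold else "weak"

def ssa_pipeline(raw):
    signal = acquire(raw)
    feats = hmm_features(signal)
    label = classify(feats)
    if label == "weak":
        # weak-signal recovery branch: re-examine with a relaxed
        # threshold (in the real protocol, other ML methods assist)
        label = classify(feats, threshold=0.3)
    return feats, label

feats, label = ssa_pipeline([0.6, None, 0.7, 0.65])
```

The point of the sketch is the topology: feature extraction sits at the center, and weak signals are not discarded but routed through a recovery stage where complementary methods can be brought to bear.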