The terms 'stream separation' and 'stream segregation' are used here interchangeably, meaning roughly the same as polyphonic voice separation or voice following.

The core algorithm is based on ML clustering techniques, using 'single-link agglomerative clustering' to group individual notes into streams of voices. Overlapping notes are clustered into separate streams; each note belongs to exactly one stream. A dynamic variable *overlap-tolerance* holds a factor (0-1.0) controlling how forgiving the algorithm is with respect to what counts as simultaneous.

The feature vectors compared are built from the pitch and time information in the input data, together with a function measuring distances between vectors. The default distance function - #'euclidian-distance - is a Euclidean metric which compares pitches on a mel frequency scale, bringing distances closer to what is considered perceptually significant, with time represented in milliseconds. Adjustable weightings for pitch and time allow tuning of the distance measurements.

Typical input is a chord-seq structure. Output is a list of chord-seqs, one per voice/stream found.

CPU time and memory consumption grow steeply with larger input data sets. If this becomes a problem, select subsets of the data to work on, or segment the data first and apply the stream-separation analysis per segment. For this purpose a subclass of 'analysis' - 'stream-seg' - is provided, supporting interactive work with stream segregation within the segmentation bounds of the segmented data. Functions are provided to return the output of this analysis either as a list of lists of chord-seqs, or as the whole sequence of analysed segments concatenated into one multi-seq.

Anders Vinjar - April 2017
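
To make the procedure concrete, below is a minimal, self-contained Common Lisp sketch of the two central ingredients: the weighted pitch/time distance and single-link agglomerative clustering constrained so that simultaneous notes land in separate streams. This is an illustration only, not the library's actual code: the note representation, the helpers (make-note, midi->hz, hz->mel, simultaneous-p, cluster-distance, mergeable-p, separate-streams), the weights *pitch-weight* and *time-weight*, and the stopping criterion (a requested number of streams) are all assumptions; only the names *overlap-tolerance* and euclidian-distance are taken from the text above.

  ;; Sketch only - NOT the library's implementation.  A note is
  ;; represented as a plist: MIDI pitch, onset and duration in ms.
  (defun make-note (pitch onset dur)
    (list :pitch pitch :onset onset :dur dur))

  ;; MIDI pitch -> Hz (A4 = 69 = 440 Hz), then Hz -> mel
  ;; (m = 2595 * log10(1 + f/700)).
  (defun midi->hz (pitch)
    (* 440.0 (expt 2.0 (/ (- pitch 69) 12.0))))

  (defun hz->mel (f)
    (* 2595.0 (log (+ 1.0 (/ f 700.0)) 10.0)))

  ;; Adjustable weightings for the pitch and time dimensions
  ;; (hypothetical names).
  (defparameter *pitch-weight* 1.0)
  (defparameter *time-weight* 1.0)

  ;; Weighted Euclidean distance over (mel pitch, onset in ms).
  (defun euclidian-distance (a b)
    (let ((dp (* *pitch-weight*
                 (- (hz->mel (midi->hz (getf a :pitch)))
                    (hz->mel (midi->hz (getf b :pitch))))))
          (dt (* *time-weight*
                 (- (getf a :onset) (getf b :onset)))))
      (sqrt (+ (* dp dp) (* dt dt)))))

  ;; 0-1.0: how large a fraction of the shorter note's duration two
  ;; notes may overlap before counting as simultaneous -- one
  ;; possible reading of the tolerance factor described above.
  (defparameter *overlap-tolerance* 0.0)

  (defun simultaneous-p (a b)
    (let ((overlap (- (min (+ (getf a :onset) (getf a :dur))
                           (+ (getf b :onset) (getf b :dur)))
                      (max (getf a :onset) (getf b :onset))))
          (shortest (min (getf a :dur) (getf b :dur))))
      (> overlap (* *overlap-tolerance* shortest))))

  ;; Single-link distance between clusters: minimum pairwise distance.
  (defun cluster-distance (c1 c2)
    (loop for a in c1
          minimize (loop for b in c2
                         minimize (euclidian-distance a b))))

  ;; Merging is blocked when any pair of notes across the two
  ;; clusters counts as simultaneous.
  (defun mergeable-p (c1 c2)
    (notany (lambda (a)
              (some (lambda (b) (simultaneous-p a b)) c2))
            c1))

  ;; Start from singleton clusters and repeatedly merge the closest
  ;; mergeable pair until the requested number of streams remains,
  ;; or no further merge is possible.
  (defun separate-streams (notes n-streams)
    (let ((clusters (mapcar #'list notes)))
      (loop while (> (length clusters) n-streams)
            do (let ((best most-positive-double-float)
                     (best-pair nil))
                 (loop for (c1 . rest) on clusters
                       do (loop for c2 in rest
                                for d = (cluster-distance c1 c2)
                                when (and (mergeable-p c1 c2) (< d best))
                                  do (setf best d
                                           best-pair (list c1 c2))))
                 (if best-pair
                     (setf clusters
                           (cons (append (first best-pair) (second best-pair))
                                 (remove (first best-pair)
                                         (remove (second best-pair) clusters))))
                     (return))))
      clusters))

For example, (separate-streams (list (make-note 60 0 500) (make-note 64 0 500) (make-note 62 600 500) (make-note 65 600 500)) 2) yields two streams, (60 62) and (64 65): the two notes at each onset are simultaneous and can never be merged, so single-link clustering pairs each note with its nearest non-overlapping neighbour. The pairwise distance matrix implicit in cluster-distance is also why cost grows steeply with input size, as noted above.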