code: string
signature: string
docstring: string
loss_without_docstring: float64
loss_with_docstring: float64
factor: float64
return all(filt.is_causal() for filt in self.callables if hasattr(filt, "is_causal"))
def is_causal(self)
Tests whether all filters in the list are causal (i.e., no future-data delay in positive ``z`` exponents). Non-linear filters are seen as causal by default. CascadeFilter and ParallelFilter are causal if all the filters they group are causal.
10.521798
8.086097
1.301221
if Hz is None:
  if freq < 7: # Perhaps user tried something up to 2 * pi
    raise ValueError("Frequency out of range.")
  Hz = 1
fHz = freq / Hz
result = 6.23e-6 * fHz ** 2 + 93.39e-3 * fHz + 28.52
return result * Hz
def erb(freq, Hz=None)
``B. C. J. Moore and B. R. Glasberg, "Suggested formulae for calculating auditory filter bandwidths and excitation patterns". J. Acoust. Soc. Am., 74, 1983, pp. 750-753.``
8.161998
8.781127
0.929493
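A minimal standalone check of the ERB polynomial in this record (the ``erb_hz`` name is hypothetical; the library version also handles a ``Hz`` unit constant, omitted here):

from math import nan  # Only the polynomial itself is needed below

def erb_hz(freq_hz):
  # 1983 Glasberg & Moore ERB polynomial from the record above (freq in Hz)
  return 6.23e-6 * freq_hz ** 2 + 93.39e-3 * freq_hz + 28.52

print(round(erb_hz(1000.), 2)) # 128.14; times 1.019 this gives the 130.52
                               # bandwidth quoted in the next record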
tnt = 2 * n - 2
return (factorial(n - 1) ** 2 / (pi * factorial(tnt) * 2 ** -tnt),
        2 * (2 ** (1. / n) - 1) ** .5)
def gammatone_erb_constants(n)
Constants for using the real bandwidth in the gammatone filter, given its order. Returns a pair :math:`(x, y) = (1/a_n, c_n)`.

Based on equations from:

  ``Holdsworth, J.; Patterson, R.; Nimmo-Smith, I.; Rice, P. Implementing a GammaTone Filter Bank. In: SVOS Final Report, Annex C, Part A: The Auditory Filter Bank. 1988.``

First returned value is a bandwidth compensation for direct use in the gammatone formula:

>>> x, y = gammatone_erb_constants(4)
>>> central_frequency = 1000
>>> round(x, 3)
1.019
>>> bandwidth = x * erb["moore_glasberg_83"](central_frequency)
>>> round(bandwidth, 2)
130.52

Second returned value helps us find the ``3 dB`` bandwidth as:

>>> x, y = gammatone_erb_constants(4)
>>> central_frequency = 1000
>>> bandwidth3dB = x * y * erb["moore_glasberg_83"](central_frequency)
>>> round(bandwidth3dB, 2)
113.55
7.660928
9.234182
0.829627
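The pair of constants can be checked standalone with only the math module, matching the doctest values in the record above:

from math import factorial, pi

def gammatone_erb_constants(n):
  # Mirrors the record above: returns (1/a_n, c_n), Holdsworth et al. (1988)
  tnt = 2 * n - 2
  return (factorial(n - 1) ** 2 / (pi * factorial(tnt) * 2 ** -tnt),
          2 * (2 ** (1. / n) - 1) ** .5)

x, y = gammatone_erb_constants(4)
print(round(x, 3)) # 1.019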
assert eta >= 1

A = exp(-bandwidth)
numerator = cos(phase) - A * cos(freq - phase) * z ** -1
denominator = 1 - 2 * A * cos(freq) * z ** -1 + A ** 2 * z ** -2
filt = (numerator / denominator).diff(n=eta-1, mul_after=-z)

# Filter is done, but the denominator might have some numeric loss
f0 = ZFilter(filt.numpoly) / denominator
f0 /= abs(f0.freq_response(freq)) # Max gain == 1.0 (0 dB)
fn = 1 / denominator
fn /= abs(fn.freq_response(freq))
return CascadeFilter([f0] + [fn] * (eta - 1))
def gammatone(freq, bandwidth, phase=0, eta=4)
``Bellini, D. J. S. "AudioLazy: Processamento digital de sinais expressivo e em tempo real", IME-USP, Mastership Thesis, 2013.``

This implementation has the impulse response (for each sample ``n``, keeping the input parameter names):

.. math:: n^{eta - 1} e^{- bandwidth \cdot n} \cos(freq \cdot n + phase)
10.511024
11.259243
0.933546
A = exp(-bandwidth)
cosw = cos(freq)
sinw = sin(freq)
sig = [1., -1.]
coeff = [cosw + s1 * (sqrt(2) + s2) * sinw for s1 in sig for s2 in sig]
numerator = [1 - A * c * z ** -1 for c in coeff]
denominator = 1 - 2 * A * cosw * z ** -1 + A ** 2 * z ** -2
filt = CascadeFilter(num / denominator for num in numerator)
return CascadeFilter(f / abs(f.freq_response(freq)) for f in filt)
def gammatone(freq, bandwidth)
``Slaney, M. "An Efficient Implementation of the Patterson-Holdsworth Auditory Filter Bank", Apple Computer Technical Report #35, 1993.``
6.590384
6.710982
0.98203
bw = thub(bandwidth, 1)
bw2 = thub(bw * 2, 4)
freq = thub(freq, 4)
resons = [resonator.z_exp, resonator.poles_exp] * 2
return CascadeFilter(reson(freq, bw2) for reson in resons)
def gammatone(freq, bandwidth)
``A. Klapuri, "Multipitch Analysis of Polyphonic Music and Speech Signals Using an Auditory Model". IEEE Transactions on Audio, Speech and Language Processing, vol. 16, no. 2, 2008, pp. 255-266.``
11.232695
11.760141
0.95515
from scipy.interpolate import UnivariateSpline

table = phon2dB.iso226.table
schema = phon2dB.iso226.schema
freqs = [row[schema.index("freq")] for row in table]

if loudness is None: # Threshold levels
  spl = [row[schema.index("threshold")] for row in table]
else: # Curve for a specific phon value
  def get_pressure_level(freq, alpha, loudness_base, threshold):
    return 10 / alpha * math.log10(
      4.47e-3 * (10 ** (.025 * loudness) - 1.14)
      + (.4 * 10 ** ((threshold + loudness_base) / 10 - 9)) ** alpha
    ) - loudness_base + 94
  spl = [get_pressure_level(**dict(xzip(schema, args))) for args in table]

interpolator = UnivariateSpline(freqs, spl, s=0)
interpolator_low = UnivariateSpline([-30] + freqs, [1e3] + spl, s=0)
interpolator_high = UnivariateSpline(freqs + [32000], spl + [1e3], s=0)

@elementwise("freq", 0)
def freq2dB_spl(freq):
  if freq < 20:
    return interpolator_low(freq).tolist()
  if freq > 12500:
    return interpolator_high(freq).tolist()
  return interpolator(freq).tolist()

return freq2dB_spl
def phon2dB(loudness=None)
Loudness in phons to Sound Pressure Level (SPL) in dB using the ISO/FDIS 226:2003 model.

This function needs Scipy, as ``scipy.interpolate.UnivariateSpline`` objects are used as interpolators.

Parameters
----------
loudness :
  The loudness value in phons to be converted, or None (default) to get the threshold of hearing.

Returns
-------
A callable that returns the SPL dB value for each given frequency in hertz.

Note
----
See ``phon2dB.iso226.schema`` and ``phon2dB.iso226.table`` to know the original frequencies used for the result. The result for any other value is an interpolation (spline). Don't trust values lower or higher than the frequency limits there (20 Hz and 12.5 kHz), as they're not part of ISO 226 and no value was collected to estimate them (they're just a spline interpolation to reach 1000 dB at -30 Hz and 32 kHz). Likewise, the trustworthy loudness input range is from 20 to 90 phon, as written in ISO 226; other values aren't found by a spline interpolation, though, but by using the formula on section 4.1 of ISO 226.

Hint
----
The ``phon2dB.iso226.table`` also has other useful information, such as the threshold values in SPL dB.
4.941912
4.041714
1.222727
for wnd_dict in window._content_generation_table:
  names = wnd_dict["names"]
  sname = wnd_dict["sname"] = names[0]
  wnd_dict.setdefault("params_def", "")
  for sdict in [window, wsymm]:
    docs_dict = window._doc_kwargs(symm = sdict is wsymm, **wnd_dict)
    decorators = [format_docstring(**docs_dict), sdict.strategy(*names)]
    ns = dict(pi=pi, sin=sin, cos=cos, xrange=xrange, __name__=__name__)
    exec(sdict._code_template.format(**wnd_dict), ns, ns)
    reduce(lambda func, dec: dec(func), decorators, ns[sname])
    if not wnd_dict.get("distinct", True):
      wsymm[sname] = window[sname]
      break
  wsymm[sname].periodic = window[sname].periodic = window[sname]
  wsymm[sname].symm = window[sname].symm = wsymm[sname]
def _generate_window_strategies()
Create all window and wsymm strategies
6.672225
5.850282
1.140496
if max_lag is None:
  max_lag = len(blk) - 1
return [sum(blk[n] * blk[n + tau] for n in xrange(len(blk) - tau))
        for tau in xrange(max_lag + 1)]
def acorr(blk, max_lag=None)
Calculate the autocorrelation of a given 1-D block sequence.

Parameters
----------
blk :
  An iterable with well-defined length. Don't use this function with Stream objects!
max_lag :
  The size of the result, the lags you'd need. Defaults to ``len(blk) - 1``, since any lag beyond would result in zero.

Returns
-------
A list with lags from 0 up to max_lag, where its ``i``-th element has the autocorrelation for a lag equal to ``i``. Be careful with negative lags! You should use abs(lag) indexes when working with them.

Examples
--------
>>> seq = [1, 2, 3, 4, 3, 4, 2]
>>> acorr(seq) # Default max_lag is len(seq) - 1
[59, 52, 42, 30, 17, 8, 2]
>>> acorr(seq, 9) # Zeros at the end
[59, 52, 42, 30, 17, 8, 2, 0, 0, 0]
>>> len(acorr(seq, 3)) # Resulting length is max_lag + 1
4
>>> acorr(seq, 3)
[59, 52, 42, 30]
2.653128
3.334743
0.795602
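A self-contained sketch of ``acorr`` (``range`` replaces the Python 2 ``xrange`` used in the record), reproducing its doctest values:

def acorr(blk, max_lag=None):
  # Autocorrelation: sum of products between the block and itself lagged
  if max_lag is None:
    max_lag = len(blk) - 1
  return [sum(blk[n] * blk[n + tau] for n in range(len(blk) - tau))
          for tau in range(max_lag + 1)]

seq = [1, 2, 3, 4, 3, 4, 2]
print(acorr(seq))    # [59, 52, 42, 30, 17, 8, 2]
print(acorr(seq, 9)) # Lags beyond len(seq) - 1 yield empty sums, i.e. zeros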
if max_lag is None:
  max_lag = len(blk) - 1
elif max_lag >= len(blk):
  raise ValueError("Block length should be higher than order")
return [[sum(blk[n - i] * blk[n - j] for n in xrange(max_lag, len(blk)))
         for i in xrange(max_lag + 1)]
        for j in xrange(max_lag + 1)]
def lag_matrix(blk, max_lag=None)
Finds the lag matrix for a given 1-D block sequence.

Parameters
----------
blk :
  An iterable with well-defined length. Don't use this function with Stream objects!
max_lag :
  The size of the result, the lags you'd need. Defaults to ``len(blk) - 1``, the maximum lag that doesn't create fully zeroed matrices.

Returns
-------
The covariance matrix as a list of lists. Each cell (i, j) contains the sum of ``blk[n - i] * blk[n - j]`` elements for every ``n`` that allows such computation without padding the given block.
3.288355
2.957054
1.112037
dft_data = (sum(xn * cexp(-1j * n * f) for n, xn in enumerate(blk))
            for f in freqs)
if normalize:
  lblk = len(blk)
  return [v / lblk for v in dft_data]
return list(dft_data)
def dft(blk, freqs, normalize=True)
Complex non-optimized Discrete Fourier Transform.

Finds the DFT for values in a given frequency list, in order, over the data block seen as periodic.

Parameters
----------
blk :
  An iterable with well-defined length. Don't use this function with Stream objects!
freqs :
  List of frequencies to find the DFT, in rad/sample. FFT implementations like numpy.fft.fft find the coefficients for N frequencies equally spaced as ``line(N, 0, 2 * pi, finish=False)``.
normalize :
  If True (default), the coefficient sums are divided by ``len(blk)``, and the coefficient for the DC level (frequency equals to zero) is the mean of the block. If False, that coefficient would be the sum of the data in the block.

Returns
-------
A list of DFT values for each frequency, in the same order that they appear in the freqs input.

Note
----
This isn't an FFT implementation, and performs :math:`O(M \cdot N)` floating point operations, with :math:`M` and :math:`N` equal to the lengths of the inputs. This function can find the DFT for any specific frequency, with no need for zero padding or for finding all frequencies in a linearly spaced band grid with N frequency bins at once.
5.417768
6.711593
0.807225
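A standalone sketch of the naive DFT above, using ``cmath`` for the complex exponential (``cexp`` in the record):

from cmath import exp as cexp

def dft(blk, freqs, normalize=True):
  # O(M*N) DFT evaluated at arbitrary frequencies, in rad/sample
  data = (sum(xn * cexp(-1j * n * f) for n, xn in enumerate(blk))
          for f in freqs)
  return [v / len(blk) for v in data] if normalize else list(data)

blk = [1., 2., 3., 2.]
print(dft(blk, [0.])) # [(2+0j)]: the normalized DC coefficient is the mean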
neg_hyst = -hysteresis
seq_iter = iter(seq)

# Gets the first sign
if first_sign == 0:
  last_sign = 0
  for el in seq_iter:
    yield 0
    if (el > hysteresis) or (el < neg_hyst): # Ignores hysteresis region
      last_sign = -1 if el < 0 else 1 # Define the first sign
      break
else:
  last_sign = -1 if first_sign < 0 else 1

# Finds the full zero-crossing sequence
for el in seq_iter: # Keep the same iterator (needed for non-generators)
  if el * last_sign < neg_hyst:
    last_sign = -1 if el < 0 else 1
    yield 1
  else:
    yield 0
def zcross(seq, hysteresis=0, first_sign=0)
Zero-crossing stream.

Parameters
----------
seq :
  Any iterable to be used as input for the zero crossing analysis.
hysteresis :
  Crossing exactly zero might happen many times too fast due to high frequency oscillations near zero. To avoid this, you can make two threshold limits for the zero crossing detection: ``hysteresis`` and ``-hysteresis``. Defaults to zero (0), which means no hysteresis and only one threshold.
first_sign :
  Optional argument with the sign memory from past. Gets the sign from any signed number. Defaults to zero (0), which means "any", and the first sign will be the first one found in data.

Returns
-------
A Stream instance that outputs 1 for each crossing detected, 0 otherwise.
3.913251
3.925067
0.99699
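A compact generator sketch of the same zero-crossing logic, folding the first-sign search into the main loop:

def zcross(seq, hysteresis=0, first_sign=0):
  # Yields 1 on each sign change that escapes the +/- hysteresis band
  last_sign = -1 if first_sign < 0 else 1 if first_sign > 0 else 0
  for el in seq:
    if last_sign == 0: # Still looking for the first sign
      yield 0
      if abs(el) > hysteresis:
        last_sign = -1 if el < 0 else 1
    elif el * last_sign < -hysteresis:
      last_sign = -1 if el < 0 else 1
      yield 1
    else:
      yield 0

print(list(zcross([.1, -.2, .3, -.4], hysteresis=.15))) # [0, 0, 1, 1]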
size_inv = 1. / size

@tostream
def maverage_filter(sig, zero=0.):
  data = deque((zero * size_inv for _ in xrange(size)), maxlen=size)
  mean_value = zero
  for el in sig:
    mean_value -= data.popleft()
    new_value = el * size_inv
    data.append(new_value)
    mean_value += new_value
    yield mean_value

return maverage_filter
def maverage(size)
Moving average.

This is the only strategy that uses a ``collections.deque`` object instead of a ZFilter instance. Fast, but without extra capabilities such as a frequency response plotting method.

Parameters
----------
size :
  Data block window size. Should be an integer.

Returns
-------
A callable that accepts two parameters: a signal ``sig`` and the starting memory element ``zero`` that behaves like the ``LinearFilter.__call__`` arguments. The output from that callable is a Stream instance, and has no decimation applied.

See Also
--------
envelope :
  Signal envelope (time domain) strategies.
4.769988
5.189841
0.919101
if low is None:
  if high is None:
    return Stream(sig)
  return Stream(el if el < high else high for el in sig)
if high is None:
  return Stream(el if el > low else low for el in sig)
if high < low:
  raise ValueError("Higher clipping limit is smaller than lower one")
return Stream(high if el > high else (low if el < low else el) for el in sig)
def clip(sig, low=-1., high=1.)
Clips the signal up to both a lower and a higher limit.

Parameters
----------
sig :
  The signal to be clipped, be it a Stream instance, a list or any iterable.
low, high :
  Lower and higher clipping limit, "saturating" the input to them. Defaults to -1.0 and 1.0, respectively. These can be None when one-sided clipping is needed. When both limits are set to None, the output will be a Stream that yields exactly the ``sig`` input data.

Returns
-------
Clipped signal as a Stream instance.
3.104959
2.668844
1.16341
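An eager list-returning sketch of the clipping rules above (the original returns a lazy Stream):

def clip(sig, low=-1., high=1.):
  # None disables the corresponding one-sided limit, as in the record
  if low is not None and high is not None and high < low:
    raise ValueError("Higher clipping limit is smaller than lower one")
  lo = lambda el: el if low is None or el > low else low
  hi = lambda el: el if high is None or el < high else high
  return [hi(lo(el)) for el in sig]

print(clip([-2., -.5, 0., .5, 2.]))        # [-1.0, -0.5, 0.0, 0.5, 1.0]
print(clip([-2., 2.], low=None, high=1.5)) # [-2.0, 1.5]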
idata = iter(sig)
d0 = next(idata)
yield d0
delta = d0 - d0 # Get the zero (e.g., integer, float) from data
for d1 in idata:
  d_diff = d1 - d0
  if abs(d_diff) > max_delta:
    delta += - d_diff + min((d_diff) % step, (d_diff) % -step,
                            key=lambda x: abs(x))
  yield d1 + delta
  d0 = d1
def unwrap(sig, max_delta=pi, step=2*pi)
Parametrized signal unwrapping.

Parameters
----------
sig :
  An iterable seen as an input signal.
max_delta :
  Maximum value of :math:`\Delta = sig_i - sig_{i-1}` to keep output without another minimizing step change. Defaults to :math:`\pi`.
step :
  The change in order to minimize the delta is an integer multiple of this value. Defaults to :math:`2 \pi`.

Returns
-------
The signal unwrapped as a Stream, minimizing the step difference when any adjacency step in the input signal is higher than ``max_delta`` by summing/subtracting ``step``.
5.39417
5.778495
0.93349
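The unwrapping step can be exercised standalone; this sketch keeps the record's logic, only as plain Python:

from math import pi

def unwrap(sig, max_delta=pi, step=2 * pi):
  idata = iter(sig)
  d0 = next(idata)
  yield d0
  delta = d0 - d0 # Get the zero (e.g., integer, float) from data
  for d1 in idata:
    d_diff = d1 - d0
    if abs(d_diff) > max_delta:
      # Shift by the multiple of step that minimizes the jump
      delta += -d_diff + min(d_diff % step, d_diff % -step, key=abs)
    yield d1 + delta
    d0 = d1

data = [0., .5, 1., 3. - 2 * pi, 3.5 - 2 * pi]
print([round(x, 3) for x in unwrap(data)]) # [0.0, 0.5, 1.0, 3.0, 3.5]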
filt = (1 - z ** -lag).linearize()

@tostream
def amdf_filter(sig, zero=0.):
  return maverage(size)(abs(filt(sig, zero=zero)), zero=zero)

return amdf_filter
def amdf(lag, size)
Average Magnitude Difference Function non-linear filter for a given size and a fixed lag.

Parameters
----------
lag :
  Time lag, in samples. See ``freq2lag`` if you need conversion from frequency values.
size :
  Moving average size.

Returns
-------
A callable that accepts two parameters: a signal ``sig`` and the starting memory element ``zero`` that behaves like the ``LinearFilter.__call__`` arguments. The output from that callable is a Stream instance, and has no decimation applied.

See Also
--------
freq2lag :
  Frequency (in rad/sample) to lag (in samples) converter.
19.627527
20.576294
0.95389
import numpy as np

# Finds the size from data, if needed
if size is None:
  blk_sig = Stream(blk_sig)
  size = len(blk_sig.peek())
if hop is None:
  hop = size

# Find the right windowing function to be applied
if wnd is None:
  wnd = np.ones(size)
elif callable(wnd) and not isinstance(wnd, Stream):
  wnd = wnd(size)
if isinstance(wnd, Sequence):
  wnd = np.array(wnd)
elif isinstance(wnd, Iterable):
  wnd = np.hstack(wnd)
else:
  raise TypeError("Window should be an iterable or a callable")

# Normalization to the [-1; 1] range
if normalize:
  steps = Stream(wnd).blocks(hop).map(np.array)
  gain = np.sum(np.abs(np.vstack(steps)), 0).max()
  if gain: # If gain is zero, normalization couldn't have any effect
    wnd = wnd / gain # Can't use "/=" nor "*=" as Numpy would keep datatype

# Overlap-add algorithm
old = np.zeros(size)
for blk in (wnd * blk for blk in blk_sig):
  blk[:-hop] += old[hop:]
  for el in blk[:hop]:
    yield el
  old = blk
for el in old[hop:]: # No more blocks, finish yielding the last one
  yield el
def overlap_add(blk_sig, size=None, hop=None, wnd=None, normalize=True)
Overlap-add algorithm using Numpy arrays.

Parameters
----------
blk_sig :
  An iterable of blocks (sequences), such as the ``Stream.blocks`` result.
size :
  Block size for each ``blk_sig`` element, in samples.
hop :
  Number of samples between the start of two adjacent blocks (defaults to the size).
wnd :
  Windowing function to be applied to each block or any iterable with exactly ``size`` elements. If ``None`` (default), applies a rectangular window.
normalize :
  Flag whether the window should be normalized so that the process could happen in the [-1; 1] range, dividing the window by its hop gain. Default is ``True``.

Returns
-------
A Stream instance with the blocks overlapped and added.

See Also
--------
Stream.blocks :
  Splits the Stream instance into blocks with given size and hop.
blocks :
  Same as Stream.blocks, but without using the Stream class.
chain :
  Lazily joins all iterables given as parameters.
chain.from_iterable :
  Same as ``chain(*data)``, but the ``data`` evaluation is lazy.
window :
  Window/apodization/tapering functions for a given size as a StrategyDict.

Note
----
Each block has the window function applied to it and the result is the sum of the blocks without any edge-case special treatment for the first and last few blocks.
5.367492
4.575638
1.173059
# Finds the size from data, if needed
if size is None:
  blk_sig = Stream(blk_sig)
  size = len(blk_sig.peek())
if hop is None:
  hop = size

# Find the window to be applied, resulting on a list or None
if wnd is not None:
  if callable(wnd) and not isinstance(wnd, Stream):
    wnd = wnd(size)
  if isinstance(wnd, Iterable):
    wnd = list(wnd)
  else:
    raise TypeError("Window should be an iterable or a callable")

# Normalization to the [-1; 1] range
if normalize:
  if wnd:
    steps = Stream(wnd).map(abs).blocks(hop).map(tuple)
    gain = max(xmap(sum, xzip(*steps)))
    if gain: # If gain is zero, normalization couldn't have any effect
      wnd[:] = (w / gain for w in wnd)
  else:
    wnd = [1 / ceil(size / hop)] * size

# Window application
if wnd:
  mul = operator.mul
  if len(wnd) != size:
    raise ValueError("Incompatible window size")
  wnd = wnd + [0.] # Allows detecting when block size is wrong
  blk_sig = (xmap(mul, wnd, blk) for blk in blk_sig)

# Overlap-add algorithm
add = operator.add
mem = [0.] * size
s_h = size - hop
for blk in xmap(iter, blk_sig):
  mem[:s_h] = xmap(add, mem[hop:], blk)
  mem[s_h:] = blk # Remaining elements
  if len(mem) != size:
    raise ValueError("Wrong block size or declared")
  for el in mem[:hop]:
    yield el
for el in mem[hop:]: # No more blocks, finish yielding the last one
  yield el
def overlap_add(blk_sig, size=None, hop=None, wnd=None, normalize=True)
Overlap-add algorithm using lists instead of Numpy arrays. The behavior is the same as the ``overlap_add.numpy`` strategy, apart from the data types.
5.587267
5.45976
1.023354
from numpy.fft import fft, ifft
return stft.base(transform=fft, inverse_transform=ifft)(func, **kwparams)
def stft(func=None, **kwparams)
Short Time Fourier Transform for complex data.

Same as the default STFT strategy, but with new defaults. This is the same as:

.. code-block:: python

  stft.base(transform=numpy.fft.fft, inverse_transform=numpy.fft.ifft)

See ``stft.base`` docs for more.
7.748755
3.643112
2.12696
from numpy.fft import fft, ifft
ifft_r = lambda *args: ifft(*args).real
return stft.base(transform=fft, inverse_transform=ifft_r)(func, **kwparams)
def stft(func=None, **kwparams)
Short Time Fourier Transform for real data keeping the full FFT block.

Same as the default STFT strategy, but with new defaults. This is the same as:

.. code-block:: python

  stft.base(transform=numpy.fft.fft,
            inverse_transform=lambda *args: numpy.fft.ifft(*args).real)

See ``stft.base`` docs for more.
6.702073
3.693483
1.814567
if size is None:
  size = chunks.size
dfmt = str(size) + dfmt
if byte_order is None:
  struct_string = dfmt
else:
  struct_string = byte_order + dfmt
s = struct.Struct(struct_string)
for block in blocks(seq, size, padval=padval):
  yield s.pack(*block)
def chunks(seq, size=None, dfmt="f", byte_order=None, padval=0.)
Chunk generator based on the struct module (Python standard library).

Low-level data blockenizer for homogeneous data as a generator, to help writing an iterable into a file. The dfmt should be one char, chosen from the ones in:

`<http://docs.python.org/library/struct.html#format-characters>`_

Useful examples (integers are signed; use upper case for unsigned ones):

- "b" for 8 bits (1 byte) integer
- "h" for 16 bits (2 bytes) integer
- "i" for 32 bits (4 bytes) integer
- "f" for 32 bits (4 bytes) float (default)
- "d" for 64 bits (8 bytes) float (double)

Byte order follows native system defaults. Other options are in the site:

`<http://docs.python.org/library/struct.html#struct-alignment>`_

They are:

- "<" means little-endian
- ">" means big-endian

Note
----
Default chunk size can be accessed (and changed) via chunks.size.
3.074431
3.347872
0.918324
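A standalone sketch of the struct-based blockenizer, padding the last block inline instead of relying on the library's ``blocks`` helper:

import struct

def chunks_struct(seq, size, dfmt="f", byte_order=None, padval=0.):
  packer = struct.Struct((byte_order or "") + str(size) + dfmt)
  block = []
  for el in seq:
    block.append(el)
    if len(block) == size:
      yield packer.pack(*block)
      block = []
  if block: # Last (incomplete) block gets padded with padval
    block.extend([padval] * (size - len(block)))
    yield packer.pack(*block)

raw = b"".join(chunks_struct(range(5), size=4))
print(len(raw)) # 32 bytes: two blocks of four 32-bit floats each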
if size is None:
  size = chunks.size
chunk = array.array(dfmt, xrange(size))
idx = 0
for el in seq:
  chunk[idx] = el
  idx += 1
  if idx == size:
    yield chunk.tostring()
    idx = 0
if idx != 0:
  for idx in xrange(idx, size):
    chunk[idx] = padval
  yield chunk.tostring()
def chunks(seq, size=None, dfmt="f", byte_order=None, padval=0.)
Chunk generator based on the array module (Python standard library).

See chunks.struct for more help. This strategy uses array.array (random access by indexing management) instead of struct.Struct and blocks/deque (circular queue appending) from the chunks.struct strategy.

Hint
----
Try each one to find the faster one for your machine, and choose the default one by assigning ``chunks.default = chunks.strategy_name``. It'll be the one used by the AudioIO/AudioThread playing mechanism.

Note
----
The ``dfmt`` symbols for arrays might differ from structs' defaults.
2.826581
3.643667
0.775752
with self.halting: # Avoid simultaneous "close" threads
  if not self.finished: # Ignore all "close" calls, but the first,
    self.finished = True # and any call to play would raise ThreadError

    # Closes all playing AudioThread instances
    while True:
      with self.lock: # Ensure there's no other thread messing around
        try:
          thread = self._threads[0] # Needless to say: pop = deadlock
        except IndexError: # Empty list
          break # No more threads
      if not self.wait:
        thread.stop()
      thread.join()

    # Closes all recording RecStream instances
    while self._recordings:
      recst = self._recordings[-1]
      recst.stop()
      recst.take(inf) # Ensure it'll be closed

    # Finishes
    assert not self._pa._streams # No stream should survive
    self._pa.terminate()
def close(self)
Destructor for this audio interface. Waits for the threads to finish their streams, if desired.
13.536472
13.210835
1.024649
with self.lock:
  if self.finished:
    raise threading.ThreadError("Trying to play an audio stream while "
                                "halting the AudioIO manager object")
  new_thread = AudioThread(self, audio, **kwargs)
  self._threads.append(new_thread)
new_thread.start()
return new_thread
def play(self, audio, **kwargs)
Starts another thread for playing the given audio sample iterable (e.g. a list, a generator, a NumPy np.ndarray with samples). The arguments are used to customize the behaviour of the new thread, as parameters directly sent to PyAudio's new stream opening method; see AudioThread.__init__ for more.
5.43714
4.522187
1.202326
if chunk_size is None:
  chunk_size = chunks.size
if hasattr(self, "api"):
  kwargs.setdefault("input_device_index", self.api["defaultInputDevice"])
channels = kwargs.pop("nchannels", channels) # Backwards compatibility
input_stream = RecStream(self,
                         self._pa.open(format=_STRUCT2PYAUDIO[dfmt],
                                       channels=channels,
                                       rate=rate,
                                       frames_per_buffer=chunk_size,
                                       input=True,
                                       **kwargs),
                         chunk_size,
                         dfmt)
self._recordings.append(input_stream)
return input_stream
def record(self, chunk_size=None, dfmt="f", channels=1, rate=DEFAULT_SAMPLE_RATE, **kwargs)
Records audio from device into a Stream.

Parameters
----------
chunk_size :
  Number of samples per chunk (block sent to device).
dfmt :
  Format, as in chunks(). Default is "f" (Float32).
channels :
  Channels in audio stream (serialized).
rate :
  Sample rate (same input used in sHz).

Returns
-------
Endless Stream instance that gathers data from the audio input device.
4.330538
4.715397
0.918382
# From now on, it's multi-thread. Let the force be with them.
st = self.stream._stream
for chunk in chunks(self.audio,
                    size=self.chunk_size * self.nchannels,
                    dfmt=self.dfmt):
  # Below is a faster way to call:
  #   self.stream.write(chunk, self.chunk_size)
  self.write_stream(st, chunk, self.chunk_size, False)
  if not self.go.is_set():
    self.stream.stop_stream()
    if self.halting:
      break
    self.go.wait()
    self.stream.start_stream()

# Finished playing! Destructor-like step: let's close the thread
with self.lock:
  if self in self.device_manager._threads: # If not already closed
    self.stream.close()
    self.device_manager.thread_finished(self)
def run(self)
Plays the audio. This method shouldn't be called explicitly; let the constructor do so.
8.32362
8.131708
1.0236
with self.lock:
  self.halting = True
  self.go.clear()
def stop(self)
Stops the playing thread and closes it.
13.309846
15.586417
0.853939
if note_string == "?":
  return nan
data = note_string.strip().lower()
name2delta = {"c": -9, "d": -7, "e": -5, "f": -4, "g": -2, "a": 0, "b": 2}
accident2delta = {"b": -1, "#": 1, "x": 2}
accidents = list(it.takewhile(lambda el: el in accident2delta, data[1:]))
octave_delta = int(data[len(accidents) + 1:]) - 4
return (MIDI_A4 +
        name2delta[data[0]] + # Name
        sum(accident2delta[ac] for ac in accidents) + # Accident
        12 * octave_delta) # Octave
def str2midi(note_string)
Given a note string name (e.g. "Bb4"), returns its MIDI pitch number.
3.833745
3.81751
1.004253
result = 12 * (log2(freq) - log2(FREQ_A4)) + MIDI_A4
return nan if isinstance(result, complex) else result
def freq2midi(freq)
Given a frequency in Hz, returns its MIDI pitch number.
7.251963
6.510132
1.11395
if isinf(midi_number) or isnan(midi_number):
  return "?"
num = midi_number - (MIDI_A4 - 4 * 12 - 9)
note = (num + .5) % 12 - .5
rnote = int(round(note))
error = note - rnote
octave = str(int(round((num - note) / 12.)))
if sharp:
  names = ["C", "C#", "D", "D#", "E", "F",
           "F#", "G", "G#", "A", "A#", "B"]
else:
  names = ["C", "Db", "D", "Eb", "E", "F",
           "Gb", "G", "Ab", "A", "Bb", "B"]
names = names[rnote] + octave
if abs(error) < 1e-4:
  return names
else:
  err_sig = "+" if error > 0 else "-"
  err_str = err_sig + str(round(100 * abs(error), 2)) + "%"
  return names + err_str
def midi2str(midi_number, sharp=True)
Given a MIDI pitch number, returns its note string name (e.g. "C3").
2.57766
2.576739
1.000357
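The MIDI pitch mapping in these records is the usual 12-tone equal temperament around A4; a standalone round-trip sketch (assuming the reference values MIDI_A4 = 69 and FREQ_A4 = 440):

from math import log2

MIDI_A4, FREQ_A4 = 69, 440. # Assumed reference values

def freq2midi(freq):
  return 12 * (log2(freq) - log2(FREQ_A4)) + MIDI_A4

def midi2freq(midi_number): # Inverse mapping, for round-trip checking
  return FREQ_A4 * 2 ** ((midi_number - MIDI_A4) / 12)

print(round(freq2midi(440.)))   # 69 (A4)
print(round(midi2freq(60.), 2)) # 261.63 (C4, "middle C")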
# Input validation
if any(f <= 0 for f in (freq, fmin, fmax)):
  raise ValueError("Frequencies have to be positive")

# If freq is out of range, avoid range extension
while freq < fmin:
  freq *= 2
while freq > fmax:
  freq /= 2
if freq < fmin: # Gone back and forth
  return []

# Finds the range for a valid input
return list(it.takewhile(lambda x: x > fmin,
                         (freq * 2 ** harm for harm in it.count(0, -1))
                        ))[::-1] \
     + list(it.takewhile(lambda x: x < fmax,
                         (freq * 2 ** harm for harm in it.count(1))))
def octaves(freq, fmin=20., fmax=2e4)
Given a frequency and a frequency range, returns all frequencies in that range that are an integer number of octaves away from the given frequency.

Parameters
----------
freq :
  Frequency, in any (linear) unit.
fmin, fmax :
  Frequency range, in the same unit of ``freq``. Defaults to 20.0 and 20,000.0, respectively.

Returns
-------
A list of frequencies, in the same unit of ``freq`` and in ascending order.

Examples
--------
>>> from audiolazy import octaves, sHz
>>> octaves(440.)
[27.5, 55.0, 110.0, 220.0, 440.0, 880.0, 1760.0, 3520.0, 7040.0, 14080.0]
>>> octaves(440., fmin=3000)
[3520.0, 7040.0, 14080.0]
>>> Hz = sHz(44100)[1] # Conversion unit from sample rate
>>> freqs = octaves(440 * Hz, fmin=300 * Hz, fmax = 1000 * Hz) # rad/sample
>>> len(freqs) # Number of octaves
2
>>> [round(f, 6) for f in freqs] # Values in rad/sample
[0.062689, 0.125379]
>>> [round(f / Hz, 6) for f in freqs] # Values in Hz
[440.0, 880.0]
4.870095
5.206244
0.935433
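A loop-based sketch of the same octave spreading, avoiding the itertools machinery of the record:

def octaves(freq, fmin=20., fmax=2e4):
  if any(f <= 0 for f in (freq, fmin, fmax)):
    raise ValueError("Frequencies have to be positive")
  while freq < fmin: # Bring freq into range by whole octaves
    freq *= 2
  while freq > fmax:
    freq /= 2
  if freq < fmin: # Gone back and forth: no octave fits the range
    return []
  result = []
  f = freq
  while f > fmin: # Walk downwards, then reverse
    result.append(f)
    f /= 2
  result.reverse()
  f = freq * 2
  while f < fmax: # Walk upwards
    result.append(f)
    f *= 2
  return result

print(octaves(440.)) # [27.5, 55.0, ..., 14080.0], as in the docstring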
is_d_import = lambda n: isinstance(n, ast.Name) and n.id == "__import__"
is_assign = lambda n: isinstance(n, ast.Assign)
is_valid = lambda n: is_assign(n) and not any(map(is_d_import, ast.walk(n)))
with open(fname, "r") as f:
  astree = ast.parse(f.read(), filename=fname)
astree.body = [node for node in astree.body if is_valid(node)]
return locals_from_exec(compile(astree, fname, mode="exec"))
def pseudo_import(fname)
Namespace dict from assignments in the file without ``__import__``
3.01582
2.826221
1.067086
with open(fname, "r") as f:
  data = f.read().splitlines()
first_idx = next(idx for idx, line in enumerate(data) if line.strip())
if data[first_idx].strip() == "..":
  next_idx = first_idx + 1
  first_idx = next(idx for idx, line in enumerate(data[next_idx:], next_idx)
                   if line.strip() and not line.startswith(" "))
return "\n".join(map(line_process, data[first_idx:]))
def read_rst_and_process(fname, line_process=lambda line: line)
The reStructuredText string in file ``fname``, without the starting ``..`` comment and with ``line_process`` function applied to every line.
2.712002
2.621726
1.034434
def processor(line):
  markup = ".. image::"
  if line.startswith(markup):
    fname = line[len(markup):].strip()
    if not (fname.startswith("/") or "://" in fname):
      return "{} {}{}".format(markup, path, fname)
  return line
return processor
def image_path_processor_factory(path)
Processor for concatenating the ``path`` to relative path images
5.250767
5.371452
0.977532
smix = Streamix()
sig = thub(sig, 3) # Auto-copy 3 times (remove this line if using feedback)
smix.add(0, sig)
# To get a feedback delay, use "smix.copy()" below instead of both "sig"
smix.add(280 * ms, .1 * sig) # You can also try other constants
smix.add(220 * ms, .1 * sig)
return smix
def delay(sig)
Simple feedforward delay effect
15.426564
14.70125
1.049337
dur = quarters * quarter_dur
if pitch is None:
  return zeros(dur)
freq = str2freq(pitch) * Hz
return synth(freq, dur)
def note2snd(pitch, quarters)
Creates an audio Stream object for a single note.

Parameters
----------
pitch :
  Pitch note like ``"A4"``, as a string, or ``None`` for a rest.
quarters :
  Duration in quarters (see ``quarter_dur``).
7.91021
8.723602
0.90676
return os.path.join(os.path.split(__file__)[0], os.path.extsep.join([prefix, suffix]))
def find_full_name(prefix, suffix="rst")
Script path to actual path relative file name converter.

Parameters
----------
prefix :
  File name prefix (without extension), relative to the script location.
suffix :
  File name extension (defaults to "rst").

Returns
-------
A file name path relative to the actual location to a file inside the script location.

Warning
-------
Calling this OVERWRITES the RST files in the directory it's in, and doesn't ask for confirmation!
3.911271
4.297016
0.91023
with open(find_full_name(prefix), "w") as rst_file:
  rst_file.write(full_gpl_for_rst)
  rst_file.write(data)
def save_to_rst(prefix, data)
Saves an RST file with the given prefix into the script file location.
6.60069
6.329322
1.042875
ks_mem = (sum(lz.sinusoid(x * freq) for x in [1, 3, 9]) +
          lz.white_noise() + lz.Stream(-1, 1)) / 5
return lz.karplus_strong(freq, memory=ks_mem)
def ks_synth(freq)
Synthesize the given frequency into a Stream by using a model based on Karplus-Strong.
11.50492
11.330761
1.01537
choral_file = corpus.getBachChorales()[random.randint(0, 399)]
choral = corpus.parse(choral_file)
if log:
  print("Chosen choral:", choral.metadata.title)
return choral
def get_random_choral(log=True)
Gets a choral from the J. S. Bach chorals corpus (in Music21).
4.844117
3.1611
1.532415
# Configuration
s, Hz = lz.sHz(rate)
step = 60. / beat * s

# Creates a score from the music21 data
score = reduce(operator.concat,
               [[(pitch.frequency * Hz, # Note
                  note.offset * step, # Starting time
                  note.quarterLength * step, # Duration
                  Fermata in note.expressions) for pitch in note.pitches]
                for note in score.flat.notes])

# Mix all notes into song
song = lz.Streamix()
last_start = 0
for freq, start, dur, has_fermata in score:
  delta = start - last_start
  if has_fermata:
    delta *= 2
  song.add(delta, synth(freq).limit(dur))
  last_start = start

# Zero-padding and finishing
song.add(dur + pad_dur * s, lz.Stream([]))
return song
def m21_to_stream(score, synth=ks_synth, beat=90, fdur=2., pad_dur=.5, rate=lz.DEFAULT_SAMPLE_RATE)
Converts Music21 data to a Stream object.

Parameters
----------
score :
  Music21 data, usually a music21.stream.Score instance.
synth :
  A function that receives a frequency as input and should yield a Stream instance with the note being played.
beat :
  The BPM (beats per minute) value to be used in playing.
fdur :
  Relative duration of a fermata. For example, 1.0 ignores the fermata, and 2.0 (default) doubles its duration.
pad_dur :
  Duration in seconds, but not multiplied by ``s``, to be used as a zero-padding ending event (avoids clicks at the end when playing).
rate :
  The sample rate, given in samples per second.
8.285693
7.880787
1.051379
print("H(z) = " + filt_str) # Avoids printing as "1/z" filt = sympify(filt_str, dict(G=G, R=R, w=w, z=z)) print() # Finds the power magnitude equation for the filter freq_resp = filt.subs(z, exp(I * w)) frr, fri = freq_resp.as_real_imag() power_resp = fcompose(expand_complex, cancel, trigsimp)(frr ** 2 + fri ** 2) pprint(Eq(Symbol("Power"), power_resp)) print() # Finds the G value given the max gain value of 1 at the DC or Nyquist # frequency. As exp(I*pi) is -1 and exp(I*0) is 1, we can use freq_resp # (without "abs") instead of power_resp. Gsolutions = factor(solve(Eq(freq_resp.subs(w, max_gain_freq), 1), G)) assert len(Gsolutions) == 1 pprint(Eq(G, Gsolutions[0])) print() # Finds the unconstrained R values for a given cutoff frequency power_resp_no_G = power_resp.subs(G, Gsolutions[0]) half_power_eq = Eq(power_resp_no_G, S.Half) Rsolutions = solve(half_power_eq, R) # Constraining -1 < R < 1 when w = pi/4 (although the constraint is general) Rsolutions_stable = [el for el in Rsolutions if -1 < el.subs(w, pi/4) < 1] assert len(Rsolutions_stable) == 1 # Constraining w to the [0;pi] range, so |sin(w)| = sin(w) Rsolution = Rsolutions_stable[0].subs(abs(sin(w)), sin(w)) pprint(Eq(R, Rsolution)) # More information about the pole (or -pole) print("\n ** Alternative way to write R **\n") if has_sqrt(Rsolution): x = Symbol("x") # A helper symbol xval = sum(el for el in Rsolution.args if not has_sqrt(el)) pprint(Eq(x, xval)) print() pprint(Eq(R, expand(Rsolution.subs(xval, x)))) else: # That's also what would be found in a bilinear transform with prewarping pprint(Eq(R, Rsolution.rewrite(tan).cancel())) # Not so nice numerically # See whether the R denominator can be zeroed for root in solve(fraction(Rsolution)[1], w): if 0 <= root <= pi: power_resp_r = fcompose(expand, cancel)(power_resp_no_G.subs(w, root)) Rsolutions_r = solve(Eq(power_resp_r, S.Half), R) assert len(Rsolutions_r) == 1 print("\nDenominator is zero for this value of " + pretty(w)) pprint(Eq(w, root)) pprint(Eq(R, Rsolutions_r[0]))
def design_z_filter_single_pole(filt_str, max_gain_freq)
Finds the coefficients for a simple lowpass/highpass filter.

This function just prints the coefficient values, besides the given filter equation and its power gain. There are 3 constraints used to find the coefficients:

1. The G value is defined by the max gain of 1 (0 dB) imposed at a specific frequency
2. The R value is defined by the 50% power cutoff frequency given in rad/sample.
3. Filter should be stable (-1 < R < 1)

Parameters
----------
filt_str :
  Filter equation as a string using the G, R, w and z values.
max_gain_freq :
  A value of zero (DC) or pi (Nyquist) to ensure the max gain as 1 (0 dB).

Note
----
The R value is evaluated only at pi/4 rad/sample to find whether -1 < R < 1, and the max gain is assumed to be either 0 or pi; using other values might fail.
6.239824
5.759974
1.083308
if isinstance(value, float):
  if value.is_integer():
    value = rint(value) # Hides ".0" when possible
  else:
    value = "{:g}".format(value)
if power != 0:
  suffix = "" if power == 1 else "^{p}".format(p=power)
  if value == 1:
    return "{0}{1}".format(symbol, suffix)
  if value == -1:
    return "-{0}{1}".format(symbol, suffix)
  return "{v} * {0}{1}".format(symbol, suffix, v=value)
else:
  return str(value)
def multiplication_formatter(power, value, symbol)
Formats a ``value * symbol ** power`` as a string. Usually ``symbol`` is already a string and both other inputs are numbers; however, this isn't strictly needed. If ``symbol`` is a number, the multiplication won't be done, keeping its default string formatting as is.
3.265769
3.330999
0.980417
if b[:1] == "-":
  return "{0} - {1}".format(a, b[1:])
return "{0} + {1}".format(a, b)
def pair_strings_sum_formatter(a, b)
Formats the sum of a and b.

Note
----
Both inputs are numbers already converted to strings.
3.063402
3.520347
0.870199
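Together, the two formatters above can render a polynomial-like string; a simplified sketch (integer coefficients only):

from functools import reduce

def mul_fmt(power, value, symbol):
  # Simplified multiplication_formatter: hides 1/-1 coefficients
  if power == 0:
    return str(value)
  suffix = "" if power == 1 else "^{}".format(power)
  if value == 1:
    return "{}{}".format(symbol, suffix)
  if value == -1:
    return "-{}{}".format(symbol, suffix)
  return "{} * {}{}".format(value, symbol, suffix)

def pair_sum_fmt(a, b):
  # Same rule as the record above: "a + -b" is printed as "a - b"
  if b.startswith("-"):
    return "{} - {}".format(a, b[1:])
  return "{} + {}".format(a, b)

terms = [mul_fmt(p, v, "z") for p, v in [(2, 3), (1, -1), (0, 5)]]
print(reduce(pair_sum_fmt, terms)) # 3 * z^2 - z + 5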
if len(order) != len(size):
  raise ValueError("Arguments 'order' and 'size' should have the same size")
str_data = {
  "p": float_str.pi(value, after=after, max_denominator=max_denominator),
  "r": float_str.frac(value, max_denominator=max_denominator),
  "f": elementwise("v", 0)(lambda v: "{0:g}".format(v))(value)
}
sizes = {k: len(v) for k, v in iteritems(str_data)}
sizes["p"] = max(1, sizes["p"] - len(float_str.pi_symbol) + 1)
for char, max_size in xzip(order, size):
  if sizes[char] <= max_size:
    return str_data[char]
return str_data["f"]
def float_str(value, order="pprpr", size=[4, 5, 3, 6, 4], after=False, max_denominator=1000000)
Pretty string from int/float.

"Almost" automatic string formatter for integer fractions, fractions of :math:`\pi` and float numbers with small number of digits.

Outputs a representation among the ``float_str.pi`` and ``float_str.frac`` (without a symbol) strategies, as well as the usual float representation. The formatter is chosen by counting the resulting length, trying each one in the given ``order`` until one gets at most the given ``size`` limit parameter as its length.

Parameters
----------
value :
  A float number or an iterable with floats.
order :
  A string that gives the order to try formatting. Each char should be:

  - ``"p"`` for pi formatter (``float_str.pi``);
  - ``"r"`` for ratio without symbol (``float_str.frac``);
  - ``"f"`` for the float usual base 10 decimal representation.

  Defaults to ``"pprpr"``. If no trial has the desired size, returns the float representation.
size :
  The max size allowed for each formatting in the ``order``, respectively. Defaults to ``[4, 5, 3, 6, 4]``.
after :
  Chooses the place where the :math:`\pi` symbol should appear, when such formatter applies. If ``True``, that's the end of the string. If ``False``, that's in between the numerator and the denominator, before the slash. Defaults to ``False``.
max_denominator :
  The data in ``value`` is rounded following the limit given by this parameter when trying to represent it as a fraction/ratio. Defaults to the integer 1,000,000 (one million).

Returns
-------
A string with the number written into it.

Note
----
You probably want to keep ``max_denominator`` high to avoid rounding.
3.937436
3.347254
1.176318
if value == 0:
  return "0"
frac = Fraction(value / symbol_value).limit_denominator(max_denominator)
num, den = frac.numerator, frac.denominator
output_data = []
if num < 0:
  num = -num
  output_data.append("-")
if (num != 1) or (symbol_str == "") or after:
  output_data.append(str(num))
if (value != 0) and not after:
  output_data.append(symbol_str)
if den != 1:
  output_data.extend(["/", str(den)])
if after:
  output_data.append(symbol_str)
return "".join(output_data)
def float_str(value, symbol_str="", symbol_value=1, after=False, max_denominator=1000000)
Pretty rational string from float numbers.

Converts a given numeric value to a string based on rational fractions of the given symbol, useful for labels in plots.

Parameters
----------
value :
  A float number or an iterable with floats.
symbol_str :
  String data that will be in the output representing the data as a numerator multiplier, if needed. Defaults to an empty string.
symbol_value :
  The conversion value for the given symbol (e.g. pi = 3.1415...). Defaults to one (no effect).
after :
  Chooses the place where the ``symbol_str`` should be written. If ``True``, that's the end of the string. If ``False``, that's in between the numerator and the denominator, before the slash. Defaults to ``False``.
max_denominator :
  An int instance, used to round the float following the given limit. Defaults to the integer 1,000,000 (one million).

Returns
-------
A string with the rational number written into as a fraction, with or without a multiplying symbol.

Examples
--------
>>> float_str.frac(12.5)
'25/2'
>>> float_str.frac(0.333333333333333)
'1/3'
>>> float_str.frac(0.333)
'333/1000'
>>> float_str.frac(0.333, max_denominator=100)
'1/3'
>>> float_str.frac(0.125, symbol_str="steps")
'steps/8'
>>> float_str.frac(0.125, symbol_str=" Hz",
...                after=True) # The symbol includes whitespace!
'1/8 Hz'

See Also
--------
float_str.pi :
  This fraction/ratio formatter, but configured with the "pi" symbol.
2.337578
2.969141
0.787291
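The core of this formatter is ``fractions.Fraction.limit_denominator``; a trimmed sketch (no ``after`` placement, symbol always before the slash):

from fractions import Fraction
from math import pi

def frac_str(value, symbol_str="", symbol_value=1, max_denominator=1000000):
  if value == 0:
    return "0"
  frac = Fraction(value / symbol_value).limit_denominator(max_denominator)
  num, den = frac.numerator, frac.denominator
  sign = "-" if num < 0 else ""
  num_str = "" if abs(num) == 1 and symbol_str else str(abs(num))
  den_str = "/{}".format(den) if den != 1 else ""
  return "".join([sign, num_str, symbol_str, den_str])

print(frac_str(12.5))                                     # 25/2
print(frac_str(0.333, max_denominator=100))               # 1/3
print(frac_str(pi / 2, symbol_str="pi", symbol_value=pi)) # pi/2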
return float_str.frac(value, symbol_str=float_str.pi_symbol,
                      symbol_value=float_str.pi_value, after=after,
                      max_denominator=max_denominator)
def float_str(value, after=False, max_denominator=1000000)
String formatter for fractions of :math:`\pi`.

Like the rational formatter ``float_str.frac``, but fixed to the symbol string ``float_str.pi_symbol`` and value ``float_str.pi_value`` (both can be changed, if needed), mainly intended for direct use with MatPlotLib labels.

Examples
--------
>>> float_str.pi_symbol = "pi" # Just for printing sake
>>> float_str.pi(pi / 2)
'pi/2'
>>> float_str.pi(pi * .333333333333333)
'pi/3'
>>> float_str.pi(pi * .222222222222222)
'2pi/9'
>>> float_str.pi_symbol = " PI" # With the space
>>> float_str.pi(pi / 2, after=True)
'1/2 PI'
>>> float_str.pi(pi * .333333333333333, after=True)
'1/3 PI'
>>> float_str.pi(pi * .222222222222222, after=True)
'2/9 PI'

See Also
--------
float_str.frac :
  Float to string conversion, perhaps with a symbol as a multiplier.
6.074407
2.964026
2.049377
# Process multi-rows (replaced by rows with empty columns when needed)
pdata = []
for row in data:
  prow = [el if isinstance(el, list) else [el] for el in row]
  pdata.extend(pr for pr in xzip_longest(*prow, fillvalue=""))

# Find the columns sizes
sizes = [max(len("{0}".format(el)) for el in column)
         for column in xzip(*pdata)]
sizes = [max(size, len(sch)) for size, sch in xzip(sizes, schema)]

# Creates the title and border rows
if schema is None:
  schema = pdata[0]
  pdata = pdata[1:]
border = " ".join("=" * size for size in sizes)
titles = " ".join("{1:^{0}}".format(*pair) for pair in xzip(sizes, schema))

# Creates the full table and returns
rows = [border, titles, border]
rows.extend(" ".join("{1:<{0}}".format(*pair) for pair in xzip(sizes, row))
            for row in pdata)
rows.append(border)
return rows
def rst_table(data, schema=None)
Creates a reStructuredText simple table (list of strings) from a list of lists.
4.260488
4.217591
1.010171
if not getattr(obj, "__doc__", False):
  data = [el.strip() for el in str(obj).splitlines()]
  if len(data) == 1:
    if data[0].startswith("<audiolazy.lazy_"): # Instance
      data = data[0].split("0x", -1)[0] + "0x...>" # Hide its address
    else:
      data = "".join(["``", data[0], "``"])
  else:
    data = " ".join(data)

# No docstring
elif (not obj.__doc__) or (obj.__doc__.strip() == ""):
  data = "\ * * * * ...no docstring... * * * * \ "

# Docstring
else:
  data = (el.strip() for el in obj.__doc__.strip().splitlines())
  data = " ".join(it.takewhile(lambda el: el != "", data))

# Ensure max_width (word wrap)
max_width -= len(indent)
result = []
for word in data.split():
  if len(word) <= max_width:
    if result:
      if len(result[-1]) + len(word) + 1 <= max_width:
        word = " ".join([result.pop(), word])
    result.append(word)
  else: # Splits big words
    result.extend("".join(w) for w in blocks(word, max_width, padval=""))

# Apply indentation and finishes
return [indent + el for el in result]
def small_doc(obj, indent="", max_width=80)
Finds a useful small doc representation of an object.

Parameters
----------
obj :
  Any object, which the documentation representation should be taken from.
indent :
  Result indentation string to be inserted in front of all lines.
max_width :
  Each line of the result may have at most this length.

Returns
-------
For classes, modules, functions, methods, properties and StrategyDict instances, returns the first paragraph in the docstring of the given object, as a list of strings, stripped at right and with indent at left. For other inputs, it will use themselves cast to string as their docstring.
4.41326
4.426016
0.997118
def decorator(func):
  if func.__doc__:
    kwargs["__doc__"] = func.__doc__.format(*args, **kwargs)
  func.__doc__ = template_.format(*args, **kwargs)
  return func
return decorator
def format_docstring(template_="{__doc__}", *args, **kwargs)
r""" Parametrized decorator for adding/changing a function docstring. For changing a already available docstring in the function, the ``"{__doc__}"`` in the template is replaced by the original function docstring. Parameters ---------- template_ : A format-style template. *args, **kwargs : Positional and keyword arguments passed to the formatter. Examples -------- Closure docstring personalization: >>> def add(n): ... @format_docstring(number=n) ... def func(m): ... '''Adds {number} to the given value.''' ... return n + m ... return func >>> add(3).__doc__ 'Adds 3 to the given value.' >>> add("__").__doc__ 'Adds __ to the given value.' Same but using a lambda (you can also try with ``**locals()``): >>> def add_with_lambda(n): ... return format_docstring("Adds {0}.", n)(lambda m: n + m) >>> add_with_lambda(15).__doc__ 'Adds 15.' >>> add_with_lambda("something").__doc__ 'Adds something.' Mixing both template styles with ``{__doc__}``: >>> templ = "{0}, {1} is my {name} docstring:{__doc__}->\nEND!" >>> @format_docstring(templ, "zero", "one", "two", name="testing", k=[1, 2]) ... def test(): ... ''' ... Not empty! ... {2} != {k[0]} but {2} == {k[1]} ... ''' >>> print(test.__doc__) zero, one is my testing docstring: Not empty! two != 1 but two == 2 -> END!
3.073129
3.794829
0.80982
metaclass = kwargs.get("metaclass", type)
if not bases:
  bases = (object,)

class NewMeta(type):
  def __new__(mcls, name, mbases, namespace):
    if name:
      return metaclass.__new__(metaclass, name, bases, namespace)
    return super(NewMeta, mcls).__new__(mcls, "", mbases, {})

return NewMeta("", tuple(), {})
def meta(*bases, **kwargs)
Allows a unique syntax similar to Python 3 for working with metaclasses in both Python 2 and Python 3.

Examples
--------
>>> class BadMeta(type): # A usual metaclass definition
...   def __new__(mcls, name, bases, namespace):
...     if "bad" not in namespace: # A bad constraint
...       raise Exception("Oops, not bad enough")
...     value = len(name) # To ensure this metaclass is called again
...     def really_bad(self):
...       return self.bad() * value
...     namespace["really_bad"] = really_bad
...     return super(BadMeta, mcls).__new__(mcls, name, bases, namespace)
...
>>> class Bady(meta(object, metaclass=BadMeta)):
...   def bad(self):
...     return "HUA "
...
>>> class BadGuy(Bady):
...   def bad(self):
...     return "R"
...
>>> issubclass(BadGuy, Bady)
True
>>> Bady().really_bad() # Here value = 4
'HUA HUA HUA HUA '
>>> BadGuy().really_bad() # Called metaclass ``__new__`` again, so value = 6
'RRRRRR'
3.926585
4.282071
0.916983
if isinstance(start, collections.Iterable):
  lastp = 0.
  c = 0.
  if isinstance(step, collections.Iterable):
    if isinstance(modulo, collections.Iterable):
      for p, m, s in xzip(start, modulo, step):
        c += p - lastp
        c = c % m % m
        yield c
        c += s
        lastp = p
    else:
      for p, s in xzip(start, step):
        c += p - lastp
        c = c % modulo % modulo
        yield c
        c += s
        lastp = p
  else:
    if isinstance(modulo, collections.Iterable):
      for p, m in xzip(start, modulo):
        c += p - lastp
        c = c % m % m
        yield c
        c += step
        lastp = p
    else: # Only start is iterable. This should be optimized!
      if step == 0:
        for p in start:
          yield p % modulo % modulo
      else:
        steps = int(modulo / step)
        if steps > 1:
          n = 0
          for p in start:
            c += p - lastp
            yield (c + n * step) % modulo % modulo
            lastp = p
            n += 1
            if n == steps:
              n = 0
              c = (c + steps * step) % modulo % modulo
        else:
          for p in start:
            c += p - lastp
            c = c % modulo % modulo
            yield c
            c += step
            lastp = p
else:
  c = start
  if isinstance(step, collections.Iterable):
    if isinstance(modulo, collections.Iterable):
      for m, s in xzip(modulo, step):
        c = c % m % m
        yield c
        c += s
    else: # Only step is iterable. This should be optimized!
      for s in step:
        c = c % modulo % modulo
        yield c
        c += s
  else:
    if isinstance(modulo, collections.Iterable):
      for m in modulo:
        c = c % m % m
        yield c
        c += step
    else: # None is iterable
      if step == 0:
        c = start % modulo % modulo
        while True:
          yield c
      else:
        steps = int(modulo / step)
        if steps > 1:
          n = 0
          while True:
            yield (c + n * step) % modulo % modulo
            n += 1
            if n == steps:
              n = 0
              c = (c + steps * step) % modulo % modulo
        else:
          while True:
            c = c % modulo % modulo
            yield c
            c += step
def modulo_counter(start=0., modulo=256., step=1.)
Creates a lazy endless counter stream with the given modulo, i.e., its values range from 0. to the given "modulo", somewhat equivalent to:

  Stream(itertools.count(start, step)) % modulo

Yet the given step can be an iterable, and this doesn't create unneeded big ints. All inputs can be float. Input order remembers slice/range inputs. All inputs can also be iterables. If any of them is an iterable, this counter ends when there's no more data in one of those inputs.
1.846036
1.817289
1.015818
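A scalar-only sketch of the counter, next to its ``itertools.count``-based equivalent from the docstring (which grows unbounded ints):

from itertools import count, islice

def modulo_counter(start=0., modulo=256., step=1.):
  # Minimal version: the record above also accepts iterables for every input
  c = start
  while True:
    c %= modulo
    yield c
    c += step

print(list(islice(modulo_counter(0, 5, 2), 8))) # [0, 2, 4, 1, 3, 0, 2, 4]
print([n % 5 for n in islice(count(0, 2), 8)])  # Same values, growing ints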
m = (end - begin) / (dur - (1. if finish else 0.))
for sample in xrange(int(dur + .5)):
  yield begin + sample * m
def line(dur, begin=0., end=1., finish=False)
Finite Stream with a straight line, could be used as fade in/out effects.

Parameters
----------
dur :
  Duration, given in number of samples. Use the sHz function to help with durations in seconds.
begin, end :
  First and last (or stop) values to be yielded. Defaults to [0., 1.], respectively.
finish :
  Choose whether ``end`` is the last value to be yielded or shouldn't be yielded at all. Defaults to False, which means that ``end`` won't be yielded. The last sample won't have "end" amplitude unless finish is True, i.e., without explicitly saying "finish=True", the "end" input works like a "stop" range parameter, although it can [should] be a float. This is so to help concatenating several lines.

Returns
-------
A finite Stream with the linearly spaced data.

Examples
--------
With ``finish = True``, it works just like NumPy ``np.linspace``, besides argument order and lazyness:

>>> import numpy as np # This test needs Numpy
>>> np.linspace(.2, .7, 6)
array([ 0.2,  0.3,  0.4,  0.5,  0.6,  0.7])
>>> line(6, .1, .7, finish=True)
<audiolazy.lazy_stream.Stream object at 0x...>
>>> list(line(6, .2, .7, finish=True))
[0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
>>> list(line(6, 1, 4)) # With finish = False (default)
[1.0, 1.5, 2.0, 2.5, 3.0, 3.5]

Line also works with Numpy arrays and matrices

>>> a = np.mat([[1, 2], [3, 4]])
>>> b = np.mat([[3, 2], [2, 1]])
>>> for el in line(4, a, b):
...   print(el)
[[ 1.  2.]
 [ 3.  4.]]
[[ 1.5   2.  ]
 [ 2.75  3.25]]
[[ 2.   2. ]
 [ 2.5  2.5]]
[[ 2.5   2.  ]
 [ 2.25  1.75]]

And also with ZFilter instances:

>>> from audiolazy import z
>>> for el in line(4, z ** 2 - 5, z + 2):
...   print(el)
z^2 - 5
0.75 * z^2 + 0.25 * z - 3.25
0.5 * z^2 + 0.5 * z - 1.5
0.25 * z^2 + 0.75 * z + 0.25

Note
----
Amplitudes commonly should be float numbers between -1 and 1. Using line(<inputs>).append([end]) you can finish the line with one extra sample without worrying with the "finish" input.

See Also
--------
sHz :
  Second and hertz constants from samples/second rate.
5.018232
8.449425
0.593914
# Configure sustain possibilities
if isinstance(s, collections.Iterable):
  it_s = iter(s)
  s = next(it_s)
else:
  it_s = None

# Attack and decay lines
m_a = 1. / a
m_d = (s - 1.) / d
len_a = int(a + .5)
len_d = int(d + .5)
for sample in xrange(len_a):
  yield sample * m_a
for sample in xrange(len_d):
  yield 1. + sample * m_d

# Sustain!
if it_s is None:
  while True:
    yield s
else:
  for s in it_s:
    yield s
def attack(a, d, s)
Linear ADS fading attack stream generator, useful to be multiplied with a given stream.

Parameters
----------
a :
  "Attack" time, in number of samples.
d :
  "Decay" time, in number of samples.
s :
  "Sustain" amplitude level (should be based on attack amplitude). The sustain can be a Stream, if desired.

Returns
-------
Stream instance yielding an endless envelope, or a finite envelope if the sustain input is a finite Stream. The attack amplitude is 1.0.
3.672805
3.369452
1.09003
if dur is None or (isinf(dur) and dur > 0):
  while True:
    yield 1.0
for x in xrange(int(.5 + dur)):
  yield 1.0
def ones(dur=None)
Ones stream generator. You may multiply your endless stream by this to enforce an end to it.

Parameters
----------
dur :
  Duration, in number of samples; endless if not given.

Returns
-------
Stream that repeats "1.0" during a given time duration (if any) or endlessly.
5.434938
5.804294
0.936365
if dur is None or (isinf(dur) and dur > 0):
  while True:
    yield 0.0
for x in xrange(int(.5 + dur)):
  yield 0.0
def zeros(dur=None)
Zeros/zeroes stream generator. You may sum your endless stream by this to enforce an end to it.

Parameters
----------
dur :
  Duration, in number of samples; endless if not given.

Returns
-------
Stream that repeats "0.0" during a given time duration (if any) or endlessly.
5.488443
5.842369
0.939421
m_a = 1. / a
m_d = (s - 1.) / d
m_r = - s * 1. / r
len_a = int(a + .5)
len_d = int(d + .5)
len_r = int(r + .5)
len_s = int(dur + .5) - len_a - len_d - len_r
for sample in xrange(len_a):
  yield sample * m_a
for sample in xrange(len_d):
  yield 1. + sample * m_d
for sample in xrange(len_s):
  yield s
for sample in xrange(len_r):
  yield s + sample * m_r
def adsr(dur, a, d, s, r)
Linear ADSR envelope.

Parameters
----------
dur :
  Duration, in number of samples, including the release time.
a :
  "Attack" time, in number of samples.
d :
  "Decay" time, in number of samples.
s :
  "Sustain" amplitude level (should be based on attack amplitude).
r :
  "Release" time, in number of samples.

Returns
-------
Stream instance yielding a finite ADSR envelope, starting and finishing with 0.0, having peak value of 1.0.
2.592959
2.499704
1.037307
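The whole ADSR envelope can be exercised standalone; a sketch with ``range`` in place of the record's Python 2 ``xrange``:

def adsr(dur, a, d, s, r):
  m_a = 1. / a       # Attack slope (0 -> 1)
  m_d = (s - 1.) / d # Decay slope (1 -> s)
  m_r = -s / r       # Release slope (s -> 0)
  len_a, len_d, len_r = int(a + .5), int(d + .5), int(r + .5)
  len_s = int(dur + .5) - len_a - len_d - len_r
  for n in range(len_a):
    yield n * m_a
  for n in range(len_d):
    yield 1. + n * m_d
  for n in range(len_s):
    yield s
  for n in range(len_r):
    yield s + n * m_r

print([round(x, 2) for x in adsr(10, a=2, d=2, s=.5, r=4)])
# [0.0, 0.5, 1.0, 0.75, 0.5, 0.5, 0.5, 0.38, 0.25, 0.12]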
if dur is None or (isinf(dur) and dur > 0):
  while True:
    yield random.uniform(low, high)
for x in xrange(rint(dur)):
  yield random.uniform(low, high)
def white_noise(dur=None, low=-1., high=1.)
White noise stream generator.

Parameters
----------
dur :
  Duration, in number of samples; endless if not given (or None).
low, high :
  Lower and higher limits. Defaults to the [-1; 1] range.

Returns
-------
Stream yielding random numbers between ``low`` and ``high``.
4.504478
4.497841
1.001476
if dur is None or (isinf(dur) and dur > 0):
  while True:
    yield random.gauss(mu, sigma)
for x in xrange(rint(dur)):
  yield random.gauss(mu, sigma)
def gauss_noise(dur=None, mu=0., sigma=1.)
Gaussian (normal) noise stream generator.

Parameters
----------
dur :
  Duration, in number of samples; endless if not given (or None).
mu :
  Distribution mean. Defaults to zero.
sigma :
  Distribution standard deviation. Defaults to one.

Returns
-------
Stream yielding Gaussian-distributed random numbers.

Warning
-------
This function can yield values outside the [-1; 1] range, and you might need to clip its results.

See Also
--------
clip :
  Clips the signal up to both a lower and a higher limit.
4.687854
6.121611
0.765788
# When at 44100 samples / sec, 5 seconds of this leads to an error of 8e-14
# peak to peak. That's fairly enough.
for n in modulo_counter(start=phase, modulo=2 * pi, step=freq):
  yield sin(n)
def sinusoid(freq, phase=0.)
Sinusoid based on the optimized math.sin
20.198652
20.186964
1.000579
if dur is None or (isinf(dur) and dur > 0):
  yield one
  while True:
    yield zero
elif dur >= .5:
  num_samples = int(dur - .5)
  yield one
  for x in xrange(num_samples):
    yield zero
def impulse(dur=None, one=1., zero=0.)
Impulse stream generator.

Parameters
----------
dur :
  Duration, in number of samples; endless if not given.

Returns
-------
Stream that repeats "0.0" during a given time duration (if any) or endlessly, but starts with one (and only one) "1.0".
5.198163
5.111862
1.016882
return comb.tau(2 * pi / freq, tau).linearize()(zeros(), memory=memory)
def karplus_strong(freq, tau=2e4, memory=white_noise)
Karplus-Strong "digitar" synthesis algorithm.

Parameters
----------
freq :
  Frequency, in rad/sample.
tau :
  Time decay (up to ``1/e``, or -8.686 dB), in number of samples. Defaults to 2e4. Be careful: using the default value will make duration different on each sample rate value. Use ``sHz`` if you need that independent from the sample rate and in seconds unit.
memory :
  Memory data for the comb filter (delayed "output" data in memory). Defaults to the ``white_noise`` function.

Returns
-------
Stream instance with the synthesized data.

Note
----
The fractional delays are solved by exponent linearization.

See Also
--------
sHz :
  Second and hertz constants from samples/second rate.
white_noise :
  White noise stream generator.
53.965469
60.494869
0.892067
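A hedged usage sketch, not from the source: the identifiers are the usual AudioLazy ones, and the playback part additionally needs PyAudio installed:

  from audiolazy import sHz, karplus_strong, AudioIO
  s, Hz = sHz(44100)              # Second/hertz constants for a 44100 samples/s rate
  snd = karplus_strong(440 * Hz)  # Plucked-string timbre at 440 Hz
  with AudioIO(True) as player:
      player.play(snd.limit(2 * s), rate=44100)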
data = sum(cycle(self.table[::partial + 1]) * amplitude
           for partial, amplitude in iteritems(harmonics_dict))
return TableLookup(data.take(len(self)), cycles=self.cycles)
def harmonize(self, harmonics_dict)
Returns a "harmonized" table lookup instance by using a "harmonics" dictionary with {partial: amplitude} terms, where all "partial" keys have to be integers.
20.206263
11.363087
1.778237
max_abs = max(self.table, key=abs)
if max_abs == 0:
    raise ValueError("Can't normalize zeros")
return self / max_abs
def normalize(self)
Returns a new table with values ranging from -1 to 1, reaching at least one of these, unless there's no data.
6.500561
5.02519
1.293595
# NOTE: the multiline string literal with the operator table was lost in
# this dump. Each of its lines has the format "symbol name [name ...]",
# matching the "@ matmul rmatmul" entry appended below.
op_symbols = .strip().splitlines()
if HAS_MATMUL:
    op_symbols.append("@ matmul rmatmul")
for op_line in op_symbols:
    symbol, names = op_line.split(None, 1)
    for name in names.split():
        cls._insert(name, symbol)
def _initialize(cls)
Internal method to initialize the class by creating all the operator metadata to be used afterwards.
8.322788
8.150713
1.021112
def decorator(func):
    keep_name = kwargs.pop("keep_name", False)
    if kwargs:
        key = next(iter(kwargs))
        raise TypeError("Unknown keyword argument '{}'".format(key))
    if not keep_name:
        func.__name__ = str(names[0])
    self[names] = func
    return self
return decorator
def strategy(self, *names, **kwargs)
StrategyDict wrapping method for adding a new strategy.

Parameters
----------
*names :
    Positional arguments with all names (strings) that could be used to
    call the strategy to be added, to be used both as key items and as
    attribute names.
keep_name :
    Boolean keyword-only parameter for choosing whether the ``__name__``
    attribute of the decorated/wrapped function should be changed or
    kept. Defaults to False (i.e., changes the name by default).

Returns
-------
A decorator/wrapper function to be used once on the new strategy to be
added.

Example
-------
Let's create a StrategyDict that knows its name:

>>> txt_proc = StrategyDict("txt_proc")

Add a first strategy ``swapcase``, using this method as a decorator
factory:

>>> @txt_proc.strategy("swapcase")
... def txt_proc(txt):
...     return txt.swapcase()

Let's do it again, but wrapping the strategy functions inline. First two
strategies have multiple names, the last keeps the function name, which
would otherwise be replaced by the first given name:

>>> txt_proc.strategy("lower", "low")(lambda txt: txt.lower())
{(...): <function ... at 0x...>, (...): <function ... at 0x...>}
>>> txt_proc.strategy("upper", "up")(lambda txt: txt.upper())
{...}
>>> txt_proc.strategy("keep", keep_name=True)(lambda txt: txt)
{...}

We can now iterate through the strategies to call them or see their
function names:

>>> sorted(st("Just a Test") for st in txt_proc)
['JUST A TEST', 'Just a Test', 'jUST A tEST', 'just a test']
>>> sorted(st.__name__ for st in txt_proc) # Just the first name
['<lambda>', 'lower', 'swapcase', 'upper']

Calling a single strategy:

>>> txt_proc.low("TeStInG")
'testing'
>>> txt_proc["upper"]("TeStInG")
'TESTING'
>>> txt_proc("TeStInG") # Default is the first: swapcase
'tEsTiNg'
>>> txt_proc.default("TeStInG")
'tEsTiNg'
>>> txt_proc.default = txt_proc.up # Manually changing the default
>>> txt_proc("TeStInG")
'TESTING'

Hint
----
Default strategy is the one stored as the ``default`` attribute, you can
change or remove it at any time. When removing all keys that are
assigned to the default strategy, the default attribute will be removed
from the StrategyDict instance as well. The first strategy added
afterwards is the one that will become the new default, unless the
attribute is created or changed manually.
3.937168
4.149203
0.948897
sphinx_string = sphinx_template.format(build_dir=build_dir,
                                       out_type=out_type)
if sphinx.main(shlex.split(sphinx_string)) != 0:
    raise RuntimeError("Something went wrong while building '{0}'"
                       .format(out_type))
if out_type in make_target:
    make_string = make_template.format(build_dir=build_dir,
                                       out_type=out_type,
                                       make_param=make_target[out_type])
    call(shlex.split(make_string))
def call_sphinx(out_type, build_dir = "build")
Call the ``sphinx-build`` for the given output type and the ``make``
when the target has this possibility.

Parameters
----------
out_type :
    A builder name for ``sphinx-build``. See the full list at
    `<http://sphinx-doc.org/invocation.html>`_.
build_dir :
    Directory for storing the output. Defaults to "build".
2.983524
3.385524
0.881259
return sum(Stream(f.series(n=None, **kwargs)).limit(n))
def taylor(f, n=2, **kwargs)
Taylor/Maclaurin polynomial approximation for the given function. The ``n`` (default 2) is the number of approximation terms for ``f``. Other arguments are keyword-only and will be passed to the ``f.series`` method.
25.70389
16.075415
1.598957
return [[vect[abs(i-j)] for i in xrange(len(vect))] for j in xrange(len(vect))]
def toeplitz(vect)
Find the Toeplitz matrix as a list of lists given its first line/column.
5.706531
4.69613
1.215156
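As a quick check of the indexing logic above (``vect[abs(i - j)]``), a doctest sketch:

>>> toeplitz([1, 2, 3])
[[1, 2, 3], [2, 1, 2], [3, 2, 1]]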
if order is None:
    order = len(acdata) - 1
elif order >= len(acdata):
    acdata = Stream(acdata).append(0).take(order + 1)

# Inner product for filters based on above statistics
def inner(a, b): # Be careful, this depends on acdata !!!
    return sum(acdata[abs(i-j)] * ai * bj
               for i, ai in enumerate(a.numlist)
               for j, bj in enumerate(b.numlist))

try:
    A = ZFilter(1)
    for m in xrange(1, order + 1):
        B = A(1 / z) * z ** -m
        A -= inner(A, z ** -m) / inner(B, B) * B
except ZeroDivisionError:
    raise ParCorError("Can't find next PARCOR coefficient")

A.error = inner(A, A)
return A
def levinson_durbin(acdata, order=None)
Solve the Yule-Walker linear system of equations. They're given by:

.. math::

    R . a = r

where :math:`R` is a symmetric Toeplitz matrix whose elements are lags
from the given autocorrelation list. :math:`R` and :math:`r` are defined
(Python indexing starts with zero and slices don't include the last
element):

.. math::

    R[i][j] = acdata[abs(j - i)]

    r = acdata[1 : order + 1]

Parameters
----------
acdata :
    Autocorrelation lag list, commonly the ``acorr`` function output.
order :
    The order of the resulting ZFilter object. Defaults to
    ``len(acdata) - 1``.

Returns
-------
A FIR filter, as a ZFilter object. The mean squared error over the given
data (variance of the white noise) is in its "error" attribute.

See Also
--------
acorr :
    Calculate the autocorrelation of a given block.
lpc :
    Calculate the Linear Predictive Coding (LPC) coefficients.
parcor :
    Partial correlation coefficients (PARCOR), or reflection
    coefficients, relative to the lattice implementation of a filter,
    obtained by reversing the Levinson-Durbin algorithm.

Examples
--------
>>> data = [2, 2, 0, 0, -1, -1, 0, 0, 1, 1]
>>> acdata = acorr(data)
>>> acdata
[12, 6, 0, -3, -6, -3, 0, 2, 4, 2]
>>> ldfilt = levinson_durbin(acorr(data), 3)
>>> ldfilt
1 - 0.625 * z^-1 + 0.25 * z^-2 + 0.125 * z^-3
>>> ldfilt.error # Squared! See lpc for more information about this
7.875

Notes
-----
The Levinson-Durbin algorithm used to solve the equations needs
:math:`O(order^2)` floating point operations.
7.318279
6.101285
1.199465
if order < 100:
    return lpc.nautocor(blk, order)
try:
    return lpc.kautocor(blk, order)
except ParCorError:
    return lpc.nautocor(blk, order)
def lpc(blk, order=None)
Find the Linear Predictive Coding (LPC) coefficients as a ZFilter
object, the analysis whitening filter. This implementation uses the
autocorrelation method, using the Levinson-Durbin algorithm or Numpy
pseudo-inverse for linear system solving, when needed.

Parameters
----------
blk :
    An iterable with well-defined length. Don't use this function with
    Stream objects!
order :
    The order of the resulting ZFilter object. Defaults to
    ``len(blk) - 1``.

Returns
-------
A FIR filter, as a ZFilter object. The mean squared error over the given
block is in its "error" attribute.

Hint
----
See ``lpc.kautocor`` example, which should apply equally for this
strategy.

See Also
--------
levinson_durbin :
    Levinson-Durbin algorithm for solving Yule-Walker equations (Toeplitz
    matrix linear system).
lpc.nautocor :
    LPC coefficients from linear system solved with Numpy pseudo-inverse.
lpc.kautocor :
    LPC coefficients obtained with Levinson-Durbin algorithm.
6.849651
3.591964
1.906938
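A hypothetical usage sketch (identifiers from AudioLazy; the resulting coefficients depend on the block contents, so no output is asserted):

  from audiolazy import lpc
  blk = [1., 5., 3., 2., -2., -1., 3., 1.]
  analysis = lpc(blk, order=2)     # order < 100, so lpc.nautocor is used
  residual = list(analysis(blk))   # Whitened signal (prediction error)
  mse = analysis.error             # Mean squared prediction error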
from numpy import matrix
from numpy.linalg import pinv
acdata = acorr(blk, order)
coeffs = pinv(toeplitz(acdata[:-1])) * -matrix(acdata[1:]).T
coeffs = coeffs.T.tolist()[0]
filt = 1 + sum(ai * z ** -i for i, ai in enumerate(coeffs, 1))
filt.error = acdata[0] + sum(a * c for a, c in xzip(acdata[1:], coeffs))
return filt
def lpc(blk, order=None)
Find the Linear Predictive Coding (LPC) coefficients as a ZFilter
object, the analysis whitening filter. This implementation uses the
autocorrelation method, using numpy.linalg.pinv as a linear system
solver.

Parameters
----------
blk :
    An iterable with well-defined length. Don't use this function with
    Stream objects!
order :
    The order of the resulting ZFilter object. Defaults to
    ``len(blk) - 1``.

Returns
-------
A FIR filter, as a ZFilter object. The mean squared error over the given
block is in its "error" attribute.

Hint
----
See ``lpc.kautocor`` example, which should apply equally for this
strategy.

See Also
--------
lpc.autocor :
    LPC coefficients by using one of the autocorrelation method
    strategies.
lpc.kautocor :
    LPC coefficients obtained with Levinson-Durbin algorithm.
6.667762
6.133934
1.087029
from numpy import matrix
from numpy.linalg import pinv
lagm = lag_matrix(blk, order)
phi = matrix(lagm)
psi = phi[1:, 0]
coeffs = pinv(phi[1:, 1:]) * -psi
coeffs = coeffs.T.tolist()[0]
filt = 1 + sum(ai * z ** -i for i, ai in enumerate(coeffs, 1))
filt.error = phi[0, 0] + sum(a * c for a, c in xzip(lagm[0][1:], coeffs))
return filt
def lpc(blk, order=None)
Find the Linear Predictive Coding (LPC) coefficients as a ZFilter object, the analysis whitening filter. This implementation uses the covariance method, assuming a zero-mean stochastic process, using numpy.linalg.pinv as a linear system solver.
6.869289
6.518485
1.053817
# Calculate the covariance for each lag pair
phi = lag_matrix(blk, order)
order = len(phi) - 1

# Inner product for filters based on above statistics
def inner(a, b):
    return sum(phi[i][j] * ai * bj
               for i, ai in enumerate(a.numlist)
               for j, bj in enumerate(b.numlist))

A = ZFilter(1)
B = [z ** -1]
beta = [inner(B[0], B[0])]

m = 1
while True:
    try:
        k = -inner(A, z ** -m) / beta[m - 1] # Last one is really a PARCOR coeff
    except ZeroDivisionError:
        raise ZeroDivisionError("Can't find next coefficient")
    if k >= 1 or k <= -1:
        raise ValueError("Unstable filter")
    A += k * B[m - 1]

    if m >= order:
        A.error = inner(A, A)
        return A

    gamma = [inner(z ** -(m + 1), B[q]) / beta[q] for q in xrange(m)]
    B.append(z ** -(m + 1) - sum(gamma[q] * B[q] for q in xrange(m)))
    beta.append(inner(B[m], B[m]))
    m += 1
def lpc(blk, order=None)
Find the Linear Predictive Coding (LPC) coefficients as a ZFilter object, the analysis whitening filter. This implementation is based on the covariance method, assuming a zero-mean stochastic process, finding the coefficients iteratively and greedily like the lattice implementation in the Levinson-Durbin algorithm, although the lag matrix found from the given block doesn't have to be Toeplitz. Slow, but this strategy doesn't need NumPy.
5.420639
5.133679
1.055897
try:
    return all(abs(k) < 1 for k in parcor(ZFilter(filt.denpoly)))
except ParCorError:
    return False
def parcor_stable(filt)
Tests whether the given filter is stable or not by using the partial
correlation coefficients (reflection coefficients) of the given filter.

Parameters
----------
filt :
    A LTI filter as a LinearFilter object.

Returns
-------
A boolean that is true only when all correlation coefficients are inside
the unit circle. Critical stability (i.e., when the outer coefficient
has magnitude equal to one) is seen as an instability, and returns
False.

See Also
--------
parcor :
    Partial correlation coefficients generator.
lsf_stable :
    Tests filter stability with Line Spectral Frequencies (LSF) values.
14.593933
13.838123
1.054618
den = fir_filt.denominator
if len(den) != 1:
    raise ValueError("Filter has feedback")
elif den[0] != 1: # So we don't have to worry with the denominator anymore
    fir_filt /= den[0]

from numpy import roots
rev_filt = ZFilter(fir_filt.numerator[::-1]) * z ** -1
P = fir_filt + rev_filt
Q = fir_filt - rev_filt
roots_p = roots(P.numerator[::-1])
roots_q = roots(Q.numerator[::-1])
lsf_p = sorted(phase(roots_p))
lsf_q = sorted(phase(roots_q))
return reduce(operator.concat, xzip(*sorted([lsf_p, lsf_q])), tuple())
def lsf(fir_filt)
Find the Line Spectral Frequencies (LSF) from a given FIR filter.

Parameters
----------
fir_filt :
    A LTI FIR filter as a LinearFilter object.

Returns
-------
A tuple with all LSFs in rad/sample, alternating from the forward
prediction and backward prediction filters, starting with the lowest
LSF value.
5.028763
5.193116
0.968352
lsf_data = lsf(ZFilter(filt.denpoly))
return all(a < b for a, b in blocks(lsf_data, size=2, hop=1))
def lsf_stable(filt)
Tests whether the given filter is stable or not by using the Line
Spectral Frequencies (LSF) of the given filter. Needs NumPy.

Parameters
----------
filt :
    A LTI filter as a LinearFilter object.

Returns
-------
A boolean that is true only when the LSF values from forward and
backward prediction filters alternate. Critical stability (both forward
and backward filters have the same LSF value) is seen as an instability,
and returns False.

See Also
--------
lsf :
    Gets the Line Spectral Frequencies from a filter. Needs NumPy.
parcor_stable :
    Tests filter stability with partial correlation coefficients
    (reflection coefficients).
18.521385
20.288372
0.912906
separators = audiolazy.Stream(
    idx - 1 for idx, el in enumerate(lines)
    if all(char in sep for char in el) and len(el) > 0
).append([len(lines)])

first_idx = separators.copy().take()
blk_data = OrderedDict()
empty_count = iter(audiolazy.count(1))
next_empty = lambda: "--Empty--{0}--".format(next(empty_count))
if first_idx != 0:
    blk_data[next_empty()] = lines[:first_idx]

for idx1, idx2 in separators.blocks(size=2, hop=1):
    name = lines[idx1].strip() if lines[idx1].strip() != "" else next_empty()
    blk_data[name] = lines[idx1+2 : idx2]

# Strips the empty lines
for name in blk_data:
    while blk_data[name][-1].strip() == "":
        blk_data[name].pop()
    while blk_data[name][0].strip() == "":
        blk_data[name] = blk_data[name][1:]

return blk_data
def splitter(lines, sep="-=", keep_idx=False)
Splits underlined blocks without indentation (reStructuredText pattern).

Parameters
----------
lines :
    A list of strings.
sep :
    Underline symbols. A line with only such symbols will be seen as an
    underlined one.
keep_idx :
    If False (default), the function returns a collections.OrderedDict.
    Else, returns a list of index pairs.

Returns
-------
A collections.OrderedDict instance where a block with an underlined key
like ``"Key\\n==="`` and a list of lines following will have the item
(key, list of lines), in the order in which they appeared in the
``lines`` input. Empty keys get an order numbering, and might happen for
example after a ``"----"`` separator. The values (lists of lines) don't
include the key nor its underline, and are also stripped/trimmed as
lines (i.e., there's no empty line as the first and last list items, but
the first and last lines may start/end with whitespace).
4.618608
4.731905
0.976057
sp_name = name.split(".")
try:

    # Find the audiolazy module name
    data = getattr(audiolazy, sp_name[0])
    if isinstance(data, audiolazy.StrategyDict):
        module_name = data.default.__module__
    else:
        module_name = data.__module__
    if not module_name.startswith("audiolazy"): # Decorated math, cmath, ...
        del module_name
        for mname in audiolazy.__modules__:
            if sp_name[0] in getattr(audiolazy, mname).__all__:
                module_name = "audiolazy." + mname
                break

    # Now gets the referenced item
    location = ".".join([module_name] + sp_name)
    for sub_name in sp_name[1:]:
        data = getattr(data, sub_name)

    # Finds the role to be used for referencing
    type_dict = OrderedDict([
        (audiolazy.StrategyDict, "obj"),
        (Exception, "exc"),
        (types.MethodType, "meth"),
        (types.FunctionType, "func"),
        (types.ModuleType, "mod"),
        (property, "attr"),
        (type, "class"),
    ])
    role = [v for k, v in iteritems(type_dict) if isinstance(data, k)][0]

# Not found
except AttributeError:
    return ":obj:`{0}`".format(name)

# Found!
else:
    return ":{0}:`{1} <{2}>`".format(role, name, location)
def audiolazy_namer(name)
Process a name to get Sphinx reStructuredText internal references like ``:obj:`name <audiolazy.lazy_something.name>``` for a given name string, specific for AudioLazy.
4.015172
3.799191
1.056849
# Duplication removal
if what == "module": # For some reason, summary appears twice
    idxs = [idx for idx, el in enumerate(lines) if el.startswith("Summary")]
    if len(idxs) >= 2:
        del lines[idxs.pop():] # Remove the last summary
    if len(idxs) >= 1:
        lines.insert(idxs[-1] + 1, "")
        if obj is audiolazy.lazy_math:
            lines.insert(idxs[-1] + 1, ".. tabularcolumns:: cl")
        else:
            lines.insert(idxs[-1] + 1, ".. tabularcolumns:: CJ")
        lines.insert(idxs[-1] + 1, "")

# Real docstring format pre-processing
result = []
for name, blk in iteritems(splitter(lines)):
    nlower = name.lower()

    if nlower == "parameters":
        starters = audiolazy.Stream(idx for idx, el in enumerate(blk)
                                    if len(el) > 0
                                    and not el.startswith(" ")
                                   ).append([len(blk)])
        for idx1, idx2 in starters.blocks(size=2, hop=1):
            param_data = " ".join(b.strip() for b in blk[idx1:idx2])
            param, expl = param_data.split(":", 1)
            if "," in param:
                param = param.strip()
                if not param[0] in ("(", "[", "<", "{"):
                    param = "[{0}]".format(param)
            while "," in param:
                fparam, param = param.split(",", 1)
                result.append(":param {0}: {1}".format(fparam.strip(), "\.\.\."))
            result.append(":param {0}: {1}".format(param.strip(), expl.strip()))

    elif nlower == "returns":
        result.append(":returns: " + " ".join(blk))

    elif nlower in ("note", "warning", "hint"):
        result.append(".. {0}::".format(nlower))
        result.extend(" " + el for el in blk)

    elif nlower == "examples":
        result.append("**Examples**:")
        result.extend(" " + el for el in blk)

    elif nlower == "see also":
        result.append(".. seealso::")
        for el in blk:
            if el.endswith(":"):
                result.append("") # Skip a line
                # Sphinx may need help here to find some object locations
                refs = [namer(f.strip()) for f in el[:-1].split(",")]
                result.append(" " + ", ".join(refs))
            else:
                result.append(" " + el)

    else: # Unknown block name, perhaps the starting one (empty)
        result.extend(blk)

    # Skip a line after each block
    result.append("")

# Replace lines with the processed data while keeping the actual lines id
del lines[:]
lines.extend(result)
def pre_processor(app, what, name, obj, options, lines, namer=lambda name: ":obj:`{0}`".format(name))
Callback preprocessor function for docstrings. Converts data from Spyder pattern to Sphinx, using a ``namer`` function that defaults to ``lambda name: ":obj:`{0}`".format(name)`` (specific for ``.. seealso::``).
4.584726
4.689521
0.977653
if name in ["__doc__", "__module__", "__dict__", "__weakref__",
            "__abstractmethods__"] or name.startswith("_abc_"):
    return True
return False
def should_skip(app, what, name, obj, skip, options)
Callback object chooser function for docstring documentation.
6.491025
5.836926
1.112062
app.connect('autodoc-process-docstring',
            lambda *args: pre_processor(*args, namer=audiolazy_namer))
app.connect('autodoc-skip-member', should_skip)
def setup(app)
Just connects the docstring pre_processor and should_skip functions to be applied on all docstrings.
6.412345
4.390727
1.460429
for name in os.listdir(path):
    full_name = os.path.join(path, name)
    if os.path.isdir(full_name):
        for new_name in file_name_generator_recursive(full_name):
            yield new_name
    else:
        yield full_name
def file_name_generator_recursive(path)
Generator function for filenames given a directory path name. The resulting generator doesn't yield any [sub]directory name.
1.771668
1.711102
1.035396
return max(file_iterable, key=lambda fname: os.path.getmtime(fname))
def newest_file(file_iterable)
Returns the name of the newest file given an iterable of file names.
3.915858
3.161762
1.238505
return sum(el ** 2 for el in wnd) / sum(wnd) ** 2 * len(wnd)
def enbw(wnd)
Equivalent Noise Bandwidth in bins (Processing Gain reciprocal).
6.935723
7.211945
0.961699
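A sanity-check sketch for the one-liner above: for a rectangular window the equivalent noise bandwidth should be exactly one bin:

>>> enbw([1.] * 8)
1.0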
return sum(wnd * Stream(wnd).skip(hop)) / sum(el ** 2 for el in wnd)
def overlap_correlation(wnd, hop)
Overlap correlation percent for the given overlap hop in samples.
16.062006
16.177233
0.992877
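Another sanity check, assuming AudioLazy's elementwise Stream product zips until the shortest operand ends: a rectangular window with 50% overlap correlates by exactly one half:

>>> overlap_correlation([1.] * 4, hop=2)
0.5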
return -dB20(abs(sum(wnd * cexp(line(len(wnd), 0, -1j * pi)))) / sum(wnd))
def scalloping_loss(wnd)
Positive number with the scalloping loss in dB.
25.090706
21.310726
1.177375
spectrum = dB20(rfft(wnd, res * len(wnd)))
root_at_xdb = spectrum - spectrum[0] - dB10(power)
return next(i for i, el in enumerate(zcross(root_at_xdb)) if el) / res
def find_xdb_bin(wnd, power=.5, res=1500)
A not so fast way to find the x-dB cutoff frequency "bin" index.

Parameters
----------
wnd :
    The window itself as an iterable.
power :
    The power value (squared amplitude) where the x-dB value should lie,
    using ``x = dB10(power)``.
res :
    Zero-padding factor. 1 for no zero-padding, 2 for twice the length,
    etc..
17.526876
14.322273
1.22375
size = 1 + 2 * neighbors
pairs = enumerate(Stream(blk).blocks(size=size, hop=1).map(list),
                  neighbors)
for idx, nbhood in pairs:
    center = nbhood.pop(neighbors)
    if all(center >= el for el in nbhood):
        yield idx
        next(pairs) # Skip ones we already know can't be peaks
        next(pairs)
def get_peaks(blk, neighbors=2)
Get all peak indices in blk (sorted by index value) except the ones at the vector limits (first and last ``neighbors - 1`` values). A peak is the max value in a neighborhood of ``neighbors`` values for each side.
10.870748
10.38226
1.04705
spectrum = dB20(rfft(wnd, res * len(wnd)))
first_peak = next(get_peaks(spectrum, neighbors=neighbors))
return max(spectrum[first_peak:]) - spectrum[0]
def hsll(wnd, res=20, neighbors=2)
Highest Side Lobe Level (dB).

Parameters
----------
res :
    Zero-padding factor. 1 for no zero-padding, 2 for twice the length,
    etc..
neighbors :
    Number of neighbors needed by ``get_peaks`` to define a peak.
10.680139
11.44479
0.933188
# Finds all side lobe peaks, to find the "best" line for it afterwards
spectrum = dB20(rfft(wnd, res * len(wnd)))
peak_indices = list(get_peaks(spectrum, neighbors=neighbors))
log2_peak_indices = np.log2(peak_indices) # Base 2 ensures result in dB/oct
peaks = spectrum[peak_indices]
npeaks = len(peak_indices)

# This length (actually, twice the length) is the "weight" of each peak
lengths = np.array([0] + (1 - z ** -2)(log2_peak_indices).skip(2).take(inf)
                       + [0]) # Extreme values weights to zero
max_length = sum(lengths)

# First guess for the polynomial "a*x + b" is at the center
idx = np.searchsorted(log2_peak_indices,
                      .5 * (log2_peak_indices[-1] + log2_peak_indices[0]))
a = ((peaks[idx+1] - peaks[idx]) /
     (log2_peak_indices[idx+1] - log2_peak_indices[idx]))
b = peaks[idx] - a * log2_peak_indices[idx]

# Scoring for the optimization function
def score(vect, show=False):
    a, b = vect
    h = start_delta * (1 + a ** 2) ** .5 # Vertical deviation
    while True:
        pdelta = peaks - (a * log2_peak_indices + b)
        peaks_idx_included = np.nonzero((pdelta < h) & (pdelta > -h))
        missing = npeaks - len(peaks_idx_included[0])
        if missing < npeaks * max_miss:
            break
        h *= 2
    pdelta_included = pdelta[peaks_idx_included]
    real_delta = max(pdelta_included) - min(pdelta_included)
    total_length = sum(lengths[peaks_idx_included])
    if show: # For debug
        print(real_delta, len(peaks_idx_included[0]))
    return -total_length / max_length + 4 * real_delta ** .5

a, b = so.fmin(score, [a, b], xtol=1e-12, ftol=1e-12, disp=False)

# # For Debug only
# score([a, b], show=True)
# plt.figure()
# plt.plot(log2_peak_indices, peaks, "x-")
# plt.plot(log2_peak_indices, a * log2_peak_indices + b)
# plt.show()

return a
def slfo(wnd, res=50, neighbors=2, max_miss=.7, start_delta=1e-4)
Side Lobe Fall Off (dB/oct).

Finds the side lobe peak fall off numerically in dB/octave by using the
``scipy.optimize.fmin`` function.

Hint
----
Originally, Harris rounded the results he found to a multiple of -6, you
can use the AudioLazy ``rint`` function for that: ``rint(falloff, 6)``.

Parameters
----------
res :
    Zero-padding factor. 1 for no zero-padding, 2 for twice the length,
    etc..
neighbors :
    Number of neighbors needed by ``get_peaks`` to define a peak.
max_miss :
    Maximum percent of peaks that might be missed when approximating them
    by a line.
start_delta :
    Minimum acceptable value for an orthogonal deviation from the
    approximation line to include a peak.
4.508758
4.363806
1.033217
div, mod = divmod(x, step)
err = min(step / 10., .1)
result = div * step
if x > 0:
    result += err
elif x < 0:
    result -= err
if (operator.ge if x >= 0 else operator.gt)(2 * mod, step):
    result += step
return int(result)
def rint(x, step=1)
Round to integer.

Parameters
----------
x :
    Input number (integer or float) to be rounded.
step :
    Quantization level (defaults to 1). If set to 2, the output will be
    the "best" even number.

Result
------
The step multiple nearest to x. When x is exactly halfway between two
possible outputs, it'll result in the one farthest from zero.
4.982227
5.537583
0.899712
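Doctest-style examples of the halfway tie-breaking described above (hypothetical, derived from the code, not from the source):

>>> rint(2.5), rint(-2.5) # Ties go away from zero
(3, -3)
>>> rint(3, step=2) # Nearest even number; 3 is halfway, so it goes up
4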
# Initialization
res = deque(maxlen=size) # Circular queue
idx = 0
last_idx = size - 1
if hop is None:
    hop = size
reinit_idx = size - hop

# Yields each block, keeping last values when needed
if hop <= size:
    for el in seq:
        res.append(el)
        if idx == last_idx:
            yield res
            idx = reinit_idx
        else:
            idx += 1

# Yields each block and skips (loses) data due to hop > size
else:
    for el in seq:
        if idx < 0: # Skips data
            idx += 1
        else:
            res.append(el)
            if idx == last_idx:
                yield res
                #res = dtype()
                idx = size - hop
            else:
                idx += 1

# Padding to finish
if idx > max(size - hop, 0):
    for _ in xrange(idx, size):
        res.append(padval)
    yield res
def blocks(seq, size=None, hop=None, padval=0.)
General iterable blockenizer.

Generator that gets ``size`` elements from ``seq``, and outputs them in
a collections.deque (mutable circular queue) sequence container. Next
output starts ``hop`` elements after the first element in the last
output block. The last block may be appended with ``padval``, if needed
to get the desired size.

The ``seq`` can have hybrid / heterogeneous data, it just needs to be an
iterable. You can use other type content as padval (e.g. None) to help
segregate the padding at the end, if desired.

Note
----
When hop is less than size, changing the returned contents will keep the
new changed value in the next yielded container.
4.901769
4.626633
1.059468
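A short sketch of the ``hop < size`` overlapping case (each yielded deque is reused, hence the ``list`` copies):

>>> [list(blk) for blk in blocks(range(7), size=3, hop=2)]
[[0, 1, 2], [2, 3, 4], [4, 5, 6]]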
for unused in xrange(left):
    yield zero
for item in seq:
    yield item
for unused in xrange(right):
    yield zero
def zero_pad(seq, left=0, right=0, zero=0.)
Zero padding sample generator (not a Stream!).

Parameters
----------
seq :
    Sequence to be padded.
left :
    Integer with the number of elements to be padded at left (before).
    Defaults to zero.
right :
    Integer with the number of elements to be padded at right (after).
    Defaults to zero.
zero :
    Element to be padded. Defaults to a float zero (0.0).

Returns
-------
A generator that pads the given ``seq`` with samples equal to ``zero``,
``left`` times before and ``right`` times after it.
4.690374
4.571184
1.026074
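A doctest sketch; note the padding uses the float ``zero`` default while the original items keep their types:

>>> list(zero_pad([1, 2, 3], left=2, right=1))
[0.0, 0.0, 1, 2, 3, 0.0]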
if (name == "") and (pos is None):
    pos = 0

def elementwise_decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # Find the possibly Iterable argument
        positional = (pos is not None) and (pos < len(args))
        arg = args[pos] if positional else kwargs[name]

        if isinstance(arg, Iterable) and not isinstance(arg, STR_TYPES):
            if positional:
                data = (func(*(args[:pos] + (x,) + args[pos+1:]), **kwargs)
                        for x in arg)
            else:
                data = (func(*args,
                             **dict(it.chain(iteritems(kwargs),
                                             [(name, x)])))
                        for x in arg)

            # Generators should still return generators
            if isinstance(arg, SOME_GEN_TYPES):
                return data

            # Cast to numpy array or matrix, if needed, without actually
            # importing its package
            type_arg = type(arg)
            try:
                is_numpy = type_arg.__module__ == "numpy"
            except AttributeError:
                is_numpy = False
            if is_numpy:
                np_type = {"ndarray": sys.modules["numpy"].array,
                           "matrix": sys.modules["numpy"].mat
                          }[type_arg.__name__]
                return np_type(list(data))

            # If it's a Stream, let's use the Stream constructor
            from .lazy_stream import Stream
            if issubclass(type_arg, Stream):
                return Stream(data)

            # Tuple, list, set, dict, deque, etc.. all falls here
            return type_arg(data)

        return func(*args, **kwargs) # wrapper returned value
    return wrapper # elementwise_decorator returned value
return elementwise_decorator
def elementwise(name="", pos=None)
Function auto-map decorator broadcaster. Creates an "elementwise" decorator for one input parameter. To create such, it should know the name (for use as a keyword argument) and the position "pos" (input as a positional argument). Without a name, only the positional argument will be used. Without both name and position, the first positional argument will be used.
4.468288
4.31229
1.036175
if not (ignore_type or type(a) == type(b)):
    return False
is_it_a = isinstance(a, Iterable)
is_it_b = isinstance(b, Iterable)
if is_it_a != is_it_b:
    return False
if is_it_a:
    return all(almost_eq.bits(ai, bi, bits, tol, ignore_type)
               for ai, bi in xzip_longest(a, b, fillvalue=pad))
significand = {32: 23,
               64: 52,
               80: 63,
               128: 112
              }[bits] # That doesn't include the sign bit
power = tol - significand - 1
return abs(a - b) <= 2 ** power * abs(a + b)
def almost_eq(a, b, bits=32, tol=1, ignore_type=True, pad=0.)
Almost equal, based on the amount of floating point significand bits.

Alternative to "a == b" for float numbers and iterables with float
numbers, and tests for sequence contents (i.e., an elementwise a == b,
that also works with generators, nested lists, nested generators, etc.).
If the type of both the contents and the containers should be tested
too, set the ignore_type keyword arg to False.

Default version is based on 32 bits IEEE 754 format (23 bits
significand). Could use 64 bits (52 bits significand) but needs a native
float type with at least that size in bits.

If a and b sizes differ, at least one will be padded with the pad input
value to keep going with the comparison.

Note
----
Be careful with endless generators!
3.790588
3.678698
1.030416
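Two hypothetical checks of the significand-based comparison (the first would be False with a plain ``==``):

>>> almost_eq(0.1 + 0.2, 0.3)
True
>>> almost_eq([0.1 + 0.2, 1.], [0.3, 1.])
True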
if not (ignore_type or type(a) == type(b)):
    return False
is_it_a = isinstance(a, Iterable)
is_it_b = isinstance(b, Iterable)
if is_it_a != is_it_b:
    return False
if is_it_a:
    return all(almost_eq.diff(ai, bi, max_diff, ignore_type)
               for ai, bi in xzip_longest(a, b, fillvalue=pad))
return abs(a - b) <= max_diff
def almost_eq(a, b, max_diff=1e-7, ignore_type=True, pad=0.)
Almost equal, based on the :math:`|a - b|` value.

Alternative to "a == b" for float numbers and iterables with float
numbers. See almost_eq for more information.

This version is based on the non-normalized absolute diff, similar to
what unittest does with its assertAlmostEquals. If a and b sizes differ,
at least one will be padded with the pad input value to keep going with
the comparison.

Note
----
Be careful with endless generators!
2.461891
2.548227
0.966119
class Cache(dict):
    def __missing__(self, key):
        result = self[key] = func(*key)
        return result

cache = Cache()
f = wraps(func)(lambda *key: cache[key])
f.cache = cache
return f
def cached(func)
Cache decorator for a function without keyword arguments You can access the cache contents using the ``cache`` attribute in the resulting function, which is a dictionary mapping the arguments tuple to the previously returned function result.
3.183227
3.007665
1.058371
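A sketch of the memoization behavior (hypothetical example, positional arguments only, as the docstring notes):

>>> @cached
... def fib(n):
...     return n if n < 2 else fib(n - 1) + fib(n - 2)
>>> fib(30) # Exponential time without the cache, linear with it
832040
>>> (30,) in fib.cache
True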
date_finder = DateFinder(base_date=base_date)
return date_finder.find_dates(text, source=source, index=index,
                              strict=strict)
def find_dates(text, source=False, index=False, strict=False, base_date=None)
Extract datetime strings from text

:param text:
    A string that contains one or more natural language or literal
    datetime strings
:type text: str|unicode
:param source:
    Return the original string segment
:type source: boolean
:param index:
    Return the indices where the datetime string was located in text
:type index: boolean
:param strict:
    Only return datetimes with complete date information. For example:
    `July 2016` or `Monday` will not return datetimes.
    `May 16, 2015` will return datetimes.
:type strict: boolean
:param base_date:
    Set a default base datetime when parsing incomplete dates
:type base_date: datetime
:return: Returns a generator that produces :mod:`datetime.datetime`
    objects, or a tuple with the source text and index, if requested
2.279096
4.399129
0.518079
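A hedged usage sketch against the public datefinder API (the output is commented rather than asserted, since incomplete matches depend on ``base_date``):

  import datefinder
  text = "The event runs from May 16, 2015 until May 19, 2015."
  for dt in datefinder.find_dates(text):
      print(dt)  # 2015-05-16 00:00:00, then 2015-05-19 00:00:00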
# add timezones to replace
cloned_replacements = copy.copy(REPLACEMENTS) # don't mutate
for tz_string in captures.get("timezones", []):
    cloned_replacements.update({tz_string: " "})

date_string = date_string.lower()
for key, replacement in cloned_replacements.items():
    # we really want to match all permutations of the key surrounded by
    # whitespace chars except one
    # for example: consider the key = 'to'
    # 1. match 'to '
    # 2. match ' to'
    # 3. match ' to '
    # but never match r'(\s|)to(\s|)' which would make 'october' > 'ocber'
    date_string = re.sub(
        r"(^|\s)" + key + r"(\s|$)",
        replacement,
        date_string,
        flags=re.IGNORECASE,
    )
return date_string, self._pop_tz_string(sorted(captures.get("timezones", [])))
def _find_and_replace(self, date_string, captures)
:warning: when multiple tz matches exist the last sorted capture will
    trump
:param date_string:
:return: date_string, tz_string
5.230322
4.921121
1.062831