Record schema: code (string), signature (string), docstring (string), loss_without_docstring (float64), loss_with_docstring (float64), factor (float64)
stdev = self.semi_stdev(threshold=threshold, ddof=ddof, freq=freq) return (self.anlzd_ret() - threshold) / stdev
def sortino_ratio(self, threshold=0.0, ddof=0, freq=None)
Return over a threshold per unit of downside deviation. A performance appraisal ratio that replaces standard deviation in the Sharpe ratio with downside deviation. [Source: CFA Institute] Parameters ---------- threshold : {float, TSeries, pd.Series}, default 0. While zero is the default, it is also customary to use a "minimum acceptable return" (MAR) or a risk-free rate. Note: this is assumed to be a *periodic*, not necessarily annualized, return. ddof : int, default 0 Degrees of freedom, passed to pd.Series.std(). freq : str or None, default None A frequency string used to create an annualization factor. If None, `self.freq` will be used. If that is also None, a frequency will be inferred. If none can be inferred, an exception is raised. It may be any frequency string or anchored offset string recognized by Pandas, such as 'D', '5D', 'Q', 'Q-DEC', or 'BQS-APR'. Returns ------- float
7.122885
8.918459
0.798668
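A minimal pandas sketch of the same computation, for readers without pyfinance at hand (the name `sortino_sketch`, the `rets` Series, and the hard-coded 252 periods per year are illustrative assumptions, not part of the library):

import numpy as np

def sortino_sketch(rets, threshold=0.0, ddof=0, periods_per_year=252):
    # Annualized geometric return over the full sample.
    n = rets.count()
    anlzd_ret = (1.0 + rets).prod() ** (periods_per_year / n) - 1.0
    # Downside deviation: only shortfalls below the threshold contribute.
    shortfall = (rets - threshold).clip(upper=0.0)
    semi_stdev = np.sqrt((shortfall ** 2).sum() / (n - ddof)) * np.sqrt(periods_per_year)
    return (anlzd_ret - threshold) / semi_stdev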
er = self.excess_ret(benchmark=benchmark) return er.anlzd_stdev(ddof=ddof)
def tracking_error(self, benchmark, ddof=0)
Standard deviation of excess returns. The standard deviation of the differences between a portfolio's returns and its benchmark's returns. [Source: CFA Institute] Also known as: tracking risk; active risk Parameters ---------- benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. ddof : int, default 0 Degrees of freedom, passed to pd.Series.std(). Returns ------- float
14.375051
18.369011
0.782571
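The same statistic in plain pandas, assuming daily data and hence a fixed annualization factor of 252 (the method above infers the factor from the index instead):

import numpy as np

def tracking_error_sketch(rets, bench, ddof=0, periods_per_year=252):
    # Tracking error: annualized standard deviation of excess returns.
    excess = rets - bench
    return excess.std(ddof=ddof) * np.sqrt(periods_per_year)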
benchmark = _try_to_squeeze(benchmark) if benchmark.ndim > 1: raise ValueError("Treynor ratio requires a single benchmark") rf = self._validate_rf(rf) beta = self.beta(benchmark) return (self.anlzd_ret() - rf) / beta
def treynor_ratio(self, benchmark, rf=0.02)
Return over `rf` per unit of systematic risk. A measure of risk-adjusted performance that relates a portfolio's excess returns to the portfolio's beta. [Source: CFA Institute] Parameters ---------- benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. rf : {float, TSeries, pd.Series}, default 0.02 If float, this represents a *compounded annualized* risk-free rate; 2.0% is the default. If a TSeries or pd.Series, this represents a time series of periodic returns to a risk-free security. To download a risk-free rate return series using 3-month US T-bill yields, see: `pyfinance.datasets.load_rf`. Returns ------- float
6.595362
6.791247
0.971156
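A rough standalone equivalent, with beta estimated as cov/var; the helper name and the fixed annualization factor are assumptions for illustration:

import numpy as np

def treynor_sketch(rets, bench, rf=0.02, periods_per_year=252):
    rets, bench = np.asarray(rets), np.asarray(bench)
    # Beta of the portfolio against the benchmark.
    beta = np.cov(rets, bench, ddof=0)[0, 1] / np.var(bench)
    n = len(rets)
    anlzd_ret = np.prod(1.0 + rets) ** (periods_per_year / n) - 1.0
    return (anlzd_ret - rf) / beta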
slf, bm = self.upmarket_filter( benchmark=benchmark, threshold=threshold, compare_op=compare_op, include_benchmark=True, ) return slf.geomean() / bm.geomean()
def up_capture(self, benchmark, threshold=0.0, compare_op="ge")
Upside capture ratio. Measures the performance of `self` relative to benchmark conditioned on periods where `benchmark` is gt or ge to `threshold`. Upside capture ratios are calculated by taking the fund's monthly return during the periods of positive benchmark performance and dividing it by the benchmark return. [Source: CFA Institute] Parameters ---------- benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. threshold : float, default 0. The threshold at which the comparison should be done. `self` and `benchmark` are "filtered" to periods where `benchmark` is gt/ge `threshold`. compare_op : {'ge', 'gt'} Comparison operator used to compare to `threshold`. 'gt' is greater-than; 'ge' is greater-than-or-equal. Returns ------- float Note ---- This metric uses geometric, not arithmetic, mean return.
7.392924
6.541338
1.130185
return self._mkt_filter( benchmark=benchmark, threshold=threshold, compare_op=compare_op, include_benchmark=include_benchmark, )
def upmarket_filter( self, benchmark, threshold=0.0, compare_op="ge", include_benchmark=False, )
Drop elementwise samples where `benchmark` < `threshold`. Filters `self` (and optionally, `benchmark`) to periods where `benchmark` > `threshold`. (Or >= `threshold`.) Parameters ---------- benchmark : {pd.Series, TSeries, 1d np.ndarray} The benchmark security to which `self` is compared. threshold : float, default 0.0 The threshold at which the comparison should be done. `self` and `benchmark` are "filtered" to periods where `benchmark` is gt/ge `threshold`. compare_op : {'ge', 'gt'} Comparison operator used to compare to `threshold`. 'gt' is greater-than; 'ge' is greater-than-or-equal. include_benchmark : bool, default False If True, return tuple of (`self`, `benchmark`) both filtered. If False, return only `self` filtered. Returns ------- TSeries or tuple of TSeries TSeries if `include_benchmark=False`, otherwise, tuple.
2.429896
3.237752
0.750489
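Composing the two methods above by hand, as a self-contained pandas sketch (`geo_mean` is an assumed stand-in for `TSeries.geomean`):

def up_capture_sketch(rets, bench, threshold=0.0):
    # Keep only periods where the benchmark met or exceeded the threshold.
    mask = bench >= threshold
    slf, bm = rets[mask], bench[mask]
    # Geometric mean periodic return of each filtered series.
    geo_mean = lambda s: (1.0 + s).prod() ** (1.0 / s.count()) - 1.0
    return geo_mean(slf) / geo_mean(bm)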
return ols.OLS( y=self, x=benchmark, has_const=has_const, use_const=use_const )
def CAPM(self, benchmark, has_const=False, use_const=True)
Interface to OLS regression against `benchmark`. `self.alpha()`, `self.beta()` and several other methods stem from here. For the full method set, see `pyfinance.ols.OLS`. Parameters ---------- benchmark : {pd.Series, TSeries, pd.DataFrame, np.ndarray} The benchmark security or securities to which `self` is compared. has_const : bool, default False Specifies whether `benchmark` includes a user-supplied constant (a column vector). If False, it is added at instantiation. use_const : bool, default True Whether to include an intercept term in the model output. Note the difference between `has_const` and `use_const`: the former specifies whether a column vector of 1s is included in the input; the latter specifies whether the model itself should include a constant (intercept) term. Exogenous data that is ~N(0,1) would have a constant equal to zero; specify use_const=False in this situation. Returns ------- pyfinance.ols.OLS
4.437985
4.962368
0.894328
def load_13f(url): resp = requests.get(url).text data = xmltodict.parse(resp)["informationTable"]["infoTable"] df = pd.DataFrame(data).drop( ["shrsOrPrnAmt", "investmentDiscretion"], axis=1 ) df["votingAuthority"] = df["votingAuthority"].apply(lambda d: d["Sole"]) df.loc[:, "value"] = pd.to_numeric(df["value"], errors="coerce") df.loc[:, "votingAuthority"] = pd.to_numeric( df["votingAuthority"], errors="coerce" ) return df
Load and parse an SEC.gov-hosted 13F XML file to Pandas DataFrame. Provide the URL to the raw .xml form13fInfoTable. (See example below.) Parameters ---------- url : str Link to .xml file. Returns ------- df : pd.DataFrame Holdings snapshot. Example ------- # Third Point LLC June 2017 13F >>> from pyfinance import datasets >>> url = 'https://www.sec.gov/Archives/edgar/data/1040273/000108514617001787/form13fInfoTable.xml' # noqa >>> df = datasets.load_13f(url=url) .. _U.S. SEC: Accessing EDGAR Data: https://www.sec.gov/edgar/searchedgar/accessing-edgar-data.htm
null
null
null
def load_industries(): n = [5, 10, 12, 17, 30, 38, 48] port = ("%s_Industry_Portfolios" % i for i in n) rets = [] for p in port: ret = pdr.get_data_famafrench(p, start=DSTART)[0] rets.append(ret.to_timestamp(how="end", copy=False)) industries = dict(zip(n, rets)) return industries
Load industry portfolio returns from Ken French's website. Returns ------- industries : dictionary of Pandas DataFrames Each key is a portfolio group. Example ------- >>> from pyfinance import datasets >>> ind = datasets.load_industries() # Monthly returns to 5 industry portfolios >>> ind[5].head() Cnsmr Manuf HiTec Hlth Other Date 1950-01-31 1.26 1.47 3.21 1.06 3.19 1950-02-28 1.91 1.29 2.06 1.92 1.02 1950-03-31 0.28 1.93 3.46 -2.90 -0.68 1950-04-30 3.22 5.21 3.58 5.52 1.50 1950-05-31 3.81 6.18 1.07 3.96 1.36
null
null
null
def load_rates(freq="D"): months = [1, 3, 6] years = [1, 2, 3, 5, 7, 10, 20, 30] # Nested dictionaries of symbols from fred.stlouisfed.org nom = { "D": ["DGS%sMO" % m for m in months] + ["DGS%s" % y for y in years], "W": ["WGS%sMO" % m for m in months] + ["WGS%sYR" % y for y in years], "M": ["GS%sM" % m for m in months] + ["GS%s" % y for y in years], } tips = { "D": ["DFII%s" % y for y in years[3:7]], "W": ["WFII%s" % y for y in years[3:7]], "M": ["FII%s" % y for y in years[3:7]], } fcp = { "D": ["DCPF1M", "DCPF2M", "DCPF3M"], "W": ["WCPF1M", "WCPF2M", "WCPF3M"], "M": ["CPF1M", "CPF2M", "CPF3M"], } nfcp = { "D": ["DCPN30", "DCPN2M", "DCPN3M"], "W": ["WCPN1M", "WCPN2M", "WCPN3M"], "M": ["CPN1M", "CPN2M", "CPN3M"], } short = { "D": ["DFF", "DPRIME", "DPCREDIT"], "W": ["FF", "WPRIME", "WPCREDIT"], "M": ["FEDFUNDS", "MPRIME", "MPCREDIT"], } rates = list( itertools.chain.from_iterable( [d[freq] for d in [nom, tips, fcp, nfcp, short]] ) ) rates = pdr.DataReader(rates, "fred", start=DSTART) l1 = ( ["Nominal"] * 11 + ["TIPS"] * 4 + ["Fncl CP"] * 3 + ["Non-Fncl CP"] * 3 + ["Short Rates"] * 3 ) l2 = ( ["%sm" % m for m in months] + ["%sy" % y for y in years] + ["%sy" % y for y in years[3:7]] + 2 * ["%sm" % m for m in range(1, 4)] + ["Fed Funds", "Prime Rate", "Primary Credit"] ) rates.columns = pd.MultiIndex.from_arrays([l1, l2]) return rates
Load interest rates from https://fred.stlouisfed.org/. Parameters ---------- freq : str {'D', 'W', 'M'}, default 'D' Frequency of time series; daily, weekly, or monthly Original source --------------- Board of Governors of the Federal Reserve System H.15 Selected Interest Rates https://www.federalreserve.gov/releases/h15/
null
null
null
def load_shiller(): xls = "http://www.econ.yale.edu/~shiller/data/ie_data.xls" cols = [ "date", "sp50p", "sp50d", "sp50e", "cpi", "frac", "real_rate", "real_sp50p", "real_sp50d", "real_sp50e", "cape", ] iedata = pd.read_excel( xls, sheet_name="Data", skiprows=7, skipfooter=1, names=cols ).drop("frac", axis=1) dt = iedata["date"].astype(str).str.replace(".", "", regex=False) + "01" iedata["date"] = pd.to_datetime(dt, format="%Y%m%d") + offsets.MonthEnd() return iedata.set_index("date")
Load market & macroeconomic data from Robert Shiller's website. Returns ------- iedata : pd.DataFrame Time series of S&P 500 and interest rate variables. Example ------- >>> from pyfinance import datasets >>> shiller = datasets.load_shiller() >>> shiller.iloc[:7, :5] sp50p sp50d sp50e cpi real_rate date 1871-01-31 4.44 0.26 0.4 12.4641 5.3200 1871-02-28 4.50 0.26 0.4 12.8446 5.3233 1871-03-31 4.61 0.26 0.4 13.0350 5.3267 1871-04-30 4.74 0.26 0.4 12.5592 5.3300 1871-05-31 4.86 0.26 0.4 12.2738 5.3333 1871-06-30 4.82 0.26 0.4 12.0835 5.3367 1871-07-31 4.73 0.26 0.4 12.0835 5.3400 .. _ONLINE DATA ROBERT SHILLER: http://www.econ.yale.edu/~shiller/data.htm
null
null
null
def appender(defaultdocs, passed_to=None): def _doc(func): params = inspect.signature(func).parameters params = [param.name for param in params.values()] msg = "\n**kwargs : passed to `%s`" params = "".join( [ textwrap.dedent(defaultdocs.get(param, msg % passed_to)) for param in params ] ) func.__doc__ += "\n\nParameters\n" + 10 * "=" + params return func return _doc
Decorator for appending commonly used parameter definitions. Useful in cases where functions repeatedly use the same parameters. (But where a class implementation is not appropriate.) `defaultdocs` -> dict, format as shown in Example below, with keys being parameters and values being descriptions. Example ------- ddocs = { 'a' : ''' a : int, default 0 the first parameter ''', 'b' : ''' b : int, default 1 the second parameter ''' } @appender(ddocs) def f(a, b): '''Title doc.''' # Params here pass
null
null
null
def avail(df): avail = DataFrame( { "start": df.apply(lambda col: col.first_valid_index()), "end": df.apply(lambda col: col.last_valid_index()), } ) return avail[["start", "end"]]
Return start & end availability for each column in a DataFrame.
null
null
null
def constrain(*objs): # TODO: build in the options to first dropna on each index before finding # intersection, AND to use `dropcol` from this module. Note that this # would require filtering out Series to which dropcol isn't applicable. # A little bit of set magic below. # Note that pd.Index.intersection only applies to 2 Index objects common_idx = pd.Index(set.intersection(*[set(o.index) for o in objs])) new_dfs = (o.reindex(common_idx) for o in objs) return tuple(new_dfs)
Constrain group of DataFrames & Series to intersection of indices. Parameters ---------- objs : iterable DataFrames and/or Series to constrain Returns ------- new_dfs : tuple of re-indexed objects, copied rather than modified in place
null
null
null
def constrain_horizon( r, strict=False, cust=None, years=0, quarters=0, months=0, days=0, weeks=0, year=None, month=None, day=None, ): textnum = { "zero": 0, "one": 1, "two": 2, "three": 3, "four": 4, "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10, "eleven": 11, "twelve": 12, "thirteen": 13, "fourteen": 14, "fifteen": 15, "sixteen": 16, "seventeen": 17, "eighteen": 18, "nineteen": 19, "twenty": 20, "twenty four": 24, "thirty six": 36, } relativedeltas = years, quarters, months, days, weeks, year, month, day if cust is not None and any(relativedeltas): raise ValueError( "Cannot specify competing (nonzero) values for both" " `cust` and other parameters." ) if cust is not None: cust = cust.lower() if cust.endswith("y"): years = int(re.search(r"\d+", cust).group(0)) elif cust.endswith("m"): months = int(re.search(r"\d+", cust).group(0)) elif cust.endswith(("years ago", "year ago", "year", "years")): pos = cust.find(" year") years = textnum[cust[:pos].replace("-", " ")] elif cust.endswith(("months ago", "month ago", "month", "months")): pos = cust.find(" month") months = textnum[cust[:pos].replace("-", " ")] else: raise ValueError("`cust` not recognized.") # Convert quarters to months & combine for MonthOffset months += quarters * 3 # Start date will be computed relative to `end` end = r.index[-1] # Establish some funky date conventions assumed in finance. If the end # date is 6/30, the date *3 months prior* is 3/31, not 3/30 as would be # produced by dateutil.relativedelta. if end.is_month_end and days == 0 and weeks == 0: if years != 0: years *= 12 months += years start = end - offsets.MonthBegin(months) else: start = end - offsets.DateOffset( years=years, months=months, days=days - 1, weeks=weeks, year=year, month=month, day=day, ) if strict and start < r.index[0]: raise ValueError( "`start` pre-dates first element of the Index, %s" % r.index[0] ) return r[start:end]
Constrain a Series/DataFrame to a specified lookback period. See the documentation for dateutil.relativedelta: dateutil.readthedocs.io/en/stable/relativedelta.html Parameters ---------- r : DataFrame or Series The target pandas object to constrain strict : bool, default False If True, raise Error if the implied start date on the horizon predates the actual start date of `r`. If False, just return `r` in this situation years, months, weeks, days : int, default 0 Relative information; specify as positive to subtract periods. Adding or subtracting a relativedelta with relative information performs the corresponding arithmetic operation on the original datetime value with the information in the relativedelta quarters : int, default 0 Similar to the other plural relative info periods above, but note that this param is custom here. (It is not a standard relativedelta param) year, month, day : int, default None Absolute information; specify as positive to subtract periods. Adding relativedelta with absolute information does not perform an arithmetic operation, but rather REPLACES the corresponding value in the original datetime with the value(s) in relativedelta
null
null
null
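Typical calls (the frame `r` is hypothetical):

# Trailing two-year window ending at the last entry of r's index.
r_2y = constrain_horizon(r, years=2)
# The same horizon via the string shorthand parsed from `cust`.
r_2y = constrain_horizon(r, cust='24m')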
def cumargmax(a): # Thank you @Alex Riley # https://stackoverflow.com/a/40675969/7954504 m = np.asarray(np.maximum.accumulate(a)) if a.ndim == 1: x = np.arange(a.shape[0]) else: x = np.repeat(np.arange(a.shape[0])[:, None], a.shape[1], axis=1) x[1:] *= m[:-1] < m[1:] np.maximum.accumulate(x, axis=0, out=x) return x
Cumulative argmax. Parameters ---------- a : np.ndarray Returns ------- np.ndarray
null
null
null
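A quick check of the behavior on a small array (ties keep the index of the first occurrence of the running maximum):

import numpy as np

a = np.array([1, 3, 2, 5, 4, 5])
print(cumargmax(a))  # [0 1 1 3 3 3]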
def dropcols(df, start=None, end=None): if isinstance(df, Series): raise ValueError("func only applies to `pd.DataFrame`") if start is None: start = df.index[0] if end is None: end = df.index[-1] subset = df.index[(df.index >= start) & (df.index <= end)] return df.dropna(axis=1, subset=subset)
Drop columns that contain NaN within [start, end] inclusive. A wrapper around DataFrame.dropna() that builds an easier *subset* syntax for tseries-indexed DataFrames. Parameters ---------- df : DataFrame start : str or datetime, default None start cutoff date, inclusive end : str or datetime, default None end cutoff date, inclusive Example ------- df = DataFrame(np.random.randn(10,3), index=pd.date_range('2017', periods=10)) # Drop in some NaN df.set_value('2017-01-04', 0, np.nan) df.set_value('2017-01-02', 2, np.nan) df.loc['2017-01-05':, 1] = np.nan # only col2 will be kept--its NaN value falls before `start` print(dropcols(df, start='2017-01-03')) 2 2017-01-01 0.12939 2017-01-02 NaN 2017-01-03 0.16596 2017-01-04 1.06442 2017-01-05 -1.87040 2017-01-06 -0.17160 2017-01-07 0.94588 2017-01-08 1.49246 2017-01-09 0.02042 2017-01-10 0.75094
null
null
null
def dropout(a, p=0.5, inplace=False): dt = a.dtype if p == 0.5: # Can't pass float dtype to `randint` directly. rand = np.random.randint(0, high=2, size=a.shape).astype(dtype=dt) else: rand = np.random.choice([0, 1], p=[p, 1 - p], size=a.shape).astype(dt) if inplace: a *= rand else: return a * rand
Randomly set elements from `a` equal to zero, with proportion `p`. Similar in concept to the dropout technique employed within neural networks. Parameters ---------- a: numpy.ndarray Array to be modified. p: float in [0, 1] Expected proportion of elements in the result that will equal 0. inplace: bool Example ------- >>> x = np.arange(10, 20, 2, dtype=np.uint8) >>> z = dropout(x, p=0.6) >>> z array([10, 12, 0, 0, 0], dtype=uint8) >>> x.dtype == z.dtype True
null
null
null
def _uniquewords(*args): words = {} n = 0 for word in itertools.chain(*args): if word not in words: words[word] = n n += 1 return words
Dictionary of words to their indices. Helper function to `encode`.
null
null
null
def encode(*args): args = [arg.split() for arg in args] unique = _uniquewords(*args) feature_vectors = np.zeros((len(args), len(unique))) for vec, s in zip(feature_vectors, args): for word in s: vec[unique[word]] = 1 return feature_vectors
One-hot encode the given input strings.
null
null
null
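For example, with columns ordered the/cat/sat/dog per the first-seen ordering in `_uniquewords`:

>>> encode('the cat sat', 'the dog sat')
array([[1., 1., 1., 0.],
       [1., 0., 1., 1.]])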
def expanding_stdize(obj, **kwargs): return (obj - obj.expanding(**kwargs).mean()) / ( obj.expanding(**kwargs).std() )
Standardize a pandas object column-wise on expanding window. **kwargs -> passed to `obj.expanding` Example ------- df = pd.DataFrame(np.random.randn(10, 3)) print(expanding_stdize(df, min_periods=5)) 0 1 2 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 NaN NaN NaN 4 0.67639 -1.03507 0.96610 5 0.95008 -0.26067 0.27761 6 1.67793 -0.50816 0.19293 7 1.50364 -1.10035 -0.87859 8 -0.64949 0.08028 -0.51354 9 0.15280 -0.73283 -0.84907
null
null
null
def get_anlz_factor(freq): # 'Q-NOV' would give us (2001, 1); we just want (2000, 1). try: base, mult = get_freq_code(freq) except ValueError: # The above will fail for a bunch of irregular frequencies, such # as 'Q-NOV' or 'BQS-APR' freq = freq.upper() if freq.startswith(("A-", "BA-", "AS-", "BAS-")): freq = "A" elif freq.startswith(("Q-", "BQ-", "QS-", "BQS-")): freq = "Q" elif freq in {"MS", "BMS"}: freq = "M" else: raise ValueError("Invalid frequency: %s" % freq) base, mult = get_freq_code(freq) return PERIODS_PER_YEAR[(base // 1000) * 1000] / mult
Find the number of periods per year given a frequency. Parameters ---------- freq : str Any frequency str or anchored offset str recognized by Pandas. Returns ------- float Example ------- >>> get_anlz_factor('D') 252.0 >>> get_anlz_factor('5D') # 5-business-day periods per year 50.4 >>> get_anlz_factor('Q') 4.0 >>> get_anlz_factor('Q-DEC') 4.0 >>> get_anlz_factor('BQS-APR') 4.0
null
null
null
def public_dir(obj, max_underscores=0, type_=None): if max_underscores > 0: cond1 = lambda i: not i.startswith("_" * max_underscores) # noqa else: cond1 = lambda i: True # noqa if type_: if isinstance(type_, str): if type_ in ["callable", "func", "function"]: type_ = Callable elif "callable" in type_ or "func" in type_: type_ = [ i if i not in ["callable", "func", "function"] else Callable for i in type_ ] if isinstance(type_, str): # 'str' --> str (class `type`) type_ = eval(type_) elif not isinstance(type_, type): type_ = [eval(i) if isinstance(i, str) else i for i in type_] # else: we have isinstance(type_, type) cond2 = lambda i: isinstance(getattr(obj, i), type_) # noqa else: cond2 = lambda i: True # noqa return [i for i in dir(obj) if cond1(i) and cond2(i)]
Like `dir()` with additional options for object inspection. Parameters ---------- obj: object Any object that could be passed to `dir()` max_underscores: int, default 0 If > 0, names that begin with underscores repeated n or more times will be excluded. type_: None, sequence, str, or type Filter to objects of these type(s) only. if 'callable' or 'func' is passed, gets mapped to collections.Callable. if the string-version of a type (i.e. 'str') is passed, it gets eval'd to its type. Examples -------- >>> import os >>> # Get all public string constants from os.path public_dir(os.path, max_underscores=1, type_=str) ['curdir', 'defpath', 'devnull', 'extsep', 'pardir', 'pathsep', 'sep'] >>> # Get integer constants public_dir(os.path, max_underscores=1, type_='int') ['supports_unicode_filenames']
null
null
null
def random_tickers( length, n_tickers, endswith=None, letters=None, slicer=itertools.islice ): # The trick here is that we need uniqueness. That defeats the # purpose of using NumPy because we need to generate 1x1. # (Although the alternative is just to generate a "large # enough" duplicated sequence and prune from it.) if letters is None: letters = string.ascii_uppercase if endswith: # Only generate substrings up to `endswith` length = length - len(endswith) join = "".join def yield_ticker(rand=random.choices): if endswith: while True: yield join(rand(letters, k=length)) + endswith else: while True: yield join(rand(letters, k=length)) tickers = slicer(unique_everseen(yield_ticker()), n_tickers) return list(tickers)
Generate a length-n_tickers list of unique random ticker symbols. Parameters ---------- length : int The length of each ticker string. n_tickers : int Number of tickers to generate. endswith : str, default None Specify the ending element(s) of each ticker (for example, 'X'). letters : sequence, default None Sequence of possible letters to choose from. If None, defaults to `string.ascii_uppercase`. Returns ------- list of str Examples -------- >>> from pyfinance import utils >>> utils.random_tickers(length=5, n_tickers=4, endswith='X') ['UZTFX', 'ROYAX', 'ZBVIX', 'IUWYX'] >>> utils.random_tickers(3, 8) ['SBW', 'GDF', 'FOG', 'PWO', 'QDH', 'MJJ', 'YZD', 'QST']
null
null
null
def random_weights(size, sumto=1.0): w = np.random.random(size) if w.ndim == 2: if isinstance(sumto, (np.ndarray, list, tuple)): sumto = np.asarray(sumto)[:, None] w = sumto * w / w.sum(axis=-1)[:, None] elif w.ndim == 1: w = sumto * w / w.sum() else: raise ValueError("`w.ndim` must be 1 or 2, not %s" % w.ndim) return w
Generate an array of random weights that sum to `sumto`. The result may be of arbitrary dimensions. `size` is passed to the `size` parameter of `np.random.random`, which acts as a shape parameter in this case. Note that `sumto` is subject to typical Python floating point limitations. This function does not implement a softmax check. Parameters ---------- size: int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. sumto: float, default 1. Each vector of weights should sum to this in decimal terms. Returns ------- np.ndarray
null
null
null
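Usage sketch (draws are random; only the sums are deterministic, up to float precision):

>>> w = random_weights(5)
>>> w.sum()                                      # ~1.0
>>> w2 = random_weights((3, 4), sumto=[1., 2., 3.])
>>> w2.sum(axis=1)                               # ~array([1., 2., 3.])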
def rolling_windows(a, window): if window > a.shape[0]: raise ValueError( "Specified `window` length of {0} exceeds length of" " `a`, {1}.".format(window, a.shape[0]) ) if isinstance(a, (Series, DataFrame)): a = a.values if a.ndim == 1: a = a.reshape(-1, 1) shape = (a.shape[0] - window + 1, window) + a.shape[1:] strides = (a.strides[0],) + a.strides windows = np.squeeze( np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) ) # In cases where window == len(a), we actually want to "unsqueeze" to 2d. # I.e., we still want a "windowed" structure with 1 window. if windows.ndim == 1: windows = np.atleast_2d(windows) return windows
Creates rolling-window 'blocks' of length `window` from `a`. Note that the orientation of rows/columns follows that of pandas. Example ------- import numpy as np onedim = np.arange(20) twodim = onedim.reshape((5,4)) print(twodim) [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15] [16 17 18 19]] print(rolling_windows(onedim, 3)[:5]) [[0 1 2] [1 2 3] [2 3 4] [3 4 5] [4 5 6]] print(rolling_windows(twodim, 3)[:5]) [[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]] [[ 8 9 10 11] [12 13 14 15] [16 17 18 19]]]
null
null
null
def unique_everseen(iterable, filterfalse_=itertools.filterfalse): # Itertools recipes: # https://docs.python.org/3/library/itertools.html#itertools-recipes seen = set() seen_add = seen.add for element in filterfalse_(seen.__contains__, iterable): seen_add(element) yield element
Unique elements, preserving order.
null
null
null
def uniqify(seq): if PY37: # Credit: Raymond Hettinger return list(dict.fromkeys(seq)) else: # Credit: Dave Kirby # https://www.peterbe.com/plog/uniqifiers-benchmark seen = set() # We don't care about truth value of `not seen.add(x)`; # just there to add to `seen` inplace. return [x for x in seq if x not in seen and not seen.add(x)]
`Uniqify` a sequence, preserving order. A plain-vanilla version of itertools' `unique_everseen`. Example ------- >>> s = list('zyxabccabxyz') >>> uniqify(s) ['z', 'y', 'x', 'a', 'b', 'c'] Returns ------- list
null
null
null
return super(GetAppListJsonView, self).dispatch(*args, **kwargs)
def dispatch(self, *args, **kwargs)
Only staff members can access this view
9.445714
8.375665
1.127757
self.app_list = site.get_app_list(request) self.apps_dict = self.create_app_list_dict() # no menu provided items = get_config('MENU') if not items: voices = self.get_default_voices() else: voices = [] for item in items: self.add_voice(voices, item) return JsonResponse(voices, safe=False)
def get(self, request)
Returns JSON representing the menu voices in a format consumed by the js menu. ImproperlyConfigured exceptions raised here can be viewed in the browser console
5.74608
4.849069
1.184986
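A hypothetical `MENU` setting this view could consume; the keys mirror those read by `get_title_voice`, `get_app_voice`, and `get_free_voice` below:

MENU = [
    {'type': 'title', 'label': 'Administration', 'icon': 'fa-cogs'},
    {'type': 'app', 'name': 'auth', 'label': 'Users & Groups',
     'models': [{'name': 'user', 'label': 'Users'}]},
    {'type': 'free', 'label': 'Documentation', 'url': '/docs/'},
]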
voice = None if item.get('type') == 'title': voice = self.get_title_voice(item) elif item.get('type') == 'app': voice = self.get_app_voice(item) elif item.get('type') == 'model': voice = self.get_app_model_voice(item) elif item.get('type') == 'free': voice = self.get_free_voice(item) if voice: voices.append(voice)
def add_voice(self, voices, item)
Adds a voice to the list
2.047537
2.026294
1.010483
view = True if item.get('perms', None): view = self.check_user_permission(item.get('perms', [])) elif item.get('apps', None): view = self.check_apps_permission(item.get('apps', [])) if view: return { 'type': 'title', 'label': item.get('label', ''), 'icon': item.get('icon', None) } return None
def get_title_voice(self, item)
Title voice Returns the js menu compatible voice dict if the user can see it, None otherwise
3.11827
3.021215
1.032124
view = True if item.get('perms', None): view = self.check_user_permission(item.get('perms', [])) elif item.get('apps', None): view = self.check_apps_permission(item.get('apps', [])) if view: return { 'type': 'free', 'label': item.get('label', ''), 'icon': item.get('icon', None), 'url': item.get('url', None) } return None
def get_free_voice(self, item)
Free voice Returns the js menu compatible voice dict if the user can see it, None otherwise
2.784549
2.764332
1.007314
if item.get('name', None) is None: raise ImproperlyConfigured('App menu voices must have a name key') if self.check_apps_permission([item.get('name', None)]): children = [] if item.get('models', None) is None: for name, model in self.apps_dict[item.get('name')]['models'].items(): # noqa children.append({ 'type': 'model', 'label': model.get('name', ''), 'url': model.get('admin_url', '') }) else: for model_item in item.get('models', []): voice = self.get_model_voice(item.get('name'), model_item) if voice: children.append(voice) return { 'type': 'app', 'label': item.get('label', ''), 'icon': item.get('icon', None), 'children': children } return None
def get_app_voice(self, item)
App voice Returns the js menu compatible voice dict if the user can see it, None otherwise
2.912383
2.776037
1.049115
if app_model_item.get('name', None) is None: raise ImproperlyConfigured('Model menu voices must have a name key') # noqa if app_model_item.get('app', None) is None: raise ImproperlyConfigured('Model menu voices must have an app key') # noqa return self.get_model_voice(app_model_item.get('app'), app_model_item)
def get_app_model_voice(self, app_model_item)
App Model voice Returns the js menu compatible voice dict if the user can see it, None otherwise
3.038272
2.99329
1.015028
if model_item.get('name', None) is None: raise ImproperlyConfigured('Model menu voices must have a name key') # noqa if self.check_model_permission(app, model_item.get('name', None)): return { 'type': 'model', 'label': model_item.get('label', ''), 'icon': model_item.get('icon', None), 'url': self.apps_dict[app]['models'][model_item.get('name')]['admin_url'], # noqa } return None
def get_model_voice(self, app, model_item)
Model voice Returns the js menu compatible voice dict if the user can see it, None otherwise
3.67502
3.393629
1.082917
d = {} for app in self.app_list: models = {} for model in app.get('models', []): models[model.get('object_name').lower()] = model d[app.get('app_label').lower()] = { 'app_url': app.get('app_url', ''), 'app_label': app.get('app_label'), 'models': models } return d
def create_app_list_dict(self)
Creates a dictionary from the app_list obtained from django admin that is more efficient to check
2.447576
2.179962
1.12276
for app in apps: if app in self.apps_dict: return True return False
def check_apps_permission(self, apps)
Checks whether any of `apps` is listed in apps_dict. Since apps_dict is derived from the app_list given by django admin, it lists only the apps the user can view
5.049221
3.925662
1.286209
if self.apps_dict.get(app, False) and model in self.apps_dict[app]['models']: return True return False
def check_model_permission(self, app, model)
Checks whether `model` is listed in apps_dict. Since apps_dict is derived from the app_list given by django admin, it lists only the apps and models the user can view
4.437167
3.83407
1.157299
voices = [] for app in self.app_list: children = [] for model in app.get('models', []): child = { 'type': 'model', 'label': model.get('name', ''), 'url': model.get('admin_url', '') } children.append(child) voice = { 'type': 'app', 'label': app.get('name', ''), 'url': app.get('app_url', ''), 'children': children } voices.append(voice) return voices
def get_default_voices(self)
When no custom menu is defined in settings, retrieves a js-menu-ready dict from the django admin app list
2.517753
2.168968
1.160807
if not index.isValid(): return dataFrame = self.model().dataFrame() # get all infos from dataFrame dfindex = dataFrame.iloc[[index.row()]].index columnName = dataFrame.columns[index.column()] dtype = dataFrame[columnName].dtype value = dataFrame[columnName][dfindex] # create the mime data mimePayload = PandasCellPayload( dfindex, columnName, value, dtype, hex(id(self.model())) ) mimeData = MimeData() mimeData.setData(mimePayload) # create the drag icon and start drag operation drag = QtGui.QDrag(self) drag.setMimeData(mimeData) pixmap = QtGui.QPixmap(":/icons/insert-table.png") drag.setHotSpot(QtCore.QPoint(pixmap.width() // 3, pixmap.height() // 3)) drag.setPixmap(pixmap) drag.exec_(Qt.MoveAction)
def startDrag(self, index)
Start a drag operation with a PandasCellPayload at the given index. Args: index (QModelIndex): model index at which to start the drag operation.
3.271029
3.050652
1.072239
for button in self.buttons[1:]: button.setEnabled(enabled) if button.isChecked(): button.setChecked(False) model = self.tableView.model() if model is not None: model.enableEditing(enabled)
def enableEditing(self, enabled)
Enable the editing buttons to add/remove rows/columns and to edit the data. This method is also a slot. In addition, the model's data will be made editable if the `enabled` parameter is True. Args: enabled (bool): This flag indicates whether the buttons shall be activated.
3.676249
4.351478
0.844828
for button in self.buttons: # suppress editButtons toggled event button.blockSignals(True) if button.isChecked(): button.setChecked(False) button.blockSignals(False)
def uncheckButton(self)
Removes the checked state of all buttons in this widget. This method is also a slot.
5.373014
5.435134
0.988571
model = self.tableView.model() if model is not None: model.addDataFrameColumn(columnName, dtype, defaultValue) self.addColumnButton.setChecked(False)
def addColumn(self, columnName, dtype, defaultValue)
Adds a column with the given parameters to the underlying model This method is also a slot. If no model is set, nothing happens. Args: columnName (str): The name of the new column. dtype (numpy.dtype): The datatype of the new column. defaultValue (object): Fill the column with this value.
5.236551
6.77341
0.773104
if triggered: dialog = AddAttributesDialog(self) dialog.accepted.connect(self.addColumn) dialog.rejected.connect(self.uncheckButton) dialog.show()
def showAddColumnDialog(self, triggered)
Display the dialog to add a column to the model. This method is also a slot. Args: triggered (bool): If the corresponding button was activated, the dialog will be created and shown.
4.507356
5.496082
0.820103
if triggered: model = self.tableView.model() model.addDataFrameRows() self.sender().setChecked(False)
def addRow(self, triggered)
Adds a row to the model. This method is also a slot. Args: triggered (bool): If the corresponding button was activated, the row will be appended to the end.
10.053901
11.573091
0.868731
if triggered: model = self.tableView.model() selection = self.tableView.selectedIndexes() rows = [index.row() for index in selection] model.removeDataFrameRows(set(rows)) self.sender().setChecked(False)
def removeRow(self, triggered)
Removes a row from the model. This method is also a slot. Args: triggered (bool): If the corresponding button was activated, the selected row will be removed from the model.
4.598934
5.269763
0.872702
model = self.tableView.model() if model is not None: model.removeDataFrameColumns(columnNames) self.removeColumnButton.setChecked(False)
def removeColumns(self, columnNames)
Removes one or multiple columns from the model. This method is also a slot. Args: columnNames (list): A list of columns, which shall be removed from the model.
5.855736
7.779119
0.752751
if triggered: model = self.tableView.model() if model is not None: columns = model.dataFrameColumns() dialog = RemoveAttributesDialog(columns, self) dialog.accepted.connect(self.removeColumns) dialog.rejected.connect(self.uncheckButton) dialog.show()
def showRemoveColumnDialog(self, triggered)
Display the dialog to remove column(s) from the model. This method is also a slot. Args: triggered (bool): If the corresponding button was activated, the dialog will be created and shown.
4.16858
4.443394
0.938152
if isinstance(model, DataFrameModel): self.enableEditing(False) self.uncheckButton() selectionModel = self.tableView.selectionModel() self.tableView.setModel(model) model.dtypeChanged.connect(self.updateDelegate) model.dataChanged.connect(self.updateDelegates) del selectionModel
def setViewModel(self, model)
Sets the model for the enclosed TableView in this widget. Args: model (DataFrameModel): The model to be displayed by the Table View.
5.912446
6.072576
0.973631
for index, column in enumerate(self.tableView.model().dataFrame().columns): dtype = self.tableView.model().dataFrame()[column].dtype self.updateDelegate(index, dtype)
def updateDelegates(self)
reset all delegates
4.873944
4.490877
1.085299
thread = QtCore.QThread(parent) thread.started.connect(worker.doWork) worker.finished.connect(thread.quit) if deleteWorkerLater: thread.finished.connect(worker.deleteLater) worker.moveToThread(thread) worker.setParent(parent) return thread
def createThread(parent, worker, deleteWorkerLater=False)
Create a new thread for given worker. Args: parent (QObject): parent of thread and worker. worker (ProgressWorker): worker to use in thread. deleteWorkerLater (bool, optional): delete the worker if thread finishes. Returns: QThread
2.263613
2.88274
0.78523
# lists, tuples, dicts refer to numpy.object types and # return a 'text' description - working as intended or bug? try: value = np.dtype(value) except TypeError as e: return None for (dtype, string) in self._all: if dtype == value: return string # no match found return given value return None
def description(self, value)
Fetches the translated description for the given datatype. The given value will be converted to a `numpy.dtype` object, matched against the supported datatypes and the description will be translated into the preferred language. (Usually a settings dialog should be available to change the language). If the conversion fails or no match can be found, `None` will be returned. Args: value (type|numpy.dtype): Any object or type. Returns: str: The translated description of the datatype None: If no match could be found or an error occurred during conversion.
14.750491
13.184709
1.118757
for (dtype, string) in self._all: if string == value: return dtype return None
def dtype(self, value)
Gets the datatype for the given `value` (description). Args: value (str): A text description for any datatype. Returns: numpy.dtype: The matching datatype for the given text. None: If no match can be found, `None` will be returned.
9.28178
9.096848
1.020329
if column.dtype == object: column.fillna('', inplace=True) return column
def fillNoneValues(column)
Fill all NaN/NaT values of a column with an empty string Args: column (pandas.Series): A Series object with all rows. Returns: column: Series with filled NaN values.
4.559227
6.353499
0.717593
tempColumn = column try: # Try to convert the first row and a random row instead of the complete # column, might be faster # tempValue = np.datetime64(column[0]) tempValue = np.datetime64(column[randint(0, len(column.index) - 1)]) tempColumn = column.apply(to_datetime) except Exception: pass return tempColumn
def convertTimestamps(column)
Convert a dtype of a given column to a datetime. This method tries to do this by brute force. Args: column (pandas.Series): A Series object with all rows. Returns: column: Converted to datetime if no errors occurred, else the original column will be returned.
6.149848
6.101107
1.007989
if isinstance(filepath, pd.DataFrame): return filepath assert isinstance(first_codec, str), "first_codec must be a string" codecs = ['UTF_8', 'ISO-8859-1', 'ASCII', 'UTF_16', 'UTF_32'] try: codecs.remove(first_codec) except ValueError as not_in_list: pass codecs.insert(0, first_codec) errors = [] for c in codecs: try: return pd.read_csv(filepath, usecols=usecols, low_memory=low_memory, encoding=c, dtype=dtype, parse_dates=parse_dates, sep=sep, chunksize=chunksize, **kwargs) # Need to catch `UnicodeError` here, not just `UnicodeDecodeError`, # because pandas 0.23.1 raises it when decoding with UTF_16 and the # file is not in that format: except (UnicodeError, UnboundLocalError) as e: errors.append(e) except Exception as e: errors.append(e) if 'tokenizing' in str(e): pass else: raise if verbose: [print(e) for e in errors] # UnicodeDecodeError requires 5 positional args; raise the plain-message # UnicodeError instead. raise UnicodeError("Tried {} codecs and failed on all: \n CODECS: {} \n FILENAME: {}".format( len(codecs), codecs, os.path.basename(filepath)))
def superReadCSV(filepath, first_codec='UTF_8', usecols=None, low_memory=False, dtype=None, parse_dates=True, sep=',', chunksize=None, verbose=False, **kwargs)
A wrapper around pandas.read_csv that also accepts a DataFrame or a filepath. A DataFrame is returned untouched; a filepath is read and a DataFrame is returned based on the given arguments.
2.991992
2.996599
0.998462
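Call sketch (the path is hypothetical); UTF_8 is tried first, then the remaining codecs in turn until one parses the file:

df = superReadCSV('data/holdings.csv', first_codec='UTF_8', verbose=True)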
ext = os.path.splitext(filepath)[1].lower() allowed_exts = ['.csv', '.txt', '.tsv'] assert ext in allowed_exts, "Unexpected file extension {}. \ Supported extensions {}\n filename: {}".format( ext, allowed_exts, os.path.basename(filepath)) maybe_seps = ['|', ';', ',', '\t', ':'] with open(filepath, 'r') as fp: header = next(fp) count_seps_header = {sep: _count(sep, header) for sep in maybe_seps} count_seps_header = {sep: count for sep, count in count_seps_header.items() if count > 0} if count_seps_header: return max(count_seps_header, key=count_seps_header.get) else: raise Exception("Couldn't identify the sep from the header... here's the information:\n HEADER: {}\n SEPS SEARCHED: {}".format(header, maybe_seps))
def identify_sep(filepath)
Identifies the separator of data in a filepath. It reads the first line of the file and counts supported separators. Currently supported separators: ['|', ';', ',','\t',':']
3.88712
3.61266
1.075972
if isinstance(filepath, pd.DataFrame): return filepath sep = kwargs.get('sep', None) ext = os.path.splitext(filepath)[1].lower() if sep is None: if ext == '.tsv': kwargs['sep'] = '\t' elif ext == '.csv': kwargs['sep'] = ',' else: found_sep = identify_sep(filepath) print(found_sep) kwargs['sep'] = found_sep return superReadCSV(filepath, **kwargs)
def superReadText(filepath, **kwargs)
A wrapper to superReadCSV which wraps pandas.read_csv(). The benefit of using this function is that it automatically identifies the column separator. .tsv files are assumed to have a \t (tab) separator; .csv files are assumed to have a comma separator; .txt (or any other type) files get their first line read and tested against the separators defined in the identify_sep function.
3.073235
2.291147
1.341352
if isinstance(filepath, pd.DataFrame): return filepath ext = os.path.splitext(filepath)[1].lower() if ext in ['.xlsx', '.xls']: df = pd.read_excel(filepath, **kwargs) elif ext in ['.pkl', '.p', '.pickle', '.pk']: df = pd.read_pickle(filepath) else: # Assume it's a text-like file and try to read it. try: df = superReadText(filepath, **kwargs) except Exception as e: # TODO: Make this trace back better? Custom Exception? Raise original? raise Exception("Error reading file: {}".format(e)) return df
def superReadFile(filepath, **kwargs)
Uses pandas.read_excel (on excel files) and returns a dataframe of the first sheet (unless a sheet is specified in kwargs). Uses superReadText (on .txt, .tsv, or .csv files) and returns a dataframe of the data. One function to read almost all types of data files.
3.554359
3.345788
1.062338
cols = list(frame.columns) for i, item in enumerate(frame.columns): if item in frame.columns[:i]: cols[i] = "toDROP" frame.columns = cols return frame.drop("toDROP", axis=1, errors='ignore')
def dedupe_cols(frame)
Dedupe columns that have the same name, keeping the first occurrence.
4.113492
3.855443
1.066931
counts = {} positions = {pos: fld for pos, fld in enumerate(cols)} for c in cols: if c in counts.keys(): counts[c] += 1 else: counts[c] = 1 fixed_cols = {} # Iterate over a snapshot: the values in `positions` are updated in the loop body. for pos, col in list(positions.items()): if counts[col] > 1: fix_cols = {pos: fld for pos, fld in positions.items() if fld == col} min_pos = min(fix_cols) cnt = 1 for p, c in fix_cols.items(): if not p == min_pos: cnt += 1 c = c + str(cnt) fixed_cols.update({p: c}) positions.update(fixed_cols) cols = [x for x in positions.values()] return cols
def rename_dupe_cols(cols)
Takes a list of strings and appends 2,3,4 etc to duplicates. Never appends a 0 or 1. Appended #s are not always in order...but if you wrap this in a dataframe.to_sql function you're guaranteed to not have dupe column name errors importing data to SQL...you'll just have to check yourself to see which fields were renamed.
2.798503
2.883444
0.970542
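For example — the first occurrence keeps its name, and appended counters start at 2:

>>> rename_dupe_cols(['a', 'b', 'a', 'c', 'a'])
['a', 'b', 'a2', 'c', 'a3']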
editor = BigIntSpinbox(parent) try: editor.setMinimum(self.minimum) editor.setMaximum(self.maximum) editor.setSingleStep(self.singleStep) except TypeError as err: # initiate the editor with default values pass return editor
def createEditor(self, parent, option, index)
Returns the widget used to edit the item specified by index for editing. The parent widget and style option are used to control how the editor widget appears. Args: parent (QWidget): parent widget. option (QStyleOptionViewItem): controls how editor widget appears. index (QModelIndex): model data index.
5.304949
7.079028
0.74939
if index.isValid(): value = index.model().data(index, QtCore.Qt.EditRole) spinBox.setValue(value)
def setEditorData(self, spinBox, index)
Sets the data to be displayed and edited by the editor from the data model item specified by the model index. Args: spinBox (BigIntSpinbox): editor widget. index (QModelIndex): model data index.
2.396078
2.883996
0.830819
editor = QtGui.QDoubleSpinBox(parent) try: editor.setMinimum(self.minimum) editor.setMaximum(self.maximum) editor.setSingleStep(self.singleStep) editor.setDecimals(self.decimals) except TypeError as err: # initiate the spinbox with default values. pass return editor
def createEditor(self, parent, option, index)
Returns the widget used to edit the item specified by index for editing. The parent widget and style option are used to control how the editor widget appears. Args: parent (QWidget): parent widget. option (QStyleOptionViewItem): controls how editor widget appears. index (QModelIndex): model data index.
3.106763
3.888836
0.798893
spinBox.interpretText() value = spinBox.value() model.setData(index, value, QtCore.Qt.EditRole)
def setModelData(self, spinBox, model, index)
Gets data from the editor widget and stores it in the specified model at the item index. Args: spinBox (QDoubleSpinBox): editor widget. model (QAbstractItemModel): parent model. index (QModelIndex): model data index.
3.872892
4.385142
0.883185
editor = QtGui.QLineEdit(parent) return editor
def createEditor(self, parent, option, index)
Returns the widget used to edit the item specified by index for editing. The parent widget and style option are used to control how the editor widget appears. Args: parent (QWidget): parent widget. option (QStyleOptionViewItem): controls how editor widget appears. index (QModelIndex): model data index.
6.734537
17.852448
0.377233
if index.isValid(): value = editor.text() model.setData(index, value, QtCore.Qt.EditRole)
def setModelData(self, editor, model, index)
Gets data from the editor widget and stores it in the specified model at the item index. Args: editor (QtGui.QLineEdit): editor widget. model (QAbstractItemModel): parent model. index (QModelIndex): model data index.
3.14307
2.750444
1.14275
combo = QtGui.QComboBox(parent) combo.addItems(SupportedDtypes.names()) combo.currentIndexChanged.connect(self.currentIndexChanged) return combo
def createEditor(self, parent, option, index)
Creates an Editor Widget for the given index. Enables the user to manipulate the displayed data in place. An editor is created, which performs the change. The widget used will be a `QComboBox` with all available datatypes in the `pandas` project. Args: parent (QtCore.QWidget): Defines the parent for the created editor. option (QtGui.QStyleOptionViewItem): contains all the information that QStyle functions need to draw the items. index (QtCore.QModelIndex): The item/index which shall be edited. Returns: QtGui.QWidget: The widget used to edit the item specified by index for editing.
5.295277
5.202297
1.017873
editor.blockSignals(True) data = index.data() dataIndex = editor.findData(data) # dataIndex = editor.findData(data, role=Qt.EditRole) editor.setCurrentIndex(dataIndex) editor.blockSignals(False)
def setEditorData(self, editor, index)
Sets the current data for the editor. The data displayed has the same value as `index.data(Qt.EditRole)` (the translated name of the datatype). Therefore a lookup for all items of the combobox is made and the matching item is set as the currently displayed item. Signals emitted by the editor are blocked during execution of this method. Args: editor (QtGui.QComboBox): The current editor for the item. Should be a `QtGui.QComboBox` as defined in `createEditor`. index (QtCore.QModelIndex): The index of the current item.
3.290253
3.538611
0.929815
model.setData(index, editor.itemText(editor.currentIndex()))
def setModelData(self, editor, model, index)
Updates the model after changing data in the editor. Args: editor (QtGui.QComboBox): The current editor for the item. Should be a `QtGui.QComboBox` as defined in `createEditor`. model (ColumnDtypeModel): The model which holds the displayed data. index (QtCore.QModelIndex): The index of the current item of the model.
7.276262
7.360343
0.988577
try: bytestream = pickle.dumps(data) super(MimeData, self).setData(self._mimeType, bytestream) except TypeError: raise TypeError(self.tr("can not pickle added data")) except: raise
def setData(self, data)
Add some data. Args: data (object): Object to add as data. This object has to be picklable. Qt objects don't work! Raises: TypeError if data is not picklable
7.596085
6.770501
1.121938
try: bytestream = super(MimeData, self).data(self._mimeType).data() return pickle.loads(bytestream) except: raise
def data(self)
return stored data Returns: unpickled data
7.805067
7.450575
1.047579
try: self._saveModel() except Exception as err: self._statusBar.showMessage(str(err)) raise else: self._resetWidgets() self.exported.emit(True) self.accept()
def accepted(self)
Successfully close the widget and emit an export signal. This method is also a `SLOT`. The dialog will be closed, when the `Export Data` button is pressed. If errors occur during the export, the status bar will show the error message and the dialog will not be closed.
7.471069
5.620615
1.329226
self._resetWidgets() self.exported.emit(False) self.reject()
def rejected(self)
Close the widget and reset it to its initial state. This method is also a `SLOT`. The dialog will be closed and all changes reverted when the `cancel` button is pressed.
19.90596
18.24507
1.091032
delimiter = self._delimiterBox.currentSelected() header = self._headerCheckBox.isChecked() # column labels if self._filename is None: filename = self._filenameLineEdit.text() else: filename = self._filename ext = os.path.splitext(filename)[1].lower() index = False # row labels encodingIndex = self._encodingComboBox.currentIndex() encoding = self._encodingComboBox.itemText(encodingIndex) encoding = _calculateEncodingKey(encoding.lower()) try: dataFrame = self._model.dataFrame() except AttributeError as err: raise AttributeError('No data loaded to export.') else: print("Identifying export type for {}".format(filename)) try: if ext in ['.txt','.csv']: dataFrame.to_csv(filename, encoding=encoding, header=header, index=index, sep=delimiter) elif ext == '.tsv': sep = '\t' dataFrame.to_csv(filename, encoding=encoding, header=header, index=index, sep=sep) elif ext in ['.xlsx','.xls']: dataFrame.to_excel(filename, encoding=encoding, header=header, index=index) except IOError as err: raise IOError('No filename given') except UnicodeError as err: raise UnicodeError('Could not encode all data. Choose a different encoding') except Exception: raise self.signalExportFilenames.emit(self._model._filePath, filename)
def _saveModel(self)
Reimplements _saveModel to utilize all of the Pandas export options based on file extension. :return: None
3.716372
3.597037
1.033176
if value >= self.minimum() and value <= self.maximum(): self._lineEdit.setText(str(value)) elif value < self.minimum(): self._lineEdit.setText(str(self.minimum())) elif value > self.maximum(): self._lineEdit.setText(str(self.maximum())) return True
def setValue(self, value)
setter function to _lineEdit.text. Sets minimum/maximum as new value if value is out of bounds. Args: value (int/long): new value to set. Returns True if all went fine.
2.092934
1.909061
1.096316
self.setValue(self.value() + steps*self.singleStep())
def stepBy(self, steps)
Steps value up/down by a single step. Single step is defined in singleStep(). Args: steps (int): positive int steps up, negative steps down
7.335743
5.461839
1.34309
if self.value() > self.minimum() and self.value() < self.maximum(): return self.StepUpEnabled | self.StepDownEnabled elif self.value() <= self.minimum(): return self.StepUpEnabled elif self.value() >= self.maximum(): return self.StepDownEnabled
def stepEnabled(self)
Virtual function that determines whether stepping up and down is legal at any given time. Returns: OR-ed combination of StepUpEnabled | StepDownEnabled
2.617961
2.299881
1.138303
if not isinstance(singleStep, int): raise TypeError("Argument is not of type int") # don't use negative values self._singleStep = abs(singleStep) return self._singleStep
def setSingleStep(self, singleStep)
setter to _singleStep. Converts negative values to positive ones. Args: singleStep (int): new _singleStep value. Converts negative values to positive ones. Raises: TypeError: If the given argument is not an integer. Returns: int or long: the absolute value of the given argument.
5.141188
4.228911
1.215724
if not isinstance(minimum, int): raise TypeError("Argument is not of type int or long") self._minimum = minimum
def setMinimum(self, minimum)
setter to _minimum. Args: minimum (int or long): new _minimum value. Raises: TypeError: If the given argument is not an integer.
5.340186
5.662768
0.943035
if not isinstance(maximum, int): raise TypeError("Argument is not of type int or long") self._maximum = maximum
def setMaximum(self, maximum)
setter to _maximum. Args: maximum (int or long): new _maximum value
5.244268
5.597598
0.936878
df = self._models[filepath].dataFrame() kwargs['index'] = kwargs.get('index', False) if save_as is not None: to_path = save_as else: to_path = filepath ext = os.path.splitext(to_path)[1].lower() if ext == ".xlsx": kwargs.pop('sep', None) df.to_excel(to_path, **kwargs) elif ext in ['.csv','.txt']: df.to_csv(to_path, **kwargs) else: raise NotImplementedError("Cannot save file of type {}".format(ext)) if save_as is not None: if keep_orig is False: # Re-purpose the original model # Todo - capture the DataFrameModelManager._updates too model = self._models.pop(filepath) model._filePath = to_path else: # Create a new model. model = DataFrameModel() model.setDataFrame(df, copyDataFrame=True, filePath=to_path) self._models[to_path] = model
def save_file(self, filepath, save_as=None, keep_orig=False, **kwargs)
Saves a DataFrameModel to a file. :param filepath: (str) The filepath of the DataFrameModel to save. :param save_as: (str, default None) The new filepath to save as. :param keep_orig: (bool, default False) True keeps the original filepath/DataFrameModel if save_as is specified. :param kwargs: pandas.DataFrame.to_excel(**kwargs) if .xlsx pandas.DataFrame.to_csv(**kwargs) otherwise. :return: None
3.818239
3.469741
1.100439
assert isinstance(df_model, DataFrameModel), "df_model argument must be a DataFrameModel!" df_model._filePath = file_path try: self._models[file_path] except KeyError: self.signalNewModelRead.emit(file_path) self._models[file_path] = df_model
def set_model(self, df_model, file_path)
Sets a DataFrameModel and registers it to the given file_path. :param df_model: (DataFrameModel) The DataFrameModel to register. :param file_path: The file path to associate with the DataFrameModel. *Overrides the current filePath on the DataFrameModel (if any) :return: None
4.214875
3.307338
1.274401
assert isinstance(df, pd.DataFrame), "Cannot update file with type '{}'".format(type(df)) self._models[filepath].setDataFrame(df, copyDataFrame=False) if notes: update = dict(date=pd.Timestamp(datetime.datetime.now()), notes=notes) self._updates[filepath].append(update) self._paths_updated.append(filepath)
def update_file(self, filepath, df, notes=None)
Sets a new DataFrame for the DataFrameModel registered to filepath. :param filepath (str) The filepath to the DataFrameModel to be updated :param df (pandas.DataFrame) The new DataFrame to register to the model. :param notes (str, default None) Optional notes to register along with the update.
6.043829
5.932638
1.018742
self._models.pop(filepath)
self._updates.pop(filepath, None)
self.signalModelDestroyed.emit(filepath)
def remove_file(self, filepath)
Removes the DataFrameModel from being registered.

:param filepath: (str)
    The filepath to delete from the DataFrameModelManager.
:return: None
8.896426
7.52484
1.182274
try:
    model = self._models[filepath]
except KeyError:
    model = read_file(filepath, **kwargs)
    self._models[filepath] = model
    self.signalNewModelRead.emit(filepath)
finally:
    self._paths_read.append(filepath)

return self._models[filepath]
def read_file(self, filepath, **kwargs)
Reads a filepath into a DataFrameModel and registers it.

Example use:

    dfmm = DataFrameModelManager()
    dfmm.read_file(path_to_file)
    dfm = dfmm.get_model(path_to_file)
    df = dfm.dataFrame()

:param filepath: (str) The filepath to read
:param kwargs:
    .xlsx files: pandas.read_excel(**kwargs)
    .csv files: pandas.read_csv(**kwargs)
:return: DataFrameModel
3.843225
3.602902
1.066703
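Repeated reads of the same path are served from the registry rather than re-parsed, which the docstring's example doesn't show; a short sketch with a placeholder path:

manager = DataFrameModelManager()
m1 = manager.read_file('data.csv')   # parsed from disk, signalNewModelRead fires
m2 = manager.read_file('data.csv')   # cache hit: the same model object comes back
assert m1 is m2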
separator = '-' * 80
logFile = os.path.join(tempfile.gettempdir(), "error.log")
notice = "An unhandled exception occurred. Please report the problem.\n"
notice += 'A log has been written to "{0}".\n'.format(logFile)
timeString = time.strftime("%Y-%m-%d, %H:%M:%S")

tbinfofile = io.StringIO()
traceback.print_tb(tracebackobj, None, tbinfofile)
tbinfofile.seek(0)
tbinfo = tbinfofile.read()
if python_version < 3:
    # Python 3 has no str().decode()
    tbinfo = tbinfo.decode('utf-8')

try:
    if python_version < 3:
        # Python 3 has no str().decode()
        excValueStr = str(excValue).decode('utf-8')
    else:
        excValueStr = str(excValue)
except UnicodeEncodeError:
    excValueStr = str(excValue)

errmsg = '{0}: \n{1}'.format(excType, excValueStr)
sections = ['\n', separator, timeString, separator, errmsg, separator, tbinfo]

try:
    msg = '\n'.join(sections)
except TypeError:
    # Remove all things not string.
    sections = [item for item in sections if isinstance(item, str)]
    msg = '\n'.join(sections)

try:
    f = codecs.open(logFile, "a+", encoding='utf-8')
    f.write(msg)
    f.close()
except IOError:
    msgbox("unable to write to {0}".format(logFile), "Writing error")

# always show an error message
try:
    if not _isQAppRunning():
        app = QtGui.QApplication([])
    _showMessageBox(str(notice) + str(msg))
except Exception:
    msgbox(str(notice) + str(msg), "Error")
def excepthook(excType, excValue, tracebackobj)
Global function to catch unhandled exceptions.

@param excType exception type
@param excValue exception value
@param tracebackobj traceback object
3.167089
3.240655
0.977299
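To activate the hook, assign it as the process-wide exception handler; a minimal sketch:

import sys

# Every uncaught exception is now logged to <tempdir>/error.log and
# reported via a message box instead of only printing to stderr.
sys.excepthook = excepthook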
return "".join(traceback.format_exception( sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2] ))
def exception_format()
Convert exception info into a string suitable for display.
2.501618
2.190529
1.142015
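Because the function reads sys.exc_info(), it only yields useful output while an exception is being handled; a quick sketch:

try:
    1 / 0
except ZeroDivisionError:
    text = exception_format()
    # text now holds the familiar "Traceback (most recent call last): ..."
    # block ending in "ZeroDivisionError: division by zero".
    print(text)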
output_list = list()
for i, item in enumerate(input_list):
    tempList = input_list[:i] + input_list[i + 1:]
    if item not in tempList:
        output_list.append(item)
    else:
        output_list.append('{0}_{1}'.format(item, i))
return output_list
def uniquify_list_of_strings(input_list)
Ensure that every string within input_list is unique.

:param list input_list: List of strings
:return: New list with unique names as needed.
2.269139
2.385495
0.951224
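A worked example: duplicated entries get an _<position> suffix, and every occurrence of a duplicate is renamed, not just the later ones:

names = ['a', 'b', 'a', 'c', 'b']
print(uniquify_list_of_strings(names))
# -> ['a_0', 'b_1', 'a_2', 'c', 'b_4']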
ret_val = [text, None, None]  # Default return values
if text is None:
    return ret_val

# Single character, remain visible
res = re.search('(?<=\[).(?=\])', text)
if res:
    start = res.start(0)
    end = res.end(0)
    caption = text[:start - 1] + text[start:end] + text[end + 1:]
    ret_val = [caption, text[start:end], start - 1]

# Single character, hide it
res = re.search('(?<=\[\[).(?=\]\])', text)
if res:
    start = res.start(0)
    end = res.end(0)
    caption = text[:start - 2] + text[end + 2:]
    ret_val = [caption, text[start:end], None]

# a Keysym. Always hide it
res = re.search('(?<=\[\<).+(?=\>\])', text)
if res:
    start = res.start(0)
    end = res.end(0)
    caption = text[:start - 2] + text[end + 2:]
    ret_val = [caption, '<{}>'.format(text[start:end]), None]

return ret_val
def parse_hotkey(text)
Extract a desired hotkey from the text.

The format is to enclose the hotkey in square braces, as in Button_[1],
which would assign the keyboard key 1 to that button. The 1 will be
included in the button text.

To hide the key, use double square braces, as in Ex[[q]]it, which would
assign the q key to the Exit button while hiding it from the caption.

Special keys such as <Enter> may also be used: Move [<left>]
For a full list of special keys, see this reference:
http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/key-names.html

:param text: The caption text to parse.
:return: list containing cleaned text, hotkey, and hotkey position
    within cleaned text.
2.557001
2.392947
1.068557
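The three supported formats, traced through the function above (the position refers to the cleaned caption):

print(parse_hotkey('Button_[1]'))     # -> ['Button_1', '1', 7]        visible key
print(parse_hotkey('Ex[[q]]it'))      # -> ['Exit', 'q', None]         hidden key
print(parse_hotkey('Move [<left>]'))  # -> ['Move ', '<left>', None]   keysym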
if filename is None:
    return None

if not os.path.isfile(filename):
    raise ValueError('Image file {} does not exist.'.format(filename))

tk_image = None
filename = os.path.normpath(filename)
_, ext = os.path.splitext(filename)

try:
    pil_image = PILImage.open(filename)
    tk_image = PILImageTk.PhotoImage(pil_image)
except Exception:
    try:
        # Fallback if PIL isn't available
        tk_image = tk.PhotoImage(file=filename)
    except Exception:
        msg = "Cannot load {}. Check to make sure it is an image file.".format(filename)
        try:
            _ = PILImage
        except NameError:
            msg += "\nPIL library isn't installed. If it isn't installed, only .gif files can be used."
        raise ValueError(msg)
return tk_image
def load_tk_image(filename)
Load in an image file and return as a tk Image.

:param filename: image filename to load
:return: tk Image object
2.940039
2.880591
1.020638
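A minimal usage sketch; 'logo.png' is a placeholder filename. A Tk root window must exist before a PhotoImage can be created, and a reference to the image must be kept alive or tkinter garbage-collects it:

import tkinter as tk

root = tk.Tk()                      # a root window must exist first
img = load_tk_image('logo.png')     # placeholder path
label = tk.Label(root, image=img)
label.image = img                   # keep a reference so it isn't GC'd
label.pack()
root.mainloop()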
if not isinstance(dataFrame, pandas.core.frame.DataFrame):
    raise TypeError('Argument is not of type pandas.core.frame.DataFrame')

self.layoutAboutToBeChanged.emit()
self._dataFrame = dataFrame
self.layoutChanged.emit()
def setDataFrame(self, dataFrame)
setter function to _dataFrame. Holds all data.

Note:
    It's not implemented with python properties to keep Qt conventions.

Raises:
    TypeError: if dataFrame is not of type pandas.core.frame.DataFrame.

Args:
    dataFrame (pandas.core.frame.DataFrame): assign dataFrame to
        _dataFrame. Holds all the data displayed.
2.764775
2.658953
1.039798
if not isinstance(editable, bool):
    raise TypeError('Argument is not of type bool')
self._editable = editable
def setEditable(self, editable)
setter to _editable. Apply changes while changing dtype.

Raises:
    TypeError: if editable is not of type bool.

Args:
    editable (bool): apply changes while changing dtype.
3.945098
4.999915
0.789033
# An index is invalid if a row or column does not exist or extends
# beyond the bounds of self.columnCount() or self.rowCount().
# Therefore a check for col > 1 is unnecessary.
if not index.isValid():
    return None

col = index.column()
columnName = self._dataFrame.columns[index.row()]
columnDtype = self._dataFrame[columnName].dtype

if role == Qt.DisplayRole or role == Qt.EditRole:
    if col == 0:
        if columnName == index.row():
            return index.row()
        return columnName
    elif col == 1:
        return SupportedDtypes.description(columnDtype)
elif role == DTYPE_ROLE:
    if col == 1:
        return columnDtype
    else:
        return None
def data(self, index, role=Qt.DisplayRole)
Retrieve the data stored in the model at the given `index`.

Args:
    index (QtCore.QModelIndex): The model index, which points at a
        data object.
    role (Qt.ItemDataRole, optional): Defaults to `Qt.DisplayRole`.
        You have to use different roles to retrieve different data for
        an `index`. Accepted roles are `Qt.DisplayRole`, `Qt.EditRole`
        and `DTYPE_ROLE`.

Returns:
    None if an invalid index is given, the role is not accepted by the
    model or the column is greater than `1`.
    The column name will be returned if the given column number equals
    `0` and the role is either `Qt.DisplayRole` or `Qt.EditRole`.
    The datatype will be returned if the column number equals `1`. The
    `Qt.DisplayRole` or `Qt.EditRole` return a human readable, translated
    string, whereas the `DTYPE_ROLE` returns the raw data type.
4.782597
4.243036
1.127164
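A hedged sketch of how a view or test might query this model; the model construction is assumed here, and the column layout follows the docstring above (column 0 = name, column 1 = data type):

index = model.index(0, 1)                        # row 0, 'data type' column
readable = model.data(index)                     # translated, human-readable string
raw_dtype = model.data(index, role=DTYPE_ROLE)   # e.g. numpy dtype('int64')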
if role != DTYPE_CHANGE_ROLE or not index.isValid():
    return False
if not self.editable():
    return False

self.layoutAboutToBeChanged.emit()

dtype = SupportedDtypes.dtype(value)
currentDtype = np.dtype(index.data(role=DTYPE_ROLE))

if dtype is not None and dtype != currentDtype:
    columnName = self._dataFrame.columns[index.row()]
    try:
        if dtype == np.dtype('<M8[ns]'):
            if currentDtype in SupportedDtypes.boolTypes():
                raise Exception("Can't convert a boolean value into a datetime value.")
            self._dataFrame[columnName] = self._dataFrame[columnName].apply(pandas.to_datetime)
        else:
            self._dataFrame[columnName] = self._dataFrame[columnName].astype(dtype)
        self.dtypeChanged.emit(index.row(), dtype)
        self.layoutChanged.emit()
        return True
    except Exception:
        message = 'Could not change datatype %s of column %s to datatype %s' % (currentDtype, columnName, dtype)
        self.changeFailed.emit(message, index, dtype)
        raise

return False
def setData(self, index, value, role=DTYPE_CHANGE_ROLE)
Updates the datatype of a column.

The model must be initiated with a dataframe already, since valid
indexes are necessary. The `value` is a translated description of the
data type. The translations can be found at
`qtpandas.translation.DTypeTranslator`.

If a datatype cannot be converted, e.g. datetime to integer, a
`NotImplementedError` will be raised.

Args:
    index (QtCore.QModelIndex): The index of the column to be changed.
    value (str): The description of the new datatype, e.g.
        `positive kleine ganze Zahl (16 Bit)` (German for "positive
        small integer (16 bit)").
    role (Qt.ItemDataRole, optional): The role, which accesses and
        changes data. Defaults to `DTYPE_CHANGE_ROLE`.

Raises:
    NotImplementedError: If an error during conversion occurred.

Returns:
    bool: `True` if the datatype could be changed, `False` if not or if
    the new datatype equals the old one.
3.425548
3.434344
0.997439
if not index.isValid():
    return Qt.NoItemFlags

col = index.column()
flags = Qt.ItemIsEnabled | Qt.ItemIsSelectable
if col > 0 and self.editable():
    flags = Qt.ItemIsSelectable | Qt.ItemIsEnabled | Qt.ItemIsEditable
return flags
def flags(self, index)
Returns the item flags for the given index as an OR'ed value, e.g.:
Qt.ItemIsUserCheckable | Qt.ItemIsEditable

Args:
    index (QtCore.QModelIndex): Index to define column and row

Returns:
    for column 'column': Qt.ItemIsSelectable | Qt.ItemIsEnabled
    for column 'data type': Qt.ItemIsSelectable | Qt.ItemIsEnabled |
        Qt.ItemIsEditable
2.967424
2.893338
1.025606
# There should be a grammar defined and some lexer/parser
# instead of this quick-and-dirty implementation.
safeEnvDict = {
    'freeSearch': self.freeSearch,
    'extentSearch': self.extentSearch,
    'indexSearch': self.indexSearch
}
for col in self._dataFrame.columns:
    safeEnvDict[col] = self._dataFrame[col]

try:
    searchIndex = eval(self._filterString, {'__builtins__': None}, safeEnvDict)
except NameError:
    return [], False
except SyntaxError:
    return [], False
except ValueError:
    # The use of 'and'/'or' is not valid; binary operators (&, |) must be used.
    return [], False
except TypeError:
    # Argument must be a string or a compiled pattern.
    return [], False
return searchIndex, True
def search(self)
Applies the filter to the stored dataframe.

A safe environment dictionary will be created, which stores all allowed
functions and attributes, which may be used for the filter.
If any object in the given `filterString` could not be found in the
dictionary, the filter does not apply and returns `False`.

Returns:
    tuple: A (indexes, success)-tuple, which indicates identified
        objects by applying the filter and if the operation was
        successful in general.
7.688715
6.467187
1.188881
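A sketch of driving the eval-based filter, under the assumption that the filter string is stored on the model as _filterString (as the body above reads it); the eval environment exposes exactly the three search helpers plus one variable per DataFrame column:

model._filterString = 'freeSearch("berlin") & indexSearch([0, 1, 2])'
indexes, ok = model.search()
if ok:
    filtered = model._dataFrame[indexes]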
if not self._dataFrame.empty:
    # Set question to the indexes of data and set everything to False.
    question = self._dataFrame.index == -9999
    for column in self._dataFrame.columns:
        dfColumn = self._dataFrame[column]
        dfColumn = dfColumn.apply(str)
        question2 = dfColumn.str.contains(searchString,
                                          flags=re.IGNORECASE,
                                          regex=True,
                                          na=False)
        question = np.logical_or(question, question2)
    return question
else:
    return []
def freeSearch(self, searchString)
Execute a free text search for all columns in the dataframe.

Parameters
----------
searchString (str): Any string which may be contained in a column.

Returns
-------
list: A list containing all indexes with filtered data. Matches will
    be `True`, the remaining items will be `False`. If the dataFrame
    is empty, an empty list will be returned.
4.490747
4.095984
1.096378
if not self._dataFrame.empty:
    try:
        questionMin = (self._dataFrame.lat >= xmin) & \
                      (self._dataFrame.lng >= ymin)
        questionMax = (self._dataFrame.lat <= xmax) & \
                      (self._dataFrame.lng <= ymax)
        return np.logical_and(questionMin, questionMax)
    except AttributeError:
        return []
else:
    return []
def extentSearch(self, xmin, ymin, xmax, ymax)
Filters the data by a geographical bounding box.

The bounding box is given as lower left point coordinates and upper
right point coordinates.

Note:
    It's necessary that the dataframe has a `lat` and `lng` column
    in order to apply the filter.

    Check if the method could be removed in the future (could be done
    via freeSearch).

Returns
-------
list: A list containing all indexes with filtered data. Matches will
    be `True`, the remaining items will be `False`. If the dataFrame
    is empty, an empty list will be returned.
3.237372
2.79717
1.157374
if not self._dataFrame.empty:
    filter0 = self._dataFrame.index == -9999
    for index in indexes:
        filter1 = self._dataFrame.index == index
        filter0 = np.logical_or(filter0, filter1)
    return filter0
else:
    return []
def indexSearch(self, indexes)
Filters the data by a list of indexes.

Args:
    indexes (list of int): List of index numbers to return.

Returns:
    list: A list containing all indexes with filtered data. Matches
        will be `True`, the remaining items will be `False`. If the
        dataFrame is empty, an empty list will be returned.
3.790253
3.304156
1.147117
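The three filters return boolean masks aligned to the DataFrame index (or [] for an empty frame), so their results can be combined with NumPy's logical functions. A sketch; the coordinates are placeholders and extentSearch requires lat/lng columns:

import numpy as np

mask = np.logical_and(model.freeSearch('berlin'),
                      model.extentSearch(47.0, 5.0, 55.0, 15.0))
subset = model._dataFrame[mask]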