code (string) | signature (string) | docstring (string) | loss_without_docstring (float64) | loss_with_docstring (float64) | factor (float64)
---|---|---|---|---|---
seed = None
result_lambda = identity
if len(args) == 1:
func = args[0]
elif len(args) == 2:
seed = args[0]
func = args[1]
elif len(args) == 3:
seed = args[0]
func = args[1]
result_lambda = args[2]
else:
raise ValueError('aggregate takes 1-3 arguments, {0} were given'.format(len(args)))
if len(args) == 1:
return result_lambda(self.drop(1).fold_left(self.first(), func))
else:
return result_lambda(self.fold_left(seed, func)) | def aggregate(self, *args) | Aggregates the sequence by specified arguments. Its behavior varies depending on whether one,
two, or three arguments are passed. Assuming the type of the sequence is A:
One Argument: the argument specifies a function of the type f(current: B, next: A) => result: B.
current represents results computed so far, and next is the next element to aggregate into
current in order to return result.
Two Argument: the first argument is the seed value for the aggregation. The second argument
is the same as for the one argument case.
Three Argument: the first two arguments are the same as for one and two argument calls. The
additional third parameter is a function applied to the result of the aggregation before
returning the value.
:param args: options for how to execute the aggregation
:return: aggregated value | 2.422601 | 2.305687 | 1.050707 |
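Since the doctests elsewhere in this table reference functional.pipeline.Sequence, here is a usage sketch of the three calling conventions assuming the PyFunctional `seq` entry point (the import path is an assumption):

```python
from functional import seq  # assumed PyFunctional entry point

# One argument: the first element becomes the implicit seed.
seq(1, 2, 3, 4).aggregate(lambda current, nxt: current + nxt)       # 10

# Two arguments: explicit seed, then the same folding function.
seq(1, 2, 3, 4).aggregate(10, lambda current, nxt: current + nxt)   # 20

# Three arguments: seed, folding function, and a final mapping over the result.
seq('a', 'b', 'c').aggregate('', lambda current, nxt: current + nxt,
                             lambda result: result.upper())         # 'ABC'
```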
result = zero_value
for element in self:
result = func(result, element)
return _wrap(result) | def fold_left(self, zero_value, func) | Assuming that the sequence elements are of type A, folds from left to right starting with
the seed value given by zero_value (of type B) using a function of type
func(current: B, next: A) => B. current represents the folded value so far and next is the
next element from the sequence to fold into current.
>>> seq('a', 'b', 'c').fold_left(['start'], lambda current, next: current + [next])
['start', 'a', 'b', 'c']
:param zero_value: zero value to reduce into
:param func: Two parameter function as described by function docs
:return: value from folding values with func into zero_value from left to right. | 4.157572 | 6.06688 | 0.68529 |
result = zero_value
for element in self.reverse():
result = func(element, result)
return _wrap(result) | def fold_right(self, zero_value, func) | Assuming that the sequence elements are of type A, folds from right to left starting with
the seed value given by zero_value (of type B) using a function of type
func(next: A, current: B) => B. current represents the folded value so far and next is the
next element from the sequence to fold into current.
>>> seq('a', 'b', 'c').fold_right(['start'], lambda next, current: current + [next])
['start', 'c', 'b', 'a']
:param zero_value: zero value to reduce into
:param func: Two parameter function as described by function docs
:return: value from folding values with func into zero_value from right to left | 4.9834 | 6.862059 | 0.726225 |
return self._transform(transformations.join_t(other, join_type)) | def join(self, other, join_type="inner") | Sequence and other must be composed of (Key, Value) pairs. If self.sequence contains (K, V)
pairs and other contains (K, W) pairs, the return result is a sequence of (K, (V, W)) pairs.
If join_type is "left", V values will always be present, W values may be present or None.
If join_type is "right", W values will always be present, W values may be present or None.
If join_type is "outer", V or W may be present or None,
but never at the same time.
>>> seq([('a', 1), ('b', 2), ('c', 3)]).join([('a', 2), ('c', 5)], "inner")
[('a', (1, 2)), ('c', (3, 5))]
>>> seq([('a', 1), ('b', 2), ('c', 3)]).join([('a', 2), ('c', 5)])
[('a', (1, 2)), ('c', (3, 5))]
>>> seq([('a', 1), ('b', 2)]).join([('a', 3), ('c', 4)], "left")
[('a', (1, 3)), ('b', (2, None))]
>>> seq([('a', 1), ('b', 2)]).join([('a', 3), ('c', 4)], "right")
[('a', (1, 3)), ('c', (None, 4))]
>>> seq([('a', 1), ('b', 2)]).join([('a', 3), ('c', 4)], "outer")
[('a', (1, 3)), ('b', (2, None)), ('c', (None, 4))]
:param other: sequence to join with
:param join_type: specifies the join type, which may be "inner", "left", "right", or "outer"
:return: side joined sequence of (K, (V, W)) pairs | 13.386565 | 27.007122 | 0.495668 |
return self._transform(transformations.sliding_t(_wrap, size, step)) | def sliding(self, size, step=1) | Groups elements in fixed size blocks by passing a sliding window over them.
The last window has at least one element but may have fewer than size elements
:param size: size of sliding window
:param step: step size between windows
:return: sequence of sliding windows | 23.199642 | 37.212627 | 0.623435 |
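To make the size/step interaction concrete, a sketch assuming the PyFunctional `seq` entry point; the commented output shows the window contents as they print:

```python
from functional import seq  # assumed PyFunctional entry point

seq([1, 2, 3, 4, 5]).sliding(3).to_list()
# [[1, 2, 3], [2, 3, 4], [3, 4, 5]]

seq([1, 2, 3, 4, 5]).sliding(2, step=2).to_list()
# [[1, 2], [3, 4], [5]]  <- the last window keeps its single leftover element
```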
return self._transform(transformations.sorted_t(key=key, reverse=reverse)) | def sorted(self, key=None, reverse=False) | Uses Python's built-in sort and the passed arguments to sort the input.
>>> seq([2, 1, 4, 3]).sorted()
[1, 2, 3, 4]
:param key: sort using key function
:param reverse: return list reversed or not
:return: sorted sequence | 10.662026 | 18.46723 | 0.577348 |
return self._transform(transformations.slice_t(start, until)) | def slice(self, start, until) | Takes a slice of the sequence starting at start, up to but not including until.
>>> seq([1, 2, 3, 4]).slice(1, 2)
[2]
>>> seq([1, 2, 3, 4]).slice(1, 3)
[2, 3]
:param start: starting index
:param until: ending index
:return: slice of the sequence from start up to but not including until | 16.876434 | 29.917587 | 0.564097 |
if n is None:
self.cache()
return self._base_sequence
else:
return self.cache().take(n).list() | def to_list(self, n=None) | Converts sequence to list of elements.
>>> type(seq([]).to_list())
list
>>> type(seq([]))
functional.pipeline.Sequence
>>> seq([1, 2, 3]).to_list()
[1, 2, 3]
:param n: Take n elements of sequence if not None
:return: list of elements in sequence | 8.30476 | 8.162253 | 1.017459 |
dictionary = {}
for e in self.sequence:
dictionary[e[0]] = e[1]
if default is None:
return dictionary
else:
if hasattr(default, '__call__'):
return collections.defaultdict(default, dictionary)
else:
return collections.defaultdict(lambda: default, dictionary) | def to_dict(self, default=None) | Converts sequence of (Key, Value) pairs to a dictionary.
>>> type(seq([('a', 1)]).to_dict())
dict
>>> seq([('a', 1), ('b', 2)]).to_dict()
{'a': 1, 'b': 2}
:param default: Can be a callable zero argument function. When not None, the returned
dictionary is a collections.defaultdict with default as value for missing keys. If the
value is not callable, then a zero argument lambda function is created returning the
value and used for collections.defaultdict
:return: dictionary from sequence of (Key, Value) elements | 2.760586 | 2.452229 | 1.125746 |
with universal_write_open(path, mode=mode, buffering=buffering, encoding=encoding,
errors=errors, newline=newline, compression=compression,
compresslevel=compresslevel, format=format, check=check,
preset=preset, filters=filters) as output:
if delimiter:
output.write(six.u(self.make_string(delimiter)))
else:
output.write(six.u(str(self))) | def to_file(self, path, delimiter=None, mode='wt', buffering=-1, encoding=None, errors=None,
newline=None, compresslevel=9, format=None, check=-1, preset=None, filters=None,
compression=None) | Saves the sequence to a file by executing str(self) which becomes str(self.to_list()). If
delimiter is defined, it will instead write self.make_string(delimiter)
:param path: path to write file
:param delimiter: if defined, will call make_string(delimiter) and save that to file.
:param mode: file open mode
:param buffering: passed to builtins.open
:param encoding: passed to builtins.open
:param errors: passed to builtins.open
:param newline: passed to builtins.open
:param compression: compression format
:param compresslevel: passed to gzip.open
:param format: passed to lzma.open
:param check: passed to lzma.open
:param preset: passed to lzma.open
:param filters: passed to lzma.open | 2.607097 | 2.355893 | 1.106628 |
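A brief illustration of the delimiter behavior, assuming the PyFunctional `seq` entry point (the file path is hypothetical):

```python
from functional import seq  # assumed PyFunctional entry point

# With a delimiter, elements are joined via make_string(delimiter).
seq(1, 2, 3).to_file('/tmp/numbers.txt', delimiter='\n')   # file holds "1\n2\n3"

# Without one, str(self) is written, i.e. the printed list form.
seq(1, 2, 3).to_file('/tmp/numbers.txt')                   # file holds "[1, 2, 3]"
```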
with universal_write_open(path, mode=mode, compression=compression) as output:
output.write((self.map(json.dumps).make_string('\n') + '\n').encode('utf-8')) | def to_jsonl(self, path, mode='wb', compression=None) | Saves the sequence to a jsonl file. Each element is mapped using json.dumps then written
with a newline separating each element.
:param path: path to write file
:param mode: mode to write in, defaults to 'wb' to overwrite contents
:param compression: compression format | 5.743804 | 7.070858 | 0.812321 |
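A small sketch of the resulting file layout, assuming the PyFunctional `seq` entry point and a hypothetical path:

```python
from functional import seq  # assumed PyFunctional entry point

seq([{'id': 1, 'name': 'Tom'}, {'id': 2, 'name': 'Jack'}]).to_jsonl('/tmp/users.jsonl')
# /tmp/users.jsonl then contains one json object per line:
# {"id": 1, "name": "Tom"}
# {"id": 2, "name": "Jack"}
```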
with universal_write_open(path, mode=mode, compression=compression) as output:
if root_array:
json.dump(self.to_list(), output)
else:
json.dump(self.to_dict(), output) | def to_json(self, path, root_array=True, mode=WRITE_MODE, compression=None) | Saves the sequence to a json file. If root_array is True, then the sequence will be written
to json with an array at the root. If it is False, then the sequence will be converted from
a sequence of (Key, Value) pairs to a dictionary so that the json root is a dictionary.
:param path: path to write file
:param root_array: write json root as an array or dictionary
:param mode: file open mode | 2.751717 | 3.907638 | 0.704189 |
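The two root shapes side by side, assuming the PyFunctional `seq` entry point and a hypothetical path:

```python
from functional import seq  # assumed PyFunctional entry point

seq([1, 2, 3]).to_json('/tmp/data.json')                    # root is [1, 2, 3]
seq([('a', 1), ('b', 2)]).to_json('/tmp/data.json',
                                  root_array=False)         # root is {"a": 1, "b": 2}
```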
if 'b' in mode:
newline = None
with universal_write_open(path, mode=mode, compression=compression,
newline=newline) as output:
csv_writer = csv.writer(output, dialect=dialect, **fmtparams)
for row in self:
csv_writer.writerow([six.u(str(element)) for element in row]) | def to_csv(self, path, mode=WRITE_MODE, dialect='excel', compression=None,
newline='', **fmtparams) | Saves the sequence to a csv file. Each element should be an iterable which will be expanded
to the elements of each row.
:param path: path to write file
:param mode: file open mode
:param dialect: passed to csv.writer
:param fmtparams: passed to csv.writer | 3.338326 | 3.494295 | 0.955365 |
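One row per element, each element expanded into columns; a sketch assuming the PyFunctional `seq` entry point and a hypothetical path:

```python
from functional import seq  # assumed PyFunctional entry point

seq([(1, 'tent', 300), (2, 'food', 100)]).to_csv('/tmp/purchases.csv')
# /tmp/purchases.csv then contains:
# 1,tent,300
# 2,food,100
```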
def _insert_item(item):
if isinstance(item, dict):
cols = ', '.join(item.keys())
placeholders = ', '.join('?' * len(item))
sql = 'INSERT INTO {} ({}) VALUES ({})'.format(table_name, cols, placeholders)
conn.execute(sql, tuple(item.values()))
elif is_namedtuple(item):
cols = ', '.join(item._fields)
placeholders = ', '.join('?' * len(item))
sql = 'INSERT INTO {} ({}) VALUES ({})'.format(table_name, cols, placeholders)
conn.execute(sql, item)
elif isinstance(item, (list, tuple)):
placeholders = ', '.join('?' * len(item))
sql = 'INSERT INTO {} VALUES ({})'.format(table_name, placeholders)
conn.execute(sql, item)
else:
raise TypeError('item must be one of dict, namedtuple, tuple or list, got {}'
.format(type(item)))
self.for_each(_insert_item) | def _to_sqlite3_by_table(self, conn, table_name) | Saves the sequence to the specified table of sqlite3 database.
Each element can be a dictionary, namedtuple, tuple or list.
Target table must be created in advance.
:param conn: path or sqlite connection, cursor
:param table_name: table name string | 1.850116 | 1.753103 | 1.055338 |
# pylint: disable=no-member
insert_regex = re.compile(r'(insert|update)\s+into', flags=re.IGNORECASE)
if insert_regex.match(target):
insert_f = self._to_sqlite3_by_query
else:
insert_f = self._to_sqlite3_by_table
if isinstance(conn, (sqlite3.Connection, sqlite3.Cursor)):
insert_f(conn, target)
conn.commit()
elif isinstance(conn, str):
with sqlite3.connect(conn, *args, **kwargs) as input_conn:
insert_f(input_conn, target)
input_conn.commit()
else:
raise ValueError('conn must be a file path or sqlite3 Connection/Cursor')
Target table must be created in advance.
The table schema is inferred from the elements in the sequence
if only target table name is supplied.
>>> seq([(1, 'Tom'), (2, 'Jack')])\
.to_sqlite3('users.db', 'INSERT INTO user (id, name) VALUES (?, ?)')
>>> seq([{'id': 1, 'name': 'Tom'}, {'id': 2, 'name': 'Jack'}]).to_sqlite3(conn, 'user')
:param conn: path or sqlite connection, cursor
:param target: SQL query string or table name
:param args: passed to sqlite3.connect
:param kwargs: passed to sqlite3.connect | 2.806195 | 2.84591 | 0.986045 |
# pylint: disable=import-error
import pandas
return pandas.DataFrame.from_records(self.to_list(), columns=columns) | def to_pandas(self, columns=None) | Converts sequence to a pandas DataFrame using pandas.DataFrame.from_records
:param columns: columns for pandas to use
:return: DataFrame of sequence | 5.235932 | 5.616266 | 0.93228 |
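A quick sketch of the resulting frame, assuming the PyFunctional `seq` entry point and pandas installed:

```python
from functional import seq  # assumed PyFunctional entry point

df = seq([(1, 'Tom'), (2, 'Jack')]).to_pandas(columns=['id', 'name'])
print(df)
#    id  name
# 0   1   Tom
# 1   2  Jack
```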
formatted_seq = self.tabulate(n=n, headers=headers, tablefmt=tablefmt,
floatfmt=floatfmt, numalign=numalign, stralign=stralign,
missingval=missingval)
print(formatted_seq) | def show(self, n=10, headers=(), tablefmt="simple", floatfmt="g", numalign="decimal",
stralign="left", missingval="") | Pretty print first n rows of sequence as a table. See
https://bitbucket.org/astanin/python-tabulate for details on tabulate parameters
:param n: Number of rows to show
:param headers: Passed to tabulate
:param tablefmt: Passed to tabulate
:param floatfmt: Passed to tabulate
:param numalign: Passed to tabulate
:param stralign: Passed to tabulate
:param missingval: Passed to tabulate | 2.441915 | 2.500402 | 0.976609 |
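A usage sketch assuming the PyFunctional `seq` entry point and tabulate installed; the printed layout is roughly:

```python
from functional import seq  # assumed PyFunctional entry point

seq([(1, 'Tom'), (2, 'Jack')]).show(headers=['id', 'name'])
#   id  name
# ----  ------
#    1  Tom
#    2  Jack
```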
self.cache()
length = self.len()
if length == 0 or not is_tabulatable(self[0]):
return None
if n is None or n >= length:
rows = self.list()
message = ''
else:
rows = self.take(n).list()
if tablefmt == 'simple':
message = '\nShowing {} of {} rows'.format(n, length)
elif tablefmt == 'html':
message = '<p>Showing {} of {} rows'.format(n, length)
else:
message = ''
if len(headers) == 0 and is_namedtuple(rows[0]):
headers = rows[0]._fields
return tabulate(rows, headers=headers, tablefmt=tablefmt, floatfmt=floatfmt,
numalign=numalign, stralign=stralign, missingval=missingval) + message | def tabulate(self, n=None, headers=(), tablefmt="simple", floatfmt="g", numalign="decimal",
stralign="left", missingval="") | Return pretty string table of first n rows of sequence or everything if n is None. See
https://bitbucket.org/astanin/python-tabulate for details on tabulate parameters
:param n: Number of rows to show, if set to None return all rows
:param headers: Passed to tabulate
:param tablefmt: Passed to tabulate
:param floatfmt: Passed to tabulate
:param numalign: Passed to tabulate
:param stralign: Passed to tabulate
:param missingval: Passed to tabulate | 2.740083 | 2.728037 | 1.004415 |
if not re.match('^[rbt]{1,3}$', mode):
raise ValueError('mode argument may only contain r, b, and t')
file_open = get_read_function(path, self.disable_compression)
file = file_open(path, mode=mode, buffering=buffering, encoding=encoding, errors=errors,
newline=newline)
if delimiter is None:
return self(file)
else:
return self(''.join(list(file)).split(delimiter)) | def open(self, path, delimiter=None, mode='r', buffering=-1, encoding=None, errors=None,
newline=None) | Reads and parses input files as defined.
If delimiter is not None, then the file is read in bulk then split on it. If it is None
(the default), then the file is parsed as sequence of lines. The rest of the options are
passed directly to builtins.open with the exception that write/append file modes are not
allowed.
>>> seq.open('examples/gear_list.txt').take(1)
[u'tent\\n']
:param path: path to file
:param delimiter: delimiter to split joined text on. if None, defaults to per line split
:param mode: file open mode
:param buffering: passed to builtins.open
:param encoding: passed to builtins.open
:param errors: passed to builtins.open
:param newline: passed to builtins.open
:return: output of file depending on options wrapped in a Sequence via seq | 4.472818 | 4.743728 | 0.942891 |
if isinstance(csv_file, str):
file_open = get_read_function(csv_file, self.disable_compression)
input_file = file_open(csv_file)
elif hasattr(csv_file, 'next') or hasattr(csv_file, '__next__'):
input_file = csv_file
else:
raise ValueError('csv_file must be a file path or implement the iterator interface')
csv_input = csvapi.reader(input_file, dialect=dialect, **fmt_params)
return self(csv_input).cache(delete_lineage=True) | def csv(self, csv_file, dialect='excel', **fmt_params) | Reads and parses the input of a csv stream or file.
csv_file can be a filepath or an object that implements the iterator interface
(defines next() or __next__() depending on python version).
>>> seq.csv('examples/camping_purchases.csv').take(2)
[['1', 'tent', '300'], ['2', 'food', '100']]
:param csv_file: path to file or iterator object
:param dialect: dialect of csv, passed to csv.reader
:param fmt_params: options passed to csv.reader
:return: Sequence wrapping csv file | 3.983573 | 3.722121 | 1.070243 |
if isinstance(jsonl_file, str):
file_open = get_read_function(jsonl_file, self.disable_compression)
input_file = file_open(jsonl_file)
else:
input_file = jsonl_file
return self(input_file).map(jsonapi.loads).cache(delete_lineage=True) | def jsonl(self, jsonl_file) | Reads and parses the input of a jsonl file stream or file.
Jsonl formatted files must have a single valid json value on each line which is parsed by
the python json module.
>>> seq.jsonl('examples/chat_logs.jsonl').first()
{u'date': u'10/09', u'message': u'hello anyone there?', u'user': u'bob'}
:param jsonl_file: path or file containing jsonl content
:return: Sequence wrapping jsonl file | 5.463872 | 5.703357 | 0.95801 |
if isinstance(json_file, str):
file_open = get_read_function(json_file, self.disable_compression)
input_file = file_open(json_file)
json_input = jsonapi.load(input_file)
elif hasattr(json_file, 'read'):
json_input = jsonapi.load(json_file)
else:
raise ValueError('json_file must be a file path or implement the iterator interface')
if isinstance(json_input, list):
return self(json_input)
else:
return self(six.viewitems(json_input)) | def json(self, json_file) | Reads and parses the input of a json file handler or file.
Json files are parsed differently depending on if the root is a dictionary or an array.
1) If the json's root is a dictionary, these are parsed into a sequence of (Key, Value)
pairs
2) If the json's root is an array, these are parsed into a sequence
of entries
>>> seq.json('examples/users.json').first()
[u'sarah', {u'date_created': u'08/08', u'news_email': True, u'email': u'[email protected]'}]
:param json_file: path or file containing json content
:return: Sequence wrapping json file | 3.371081 | 3.499129 | 0.963406 |
if parameters is None:
parameters = ()
if isinstance(conn, (sqlite3api.Connection, sqlite3api.Cursor)):
return self(conn.execute(sql, parameters))
elif isinstance(conn, str):
with sqlite3api.connect(conn, *args, **kwargs) as input_conn:
return self(input_conn.execute(sql, parameters))
else:
raise ValueError('conn must be a file path or sqlite3 Connection/Cursor')
>>> seq.sqlite3('examples/users.db', 'select id, name from users where id = 1;').first()
[(1, 'Tom')]
:param conn: path or sqlite connection, cursor
:param sql: SQL query string
:param parameters: Parameters for sql query
:return: Sequence wrapping SQL cursor | 3.109305 | 3.17429 | 0.979528 |
'''Indicate that a formerly enqueued task is complete.
Used by Queue consumer threads. For each get() used to fetch a task,
a subsequent call to task_done() tells the queue that the processing
on the task is complete.
If a join() is currently blocking, it will resume when all items
have been processed (meaning that a task_done() call was received
for every item that had been put() into the queue).
Raises a ValueError if called more times than there were items
placed in the queue.
'''
self._parent._check_closing()
with self._parent._all_tasks_done:
unfinished = self._parent._unfinished_tasks - 1
if unfinished <= 0:
if unfinished < 0:
raise ValueError('task_done() called too many times')
self._parent._all_tasks_done.notify_all()
self._parent._loop.call_soon_threadsafe(
self._parent._finished.set)
self._parent._unfinished_tasks = unfinished | def task_done(self) | Indicate that a formerly enqueued task is complete.
Used by Queue consumer threads. For each get() used to fetch a task,
a subsequent call to task_done() tells the queue that the processing
on the task is complete.
If a join() is currently blocking, it will resume when all items
have been processed (meaning that a task_done() call was received
for every item that had been put() into the queue).
Raises a ValueError if called more times than there were items
placed in the queue. | 2.347194 | 1.710323 | 1.372369 |
'''Blocks until all items in the Queue have been gotten and processed.
The count of unfinished tasks goes up whenever an item is added to the
queue. The count goes down whenever a consumer thread calls task_done()
to indicate the item was retrieved and all work on it is complete.
When the count of unfinished tasks drops to zero, join() unblocks.
'''
with self._parent._all_tasks_done:
while self._parent._unfinished_tasks:
self._parent._all_tasks_done.wait() | def join(self) | Blocks until all items in the Queue have been gotten and processed.
The count of unfinished tasks goes up whenever an item is added to the
queue. The count goes down whenever a consumer thread calls task_done()
to indicate the item was retrieved and all work on it is complete.
When the count of unfinished tasks drops to zero, join() unblocks. | 4.400999 | 1.915472 | 2.297606 |
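The `_parent` indirection suggests this is the synchronous facade of a hybrid sync/async queue library, but the contract described is identical to the standard library's queue.Queue; the task_done()/join() handshake, illustrated with the stdlib for portability:

```python
import queue
import threading

q = queue.Queue()

def worker():
    while True:
        item = q.get()
        # ... process item ...
        q.task_done()  # exactly one task_done() per successful get()

threading.Thread(target=worker, daemon=True).start()
for i in range(5):
    q.put(i)
q.join()  # unblocks once all five items have been task_done()'d
```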
'''Put an item into the queue.
If optional args 'block' is true and 'timeout' is None (the default),
block if necessary until a free slot is available. If 'timeout' is
a non-negative number, it blocks at most 'timeout' seconds and raises
the Full exception if no free slot was available within that time.
Otherwise ('block' is false), put an item on the queue if a free slot
is immediately available, else raise the Full exception ('timeout'
is ignored in that case).
'''
self._parent._check_closing()
with self._parent._sync_not_full:
if self._parent._maxsize > 0:
if not block:
if self._parent._qsize() >= self._parent._maxsize:
raise SyncQueueFull
elif timeout is None:
while self._parent._qsize() >= self._parent._maxsize:
self._parent._sync_not_full.wait()
elif timeout < 0:
raise ValueError("'timeout' must be a non-negative number")
else:
time = self._parent._loop.time
endtime = time() + timeout
while self._parent._qsize() >= self._parent._maxsize:
remaining = endtime - time()
if remaining <= 0.0:
raise SyncQueueFull
self._parent._sync_not_full.wait(remaining)
self._parent._put_internal(item)
self._parent._sync_not_empty.notify()
self._parent._notify_async_not_empty(threadsafe=True) | def put(self, item, block=True, timeout=None) | Put an item into the queue.
If optional args 'block' is true and 'timeout' is None (the default),
block if necessary until a free slot is available. If 'timeout' is
a non-negative number, it blocks at most 'timeout' seconds and raises
the Full exception if no free slot was available within that time.
Otherwise ('block' is false), put an item on the queue if a free slot
is immediately available, else raise the Full exception ('timeout'
is ignored in that case). | 1.886962 | 1.788867 | 1.054836 |
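The block/timeout matrix follows the standard library's queue.Queue contract, so it can be demonstrated with the stdlib:

```python
import queue

q = queue.Queue(maxsize=1)
q.put('a')                        # one free slot, succeeds immediately
try:
    q.put('b', timeout=0.1)       # full: blocks ~0.1s, then raises Full
except queue.Full:
    pass
try:
    q.put('b', block=False)       # full: raises Full without waiting
except queue.Full:
    pass
```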
'''Remove and return an item from the queue.
If optional args 'block' is true and 'timeout' is None (the default),
block if necessary until an item is available. If 'timeout' is
a non-negative number, it blocks at most 'timeout' seconds and raises
the Empty exception if no item was available within that time.
Otherwise ('block' is false), return an item if one is immediately
available, else raise the Empty exception ('timeout' is ignored
in that case).
'''
self._parent._check_closing()
with self._parent._sync_not_empty:
if not block:
if not self._parent._qsize():
raise SyncQueueEmpty
elif timeout is None:
while not self._parent._qsize():
self._parent._sync_not_empty.wait()
elif timeout < 0:
raise ValueError("'timeout' must be a non-negative number")
else:
time = self._parent._loop.time
endtime = time() + timeout
while not self._parent._qsize():
remaining = endtime - time()
if remaining <= 0.0:
raise SyncQueueEmpty
self._parent._sync_not_empty.wait(remaining)
item = self._parent._get()
self._parent._sync_not_full.notify()
self._parent._notify_async_not_full(threadsafe=True)
return item | def get(self, block=True, timeout=None) | Remove and return an item from the queue.
If optional args 'block' is true and 'timeout' is None (the default),
block if necessary until an item is available. If 'timeout' is
a non-negative number, it blocks at most 'timeout' seconds and raises
the Empty exception if no item was available within that time.
Otherwise ('block' is false), return an item if one is immediately
available, else raise the Empty exception ('timeout' is ignored
in that case). | 1.963896 | 1.868838 | 1.050865 |
if self._parent._maxsize <= 0:
return False
else:
return self.qsize() >= self._parent._maxsize | def full(self) | Return True if there are maxsize items in the queue.
Note: if the Queue was initialized with maxsize=0 (the default),
then full() is never True. | 6.116776 | 4.052619 | 1.509339 |
self._parent._check_closing()
async with self._parent._async_not_full:
self._parent._sync_mutex.acquire()
locked = True
try:
if self._parent._maxsize > 0:
do_wait = True
while do_wait:
do_wait = (
self._parent._qsize() >= self._parent._maxsize
)
if do_wait:
locked = False
self._parent._sync_mutex.release()
await self._parent._async_not_full.wait()
self._parent._sync_mutex.acquire()
locked = True
self._parent._put_internal(item)
self._parent._async_not_empty.notify()
self._parent._notify_sync_not_empty()
finally:
if locked:
self._parent._sync_mutex.release() | async def put(self, item) | Put an item into the queue.
Put an item into the queue. If the queue is full, wait until a free
slot is available before adding item.
This method is a coroutine. | 2.875744 | 2.758241 | 1.042601 |
self._parent._check_closing()
with self._parent._sync_mutex:
if self._parent._maxsize > 0:
if self._parent._qsize() >= self._parent._maxsize:
raise AsyncQueueFull
self._parent._put_internal(item)
self._parent._notify_async_not_empty(threadsafe=False)
self._parent._notify_sync_not_empty() | def put_nowait(self, item) | Put an item into the queue without blocking.
If no free slot is immediately available, raise QueueFull. | 4.301486 | 4.350421 | 0.988752 |
self._parent._check_closing()
async with self._parent._async_not_empty:
self._parent._sync_mutex.acquire()
locked = True
try:
do_wait = True
while do_wait:
do_wait = self._parent._qsize() == 0
if do_wait:
locked = False
self._parent._sync_mutex.release()
await self._parent._async_not_empty.wait()
self._parent._sync_mutex.acquire()
locked = True
item = self._parent._get()
self._parent._async_not_full.notify()
self._parent._notify_sync_not_full()
return item
finally:
if locked:
self._parent._sync_mutex.release() | async def get(self) | Remove and return an item from the queue.
If queue is empty, wait until an item is available.
This method is a coroutine. | 2.917941 | 2.783173 | 1.048422 |
self._parent._check_closing()
with self._parent._sync_mutex:
if self._parent._qsize() == 0:
raise AsyncQueueEmpty
item = self._parent._get()
self._parent._notify_async_not_full(threadsafe=False)
self._parent._notify_sync_not_full()
return item | def get_nowait(self) | Remove and return an item from the queue.
Return an item if one is immediately available, else raise QueueEmpty. | 5.507442 | 5.224546 | 1.054147 |
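The nowait variants follow the same contract as the standard library's queue.Queue, illustrated here with the stdlib:

```python
import queue

q = queue.Queue()
try:
    q.get_nowait()                # empty: raises Empty immediately
except queue.Empty:
    print('nothing buffered yet')

q.put_nowait(42)
print(q.get_nowait())             # 42
```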
self._parent._check_closing()
with self._parent._all_tasks_done:
if self._parent._unfinished_tasks <= 0:
raise ValueError('task_done() called too many times')
self._parent._unfinished_tasks -= 1
if self._parent._unfinished_tasks == 0:
self._parent._finished.set()
self._parent._all_tasks_done.notify_all() | def task_done(self) | Indicate that a formerly enqueued task is complete.
Used by queue consumers. For each get() used to fetch a task,
a subsequent call to task_done() tells the queue that the processing
on the task is complete.
If a join() is currently blocking, it will resume when all items have
been processed (meaning that a task_done() call was received for every
item that had been put() into the queue).
Raises ValueError if called more times than there were items placed in
the queue. | 2.82287 | 2.642725 | 1.068166 |
while True:
with self._parent._sync_mutex:
if self._parent._unfinished_tasks == 0:
break
await self._parent._finished.wait() | async def join(self) | Block until all items in the queue have been gotten and processed.
The count of unfinished tasks goes up whenever an item is added to the
queue. The count goes down whenever a consumer calls task_done() to
indicate that the item was retrieved and all work on it is complete.
When the count of unfinished tasks drops to zero, join() unblocks. | 6.401218 | 6.847601 | 0.934812 |
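The async side mirrors asyncio.Queue's contract; a minimal producer/consumer sketch with the standard library:

```python
import asyncio
import contextlib

async def main():
    q = asyncio.Queue()

    async def consumer():
        while True:
            item = await q.get()
            # ... process item ...
            q.task_done()

    task = asyncio.create_task(consumer())
    for i in range(3):
        await q.put(i)
    await q.join()                # resumes once all three task_done() calls arrive
    task.cancel()
    with contextlib.suppress(asyncio.CancelledError):
        await task

asyncio.run(main())
```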
if os.path.exists(to_dir) and not os.path.isdir(to_dir):
raise Exception('Not a directory: %s' % to_dir)
elif not os.path.exists(to_dir):
os.makedirs(to_dir, mode=int('0755', 8))
_save(os.path.join(to_dir, FILE_USER_FST_DATA), self.compiledFST[0], compressionlevel)
_save(os.path.join(to_dir, FILE_USER_ENTRIES_DATA), pickle.dumps(self.entries), compressionlevel) | def save(self, to_dir, compressionlevel=9) | u"""
Save compressed compiled dictionary data.
:param to_dir: directory to save dictionary data
:compressionlevel: (Optional) gzip compression level. default is 9 | 3.146994 | 3.117983 | 1.009305 |
for cfilter in self.char_filters:
text = cfilter.filter(text)
tokens = self.tokenizer.tokenize(text, stream=True, wakati=False)
for tfilter in self.token_filters:
tokens = tfilter.filter(tokens)
return tokens | def analyze(self, text) | u"""
Analyze the input text with custom CharFilters, Tokenizer and TokenFilters.
:param text: unicode string to be tokenized
:return: token generator. emitted element type depends on the output of the last TokenFilter. (e.g., ExtractAttributeFilter emits strings.) | 4.586222 | 3.916214 | 1.171086 |
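The char_filters/tokenizer/token_filters pipeline matches Janome's Analyzer framework; a hedged sketch assuming that API (pip install janome):

```python
from janome.tokenizer import Tokenizer
from janome.analyzer import Analyzer
from janome.charfilter import UnicodeNormalizeCharFilter
from janome.tokenfilter import POSKeepFilter, ExtractAttributeFilter

a = Analyzer(char_filters=[UnicodeNormalizeCharFilter()],
             tokenizer=Tokenizer(),
             token_filters=[POSKeepFilter(['名詞']),             # keep nouns only
                            ExtractAttributeFilter('surface')])  # emit plain strings
for token in a.analyze(u'自然言語処理'):
    print(token)
```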
arcs = []
address = {}
pos = 0
for (num, s) in enumerate(fst.dictionary.values()):
for i, (c, v) in enumerate(sorted(s.trans_map.items(), reverse=True)):
bary = bytearray()
flag = 0
output_size, output = 0, bytes()
if i == 0:
flag += FLAG_LAST_ARC
if v['output']:
flag += FLAG_ARC_HAS_OUTPUT
output_size = len(v['output'])
output = v['output']
# encode flag, label, output_size, output, relative target address
bary += pack('b', flag)
if PY3:
bary += pack('B', c)
else:
bary += pack('c', c)
if output_size > 0:
bary += pack('I', output_size)
bary += output
next_addr = address.get(v['state'].id)
assert next_addr is not None
target = (pos + len(bary) + 4) - next_addr
assert target > 0
bary += pack('I', target)
# add the arc represented in bytes
if PY3:
arcs.append(bytes(bary))
else:
arcs.append(b''.join(chr(b) for b in bary))
# address count up
pos += len(bary)
if s.is_final():
bary = bytearray()
# final state
flag = FLAG_FINAL_ARC
output_count = 0
if s.final_output and any(len(e) > 0 for e in s.final_output):
# the arc has final output
flag += FLAG_ARC_HAS_FINAL_OUTPUT
output_count = len(s.final_output)
if not s.trans_map:
flag += FLAG_LAST_ARC
# encode flag, output size, output
bary += pack('b', flag)
if output_count:
bary += pack('I', output_count)
for out in s.final_output:
output_size = len(out)
bary += pack('I', output_size)
if output_size:
bary += out
# add the arc represented in bytes
if PY3:
arcs.append(bytes(bary))
else:
arcs.append(b''.join(chr(b) for b in bary))
# address count up
pos += len(bary)
address[s.id] = pos
logger.debug('compiled arcs size: %d' % len(arcs))
arcs.reverse()
return b''.join(arcs) | def compileFST(fst) | u"""
convert FST to byte array representing arcs | 2.981872 | 2.892785 | 1.030796 |
if self.wakati:
wakati = True
if stream:
return self.__tokenize_stream(text, wakati, baseform_unk, '')
elif dotfile and len(text) < Tokenizer.MAX_CHUNK_SIZE:
return list(self.__tokenize_stream(text, wakati, baseform_unk, dotfile))
else:
return list(self.__tokenize_stream(text, wakati, baseform_unk, '')) | def tokenize(self, text, stream=False, wakati=False, baseform_unk=True, dotfile='') | u"""
Tokenize the input text.
:param text: unicode string to be tokenized
:param stream: (Optional) if given True use stream mode. default is False.
:param wakati: (Optional) if given True returns surface forms only. default is False.
:param baseform_unk: (Optional) if given True sets base_form attribute for unknown tokens. default is True.
:param dotfile: (Optional) if specified, graphviz dot file is output to the path for later visualizing of the lattice graph. This option is ignored when the input length is larger than MAX_CHUNK_SIZE or running on stream mode.
:return: list of tokens (stream=False, wakati=False) or token generator (stream=True, wakati=False) or list of string (stream=False, wakati=True) or string generator (stream=True, wakati=True) | 3.122544 | 2.869808 | 1.088067 |
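A usage sketch assuming this is Janome's Tokenizer (the surrounding lattice/FST code suggests so); list() covers both the list and generator return styles:

```python
from janome.tokenizer import Tokenizer  # assumed Janome API

t = Tokenizer()
for token in t.tokenize(u'すもももももももものうち'):
    print(token)            # full morpheme with part-of-speech details

# wakati=True yields surface forms only
print(list(t.tokenize(u'すもももももももものうち', wakati=True)))
```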
best_matched_ip = None
warnings.warn('get_ip is deprecated and will be removed in 3.0.', DeprecationWarning)
for key in defs.IPWARE_META_PRECEDENCE_ORDER:
value = request.META.get(key, request.META.get(key.replace('_', '-'), '')).strip()
if value is not None and value != '':
ips = [ip.strip().lower() for ip in value.split(',')]
if right_most_proxy and len(ips) > 1:
ips = reversed(ips)
for ip_str in ips:
if ip_str and is_valid_ip(ip_str):
if not ip_str.startswith(NON_PUBLIC_IP_PREFIX):
return ip_str
if not real_ip_only:
loopback = defs.IPWARE_LOOPBACK_PREFIX
if best_matched_ip is None:
best_matched_ip = ip_str
elif best_matched_ip.startswith(loopback) and not ip_str.startswith(loopback):
best_matched_ip = ip_str
return best_matched_ip | def get_ip(request, real_ip_only=False, right_most_proxy=False) | Returns client's best-matched ip-address, or None
@deprecated - Do not edit | 2.821451 | 2.69591 | 1.046567 |
warnings.warn('get_real_ip is deprecated and will be removed in 3.0.', DeprecationWarning)
return get_ip(request, real_ip_only=True, right_most_proxy=right_most_proxy) | def get_real_ip(request, right_most_proxy=False) | Returns client's best-matched `real` `externally-routable` ip-address, or None
@deprecated - Do not edit | 3.05225 | 2.877373 | 1.060776 |
warnings.warn('get_trusted_ip is deprecated and will be removed in 3.0.', DeprecationWarning)
if trusted_proxies:
meta_keys = ['HTTP_X_FORWARDED_FOR', 'X_FORWARDED_FOR']
for key in meta_keys:
value = request.META.get(key, request.META.get(key.replace('_', '-'), '')).strip()
if value:
ips = [ip.strip().lower() for ip in value.split(',')]
if len(ips) > 1:
if right_most_proxy:
ips.reverse()
for proxy in trusted_proxies:
if proxy in ips[-1]:
return ips[0]
return None | def get_trusted_ip(request, right_most_proxy=False, trusted_proxies=TRUSTED_PROXY_LIST) | Returns client's ip-address from `trusted` proxy server(s) or None
@deprecated - Do not edit | 2.493304 | 2.41139 | 1.03397 |
try:
socket.inet_pton(socket.AF_INET6, ip_str)
except socket.error:
return False
return True | def is_valid_ipv6(ip_str) | Check the validity of an IPv6 address | 1.804377 | 1.840107 | 0.980583 |
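A few example inputs, assuming the function above is in scope:

```python
assert is_valid_ipv6('2001:db8::1') is True
assert is_valid_ipv6('10.0.0.1') is False      # valid IPv4, but not IPv6
assert is_valid_ipv6('not-an-ip') is False
```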
value = request.META.get(key, request.META.get(key.replace('_', '-'), '')).strip()
if value == '':
return None
return value | def get_request_meta(request, key) | Given a key, it returns a cleaned up version of the value from request.META, or None | 3.352666 | 3.425511 | 0.978735 |
ip_list = []
for ip in ip_str.split(','):
clean_ip = ip.strip().lower()
if clean_ip:
ip_list.append(clean_ip)
ip_count = len(ip_list)
if ip_count > 0:
if is_valid_ip(ip_list[0]) and is_valid_ip(ip_list[-1]):
return ip_list, ip_count
return [], 0 | def get_ips_from_string(ip_str) | Given a string, it returns a list of one or more valid IP addresses | 2.131921 | 2.128448 | 1.001632 |
ip = None
is_routable_ip = False
if is_valid_ip(ip_str):
ip = ip_str
is_routable_ip = is_public_ip(ip)
return ip, is_routable_ip | def get_ip_info(ip_str) | Given a string, it returns a tuple of (IP, Routable). | 2.710945 | 2.285733 | 1.186029 |
if last_ip is None:
return next_ip
if is_public_ip(last_ip) and not is_public_ip(next_ip):
return last_ip
if is_private_ip(last_ip) and is_loopback_ip(next_ip):
return last_ip
return next_ip | def get_best_ip(last_ip, next_ip) | Given two IP addresses, it returns the best matched ip.
Order of precedence is (Public, Private, Loopback, None)
Right-most IP is returned | 2.025761 | 1.927176 | 1.051155 |
args = sys_argv[1:]
parser = OptionParser(usage=usage)
options, args = parser.parse_args(args)
template, context = args
return template, context | def parse_args(sys_argv, usage) | Parse the script's command-line arguments and return the template and context.
renderer = Renderer()
return renderer.render(template, context, **kwargs) | def render(template, context=None, **kwargs) | Return the given template string rendered using the given context. | 3.964927 | 3.694518 | 1.073192 |
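These helpers read like pystache's module-level API; a hedged usage sketch under that assumption:

```python
render(u'Hello, {{name}}!', {'name': 'World'})
# u'Hello, World!'

# kwargs are pushed onto the context stack last, so they take precedence:
render(u'Hi {{who}}', {'who': 'everyone'}, who='Bob')
# u'Hi Bob'
```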
if isinstance(context, dict):
# Then we consider the argument a "hash" for the purposes of the spec.
#
# We do a membership test to avoid using exceptions for flow control
# (e.g. catching KeyError).
if key in context:
return context[key]
elif type(context).__module__ != _BUILTIN_MODULE:
# Then we consider the argument an "object" for the purposes of
# the spec.
#
# The elif test above lets us avoid treating instances of built-in
# types like integers and strings as objects (cf. issue #81).
# Instances of user-defined classes on the other hand, for example,
# are considered objects by the test above.
try:
attr = getattr(context, key)
except AttributeError:
# TODO: distinguish the case of the attribute not existing from
# an AttributeError being raised by the call to the attribute.
# See the following issue for implementation ideas:
# http://bugs.python.org/issue7559
pass
else:
# TODO: consider using EAFP here instead.
# http://docs.python.org/glossary.html#term-eafp
if callable(attr):
return attr()
return attr
return _NOT_FOUND | def _get_value(context, key) | Retrieve a key's value from a context item.
Returns _NOT_FOUND if the key does not exist.
The ContextStack.get() docstring documents this function's intended behavior. | 5.719513 | 5.520692 | 1.036014 |
items = context
context = ContextStack()
for item in items:
if item is None:
continue
if isinstance(item, ContextStack):
context._stack.extend(item._stack)
else:
context.push(item)
if kwargs:
context.push(kwargs)
return context | def create(*context, **kwargs) | Build a ContextStack instance from a sequence of context-like items.
This factory-style method is more general than the ContextStack class's
constructor in that, unlike the constructor, the argument list
can itself contain ContextStack instances.
Here is an example illustrating various aspects of this method:
>>> obj1 = {'animal': 'cat', 'vegetable': 'carrot', 'mineral': 'copper'}
>>> obj2 = ContextStack({'vegetable': 'spinach', 'mineral': 'silver'})
>>>
>>> context = ContextStack.create(obj1, None, obj2, mineral='gold')
>>>
>>> context.get('animal')
'cat'
>>> context.get('vegetable')
'spinach'
>>> context.get('mineral')
'gold'
Arguments:
*context: zero or more dictionaries, ContextStack instances, or objects
with which to populate the initial context stack. None
arguments will be skipped. Items in the *context list are
added to the stack in order so that later items in the argument
list take precedence over earlier items. This behavior is the
same as the constructor's.
**kwargs: additional key-value data to add to the context stack.
As these arguments appear after all items in the *context list,
in the case of key conflicts these values take precedence over
all items in the *context list. This behavior is the same as
the constructor's. | 3.70861 | 4.129776 | 0.898017 |
if name == '.':
try:
return self.top()
except IndexError:
raise KeyNotFoundError(".", "empty context stack")
parts = name.split('.')
try:
result = self._get_simple(parts[0])
except KeyNotFoundError:
raise KeyNotFoundError(name, "first part")
for part in parts[1:]:
# The full context stack is not used to resolve the remaining parts.
# From the spec--
#
# 5) If any name parts were retained in step 1, each should be
# resolved against a context stack containing only the result
# from the former resolution. If any part fails resolution, the
# result should be considered falsey, and should interpolate as
# the empty string.
#
# TODO: make sure we have a test case for the above point.
result = _get_value(result, part)
# TODO: consider using EAFP here instead.
# http://docs.python.org/glossary.html#term-eafp
if result is _NOT_FOUND:
raise KeyNotFoundError(name, "missing %s" % repr(part))
return result | def get(self, name) | Resolve a dotted name against the current context stack.
This function follows the rules outlined in the section of the
spec regarding tag interpolation. This function returns the value
as is and does not coerce the return value to a string.
Arguments:
name: a dotted or non-dotted name.
default: the value to return if name resolution fails at any point.
Defaults to the empty string per the Mustache spec.
This method queries items in the stack in order from last-added
objects to first (last in, first out). The value returned is
the value of the key in the first item that contains the key.
If the key is not found in any item in the stack, then the default
value is returned. The default value defaults to None.
In accordance with the spec, this method queries items in the
stack for a key differently depending on whether the item is a
hash, object, or neither (as defined in the module docstring):
(1) Hash: if the item is a hash, then the key's value is the
dictionary value of the key. If the dictionary doesn't contain
the key, then the key is considered not found.
(2) Object: if the item is an an object, then the method looks for
an attribute with the same name as the key. If an attribute
with that name exists, the value of the attribute is returned.
If the attribute is callable, however (i.e. if the attribute
is a method), then the attribute is called with no arguments
and that value is returned. If there is no attribute with
the same name as the key, then the key is considered not found.
(3) Neither: if the item is neither a hash nor an object, then
the key is considered not found.
*Caution*:
Callables are handled differently depending on whether they are
dictionary values, as in (1) above, or attributes, as in (2).
The former are returned as-is, while the latter are first
called and that value returned.
Here is an example to illustrate:
>>> def greet():
... return "Hi Bob!"
>>>
>>> class Greeter(object):
... greet = None
>>>
>>> dct = {'greet': greet}
>>> obj = Greeter()
>>> obj.greet = greet
>>>
>>> dct['greet'] is obj.greet
True
>>> ContextStack(dct).get('greet') #doctest: +ELLIPSIS
<function greet at 0x...>
>>> ContextStack(obj).get('greet')
'Hi Bob!'
TODO: explain the rationale for this difference in treatment. | 7.043404 | 7.197634 | 0.978572 |
for item in reversed(self._stack):
result = _get_value(item, name)
if result is not _NOT_FOUND:
return result
raise KeyNotFoundError(name, "part missing") | def _get_simple(self, name) | Query the stack for a non-dotted name. | 8.058268 | 6.500413 | 1.239655 |
# We type-check to avoid "TypeError: decoding Unicode is not supported".
# We avoid the Python ternary operator for Python 2.4 support.
if isinstance(s, unicode):
return s
return self.unicode(s) | def _to_unicode_soft(self, s) | Convert a basestring to unicode, preserving any unicode subclass. | 10.879354 | 9.483279 | 1.147214 |
if encoding is None:
encoding = self.string_encoding
# TODO: Wrap UnicodeDecodeErrors with a message about setting
# the string_encoding and decode_errors attributes.
return unicode(b, encoding, self.decode_errors) | def unicode(self, b, encoding=None) | Convert a byte string to unicode, using string_encoding and decode_errors.
Arguments:
b: a byte string.
encoding: the name of an encoding. Defaults to the string_encoding
attribute for this instance.
Raises:
TypeError: Because this method calls Python's built-in unicode()
function, this method raises the following exception if the
given string is already unicode:
TypeError: decoding Unicode is not supported | 7.652196 | 6.191299 | 1.23596 |
return Loader(file_encoding=self.file_encoding, extension=self.file_extension,
to_unicode=self.unicode, search_dirs=self.search_dirs) | def _make_loader(self) | Create a Loader instance using current attributes. | 7.296574 | 5.497069 | 1.327357 |
loader = self._make_loader()
def load_template(template_name):
return loader.load_name(template_name)
return load_template | def _make_load_template(self) | Return a function that loads a template by name. | 4.347422 | 3.311553 | 1.312804 |
if self.partials is None:
return self._make_load_template()
# Otherwise, create a function from the custom partial loader.
partials = self.partials
def load_partial(name):
# TODO: consider using EAFP here instead.
# http://docs.python.org/glossary.html#term-eafp
# This would mean requiring that the custom partial loader
# raise a KeyError on name not found.
template = partials.get(name)
if template is None:
raise TemplateNotFoundError("Name %s not found in partials: %s" %
(repr(name), type(partials)))
# RenderEngine requires that the return value be unicode.
return self._to_unicode_hard(template)
return load_partial | def _make_load_partial(self) | Return a function that loads a partial by name. | 6.062488 | 5.427688 | 1.116956 |
val = self.missing_tags
if val == MissingTags.strict:
return True
elif val == MissingTags.ignore:
return False
raise Exception("Unsupported 'missing_tags' value: %s" % repr(val)) | def _is_missing_tags_strict(self) | Return whether missing_tags is set to strict. | 4.1665 | 3.353028 | 1.242608 |
load_partial = self._make_load_partial()
if self._is_missing_tags_strict():
return load_partial
# Otherwise, ignore missing tags.
def resolve_partial(name):
try:
return load_partial(name)
except TemplateNotFoundError:
return u''
return resolve_partial | def _make_resolve_partial(self) | Return the resolve_partial function to pass to RenderEngine.__init__(). | 5.503611 | 4.903673 | 1.122345 |
if self._is_missing_tags_strict():
return context_get
# Otherwise, ignore missing tags.
def resolve_context(stack, name):
try:
return context_get(stack, name)
except KeyNotFoundError:
return u''
return resolve_context | def _make_resolve_context(self) | Return the resolve_context function to pass to RenderEngine.__init__(). | 9.186647 | 7.682029 | 1.195862 |
resolve_context = self._make_resolve_context()
resolve_partial = self._make_resolve_partial()
engine = RenderEngine(literal=self._to_unicode_hard,
escape=self._escape_to_unicode,
resolve_context=resolve_context,
resolve_partial=resolve_partial,
to_str=self.str_coerce)
return engine | def _make_render_engine(self) | Return a RenderEngine instance for rendering. | 5.40573 | 4.973101 | 1.086994 |
loader = self._make_loader()
# TODO: consider an approach that does not require using an if
# block here. For example, perhaps this class's loader can be
# a SpecLoader in all cases, and the SpecLoader instance can
# check the object's type. Or perhaps Loader and SpecLoader
# can be refactored to implement the same interface.
if isinstance(obj, TemplateSpec):
loader = SpecLoader(loader)
template = loader.load(obj)
else:
template = loader.load_object(obj)
context = [obj] + list(context)
return self._render_string(template, *context, **kwargs) | def _render_object(self, obj, *context, **kwargs) | Render the template associated with the given object. | 6.234416 | 5.822412 | 1.070762 |
loader = self._make_loader()
template = loader.load_name(template_name)
return self._render_string(template, *context, **kwargs) | def render_name(self, template_name, *context, **kwargs) | Render the template with the given name using the given context.
See the render() docstring for more information. | 3.940881 | 4.161937 | 0.946886 |
loader = self._make_loader()
template = loader.read(template_path)
return self._render_string(template, *context, **kwargs) | def render_path(self, template_path, *context, **kwargs) | Render the template at the given path using the given context.
Read the render() docstring for more information. | 4.486486 | 4.384651 | 1.023225 |
# RenderEngine.render() requires that the template string be unicode.
template = self._to_unicode_hard(template)
render_func = lambda engine, stack: engine.render(template, stack)
return self._render_final(render_func, *context, **kwargs) | def _render_string(self, template, *context, **kwargs) | Render the given template string using the given context. | 7.666327 | 7.052666 | 1.087011 |
stack = ContextStack.create(*context, **kwargs)
self._context = stack
engine = self._make_render_engine()
return render_func(engine, stack) | def _render_final(self, render_func, *context, **kwargs) | Arguments:
render_func: a function that accepts a RenderEngine and ContextStack
instance and returns a template rendering as a unicode string. | 7.756476 | 5.02832 | 1.542558 |
if is_string(template):
return self._render_string(template, *context, **kwargs)
if isinstance(template, ParsedTemplate):
render_func = lambda engine, stack: template.render(engine, stack)
return self._render_final(render_func, *context, **kwargs)
# Otherwise, we assume the template is an object.
return self._render_object(template, *context, **kwargs) | def render(self, template, *context, **kwargs) | Render the given template string, view template, or parsed template.
Returns a unicode string.
Prior to rendering, this method will convert a template that is a
byte string (type str in Python 2) to unicode using the string_encoding
and decode_errors attributes. See the constructor docstring for
more information.
Arguments:
template: a template string that is unicode or a byte string,
a ParsedTemplate instance, or another object instance. In the
final case, the function first looks for the template associated
to the object by calling this class's get_associated_template()
method. The rendering process also uses the passed object as
the first element of the context stack when rendering.
*context: zero or more dictionaries, ContextStack instances, or objects
with which to populate the initial context stack. None
arguments are skipped. Items in the *context list are added to
the context stack in order so that later items in the argument
list take precedence over earlier items.
**kwargs: additional key-value data to add to the context stack.
As these arguments appear after all items in the *context list,
in the case of key conflicts these values take precedence over
all items in the *context list. | 3.634031 | 3.81696 | 0.952074 |
# This function implementation was chosen to be compatible across Python 2/3.
f = open(path, 'rb')
# We avoid use of the with keyword for Python 2.4 support.
try:
b = f.read()
finally:
f.close()
return b.decode(FILE_ENCODING) | def read(path) | Read and return the contents of a text file as a unicode string. | 6.928566 | 5.8624 | 1.181865 |
print("writing to: %s" % path)
# This function implementation was chosen to be compatible across Python 2/3.
f = open(path, "wb")
try:
b = u.encode(FILE_ENCODING)
f.write(b)
finally:
f.close() | def write(u, path) | Write a unicode string to a file (as utf-8). | 4.930712 | 4.635118 | 1.063773 |
root, ext = os.path.splitext(path)
if new_ext is None:
new_ext = ext
temp_path = root + TEMP_EXTENSION + new_ext
return temp_path | def make_temp_path(path, new_ext=None) | Arguments:
new_ext: the new file extension, including the leading dot.
Defaults to preserving the existing file extension. | 2.301749 | 2.842042 | 0.809893 |
lines = text.splitlines(True) # preserve line endings.
# Remove HTML comments (which we only allow to take a special form).
new_lines = filter(lambda line: not line.startswith("<!--"), lines)
return "".join(new_lines) | def strip_html_comments(text) | Strip HTML comments from a unicode string. | 6.068148 | 5.906783 | 1.027319 |
# Pandoc uses the UTF-8 character encoding for both input and output.
command = "pandoc --write=rst --output=%s %s" % (rst_temp_path, md_path)
print("converting with pandoc: %s to %s\n-->%s" % (md_path, rst_temp_path,
command))
if os.path.exists(rst_temp_path):
os.remove(rst_temp_path)
os.system(command)
if not os.path.exists(rst_temp_path):
s = ("Error running: %s\n"
" Did you install pandoc per the %s docstring?" % (command,
__file__))
sys.exit(s)
return read(rst_temp_path) | def convert_md_to_rst(md_path, rst_temp_path) | Convert the contents of a file from Markdown to reStructuredText.
Returns the converted text as a Unicode string.
Arguments:
md_path: a path to a UTF-8 encoded Markdown file to convert.
rst_temp_path: a temporary path to which to write the converted contents. | 3.46933 | 3.683694 | 0.941807 |
readme_path = README_PATH
# Remove our HTML comments because PyPI does not allow it.
# See the setup.py docstring for more info on this.
readme_md = strip_html_comments(read(readme_path))
history_md = strip_html_comments(read(HISTORY_PATH))
license_md = read(LICENSE_PATH)
sections = [readme_md, history_md, license_md]
md_description = '\n\n'.join(sections)
# Write the combined Markdown file to a temp path.
md_ext = os.path.splitext(readme_path)[1]
md_description_path = make_temp_path(RST_DESCRIPTION_PATH, new_ext=md_ext)
write(md_description, md_description_path)
rst_temp_path = make_temp_path(RST_DESCRIPTION_PATH)
long_description = convert_md_to_rst(md_path=md_description_path,
rst_temp_path=rst_temp_path)
return "\n".join([RST_LONG_DESCRIPTION_INTRO, long_description]) | def make_long_description() | Generate the reST long_description for setup() from source files.
Returns the generated long_description as a unicode string. | 4.000436 | 4.161361 | 0.961329 |
long_description = make_long_description()
if long_description != read(RST_DESCRIPTION_PATH):
print("Description file not up-to-date: %s. Run: python setup.py %s" % (RST_DESCRIPTION_PATH, PREP_COMMAND))
sys.exit()
print("Description up-to-date: %s" % RST_DESCRIPTION_PATH)
answer = raw_input("Are you sure you want to publish to PyPI (yes/no)?")
if answer != "yes":
exit("Aborted: nothing published")
os.system('python setup.py sdist upload') | def publish() | Publish this package to PyPI (aka "the Cheeseshop"). | 6.306851 | 6.279538 | 1.00435 |
if not hasattr(obj, '__module__'):
return None
module = sys.modules[obj.__module__]
if not hasattr(module, '__file__'):
# TODO: add a unit test for this case.
return None
path = module.__file__
return os.path.dirname(path) | def get_object_directory(self, obj) | Return the directory containing an object's defining class.
Returns None if there is no such directory, for example if the
class was defined in an interactive Python session, or in a
doctest that appears in a text file (rather than a Python file). | 3.04364 | 2.892536 | 1.052239 |
template_name = obj.__class__.__name__
def repl(match):
return '_' + match.group(0).lower()
return re.sub('[A-Z]', repl, template_name)[1:] | def make_template_name(self, obj) | Return the canonical template name for an object instance.
This method converts Python-style class names (PEP 8's recommended
CamelCase, aka CapWords) to lower_case_with_underscords. Here
is an example with code:
>>> class HelloWorld(object):
... pass
>>> hi = HelloWorld()
>>>
>>> locator = Locator()
>>> locator.make_template_name(hi)
'hello_world' | 3.818283 | 4.847586 | 0.787667 |
file_name = template_name
if template_extension is None:
template_extension = self.template_extension
if template_extension is not False:
file_name += os.path.extsep + template_extension
return file_name | def make_file_name(self, template_name, template_extension=None) | Generate and return the file name for the given template name.
Arguments:
template_extension: defaults to the instance's extension. | 2.366947 | 3.370998 | 0.70215 |
for dir_path in search_dirs:
file_path = os.path.join(dir_path, file_name)
if os.path.exists(file_path):
return file_path
return None | def _find_path(self, search_dirs, file_name) | Search for the given file, and return the path.
Returns None if the file is not found. | 1.813865 | 1.881785 | 0.963907 |
path = self._find_path(search_dirs, file_name)
if path is None:
raise TemplateNotFoundError('File %s not found in dirs: %s' %
(repr(file_name), repr(search_dirs)))
return path | def _find_path_required(self, search_dirs, file_name) | Return the path to a template with the given file name. | 2.999815 | 2.648753 | 1.132539 |
file_name = self.make_file_name(template_name)
return self._find_path_required(search_dirs, file_name) | def find_name(self, template_name, search_dirs) | Return the path to a template with the given name.
Arguments:
template_name: the name of the template.
search_dirs: the list of directories in which to search. | 5.695819 | 7.726462 | 0.737183 |
if file_name is None:
# TODO: should we define a make_file_name() method?
template_name = self.make_template_name(obj)
file_name = self.make_file_name(template_name)
dir_path = self.get_object_directory(obj)
if dir_path is not None:
search_dirs = [dir_path] + search_dirs
path = self._find_path_required(search_dirs, file_name)
return path | def find_object(self, obj, search_dirs, file_name=None) | Return the path to a template associated with the given object. | 3.358929 | 2.961937 | 1.134031 |
if isinstance(s, unicode):
return unicode(s)
return self.to_unicode(s, encoding) | def unicode(self, s, encoding=None) | Convert a string to unicode using the given encoding, and return it.
This function uses the underlying to_unicode attribute.
Arguments:
s: a basestring instance to convert to unicode. Unlike Python's
built-in unicode() function, it is okay to pass unicode strings
to this function. (Passing a unicode string to Python's unicode()
with the encoding argument throws the error, "TypeError: decoding
Unicode is not supported.")
encoding: the encoding to pass to the to_unicode attribute.
Defaults to None. | 4.361946 | 4.724074 | 0.923344 |
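A quick Python 2 illustration of the quirk the docstring describes:

```python
# Python 2 sketch of why the isinstance() check above is needed.
s = u'caf\xe9'
unicode(s)             # fine: returns the unicode string unchanged
# unicode(s, 'utf-8')  # raises TypeError: decoding Unicode is not supported
# The wrapper returns early for unicode input, so self.unicode(s, 'utf-8')
# is safe whether s is a byte string or already unicode.
```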
b = common.read(path)
if encoding is None:
encoding = self.file_encoding
return self.unicode(b, encoding) | def read(self, path, encoding=None) | Read the template at the given path, and return it as a unicode string. | 6.179633 | 6.014132 | 1.027519 |
locator = self._make_locator()
path = locator.find_file(file_name, self.search_dirs)
return self.read(path) | def load_file(self, file_name) | Find and return the template with the given file name.
Arguments:
file_name: the file name of the template. | 8.240549 | 8.947068 | 0.921033 |
locator = self._make_locator()
path = locator.find_name(name, self.search_dirs)
return self.read(path) | def load_name(self, name) | Find and return the template with the given template name.
Arguments:
name: the name of the template. | 9.662214 | 12.015843 | 0.804123 |
locator = self._make_locator()
path = locator.find_object(obj, self.search_dirs)
return self.read(path) | def load_object(self, obj) | Find and return the template associated to the given object.
Arguments:
obj: an instance of a user-defined class. | 9.493796 | 9.149124 | 1.037673 |
# We avoid use of the ternary operator for Python 2.4 support.
def get_unicode(node):
if type(node) is unicode:
return node
return node.render(engine, context)
parts = map(get_unicode, self._parse_tree)
s = ''.join(parts)
return unicode(s) | def render(self, engine, context) | Returns: a string of type unicode. | 6.338279 | 5.4782 | 1.157 |
if type(template) is not unicode:
raise Exception("Template is not unicode: %s" % type(template))
parser = _Parser(delimiters)
return parser.parse(template) | def parse(template, delimiters=None) | Parse a unicode template string and return a ParsedTemplate instance.
Arguments:
template: a unicode template string.
delimiters: a 2-tuple of delimiters. Defaults to the package default.
Examples:
>>> parsed = parse(u"Hey {{#who}}{{name}}!{{/who}}")
>>> print str(parsed).replace('u', '') # This is a hack to get the test to pass both in Python 2 and 3.
['Hey ', _SectionNode(key='who', index_begin=12, index_end=21, parsed=[_EscapeNode(key='name'), '!'])] | 3.35604 | 4.930418 | 0.680681 |
# The possible tag type characters following the opening tag,
# excluding "=" and "{".
tag_types = "!>&/#^"
# TODO: are we following this in the spec?
#
# The tag's content MUST be a non-whitespace character sequence
# NOT containing the current closing delimiter.
#
# Pattern reconstructed (an assumption): the original raw string was lost
# in extraction. It is rebuilt to provide the named groups -- whitespace,
# change, delims, raw, raw_name, tag, tag_key -- that parse() consumes.
tag = r"""
    (?P<whitespace>[\ \t]*)
    %(otag)s \s*
    (?:
      (?P<change>=) \s* (?P<delims>.+?)   \s* = |
      (?P<raw>{)    \s* (?P<raw_name>.+?) \s* } |
      (?P<tag>[%(tag_types)s]?)  \s* (?P<tag_key>[\s\S]+?)
    )
    \s* %(ctag)s
""" % {'tag_types': tag_types, 'otag': re.escape(delimiters[0]), 'ctag': re.escape(delimiters[1])}
return re.compile(tag, re.VERBOSE) | def _compile_template_re(delimiters) | Return a regular expression object (re.RegexObject) instance. | 9.811007 | 10.129847 | 0.968525 |
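Assuming the pattern reconstructed above, the compiled regex exposes the named groups that `parse()` normalizes:

```python
# Illustrative match against mustache-style delimiters.
regex = _compile_template_re((u'{{', u'}}'))
m = regex.search(u'Hello {{name}}!')
d = m.groupdict()
d['tag']       # u''  -> a plain (escaped) interpolation tag
d['tag_key']   # u'name'
d['raw']       # None; would be u'{' for a {{{name}}} triple-mustache tag
```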
self._compile_delimiters()
start_index = 0
content_end_index, parsed_section, section_key = None, None, None
parsed_template = ParsedTemplate()
states = []
while True:
match = self._template_re.search(template, start_index)
if match is None:
break
match_index = match.start()
end_index = match.end()
matches = match.groupdict()
# Normalize the matches dictionary.
if matches['change'] is not None:
matches.update(tag='=', tag_key=matches['delims'])
elif matches['raw'] is not None:
matches.update(tag='&', tag_key=matches['raw_name'])
tag_type = matches['tag']
tag_key = matches['tag_key']
leading_whitespace = matches['whitespace']
# Standalone (non-interpolation) tags consume the entire line,
# both leading whitespace and trailing newline.
did_tag_begin_line = match_index == 0 or template[match_index - 1] in END_OF_LINE_CHARACTERS
did_tag_end_line = end_index == len(template) or template[end_index] in END_OF_LINE_CHARACTERS
is_tag_interpolating = tag_type in ['', '&']
if did_tag_begin_line and did_tag_end_line and not is_tag_interpolating:
if end_index < len(template):
end_index += template[end_index] == '\r' and 1 or 0
if end_index < len(template):
end_index += template[end_index] == '\n' and 1 or 0
elif leading_whitespace:
match_index += len(leading_whitespace)
leading_whitespace = ''
# Avoid adding spurious empty strings to the parse tree.
if start_index != match_index:
parsed_template.add(template[start_index:match_index])
start_index = end_index
if tag_type in ('#', '^'):
# Cache current state.
state = (tag_type, end_index, section_key, parsed_template)
states.append(state)
# Initialize new state
section_key, parsed_template = tag_key, ParsedTemplate()
continue
if tag_type == '/':
if tag_key != section_key:
raise ParsingError("Section end tag mismatch: %s != %s" % (tag_key, section_key))
# Restore previous state with newly found section data.
parsed_section = parsed_template
(tag_type, section_start_index, section_key, parsed_template) = states.pop()
node = self._make_section_node(template, tag_type, tag_key, parsed_section,
section_start_index, match_index)
else:
node = self._make_interpolation_node(tag_type, tag_key, leading_whitespace)
parsed_template.add(node)
# Avoid adding spurious empty strings to the parse tree.
if start_index != len(template):
parsed_template.add(template[start_index:])
return parsed_template | def parse(self, template) | Parse a template string and return a ParsedTemplate instance.
This method uses the current tag delimiter.
Arguments:
template: a unicode string that is the template to parse.
Returns:
a ParsedTemplate instance. | 3.010822 | 3.029631 | 0.993792 |
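A short example of the standalone-tag rule implemented above: section tags that sit on their own line swallow both their leading whitespace and trailing newline. This sketch uses the module-level `parse()` shown earlier:

```python
parsed = parse(u'{{#items}}\n- item\n{{/items}}\n')
# Both tag lines are standalone, so their newlines are consumed: the
# tree is a single _SectionNode wrapping u'- item\n', and rendering
# with an empty 'items' produces no stray blank lines.
```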
# TODO: switch to using a dictionary instead of a bunch of ifs and elifs.
if tag_type == '!':
return _CommentNode()
if tag_type == '=':
delimiters = tag_key.split()
self._change_delimiters(delimiters)
return _ChangeNode(delimiters)
if tag_type == '':
return _EscapeNode(tag_key)
if tag_type == '&':
return _LiteralNode(tag_key)
if tag_type == '>':
return _PartialNode(tag_key, leading_whitespace)
raise Exception("Invalid symbol for interpolation tag: %s" % repr(tag_type)) | def _make_interpolation_node(self, tag_type, tag_key, leading_whitespace) | Create and return a non-section node for the parse tree. | 4.07027 | 3.978452 | 1.023079 |
if tag_type == '#':
return _SectionNode(tag_key, parsed_section, self._delimiters,
template, section_start_index, section_end_index)
if tag_type == '^':
return _InvertedNode(tag_key, parsed_section)
raise Exception("Invalid symbol for section tag: %s" % repr(tag_type)) | def _make_section_node(self, template, tag_type, tag_key, parsed_section,
section_start_index, section_end_index) | Create and return a section node for the parse tree. | 3.680814 | 3.686269 | 0.99852 |
if spec.template_rel_path is not None:
return os.path.split(spec.template_rel_path)
# Otherwise, determine the file name separately.
locator = self.loader._make_locator()
# We do not use the ternary operator for Python 2.4 support.
if spec.template_name is not None:
template_name = spec.template_name
else:
template_name = locator.make_template_name(spec)
file_name = locator.make_file_name(template_name, spec.template_extension)
return (spec.template_rel_directory, file_name) | def _find_relative(self, spec) | Return the path to the template as a relative (dir, file_name) pair.
The directory returned is relative to the directory containing the
class definition of the given object. The method returns None for
this directory if the directory is unknown without first searching
the search directories. | 4.370255 | 3.985864 | 1.096438 |
if spec.template_path is not None:
return spec.template_path
dir_path, file_name = self._find_relative(spec)
locator = self.loader._make_locator()
if dir_path is None:
# Then we need to search for the path.
path = locator.find_object(spec, self.loader.search_dirs, file_name=file_name)
else:
obj_dir = locator.get_object_directory(spec)
path = os.path.join(obj_dir, dir_path, file_name)
return path | def _find(self, spec) | Find and return the path to the template associated to the instance. | 4.722676 | 4.157545 | 1.135929 |
if spec.template is not None:
return self.loader.unicode(spec.template, spec.template_encoding)
path = self._find(spec)
return self.loader.read(path, spec.template_encoding) | def load(self, spec) | Find and return the template associated to a TemplateSpec instance.
Returns the template as a unicode string.
Arguments:
spec: a TemplateSpec instance. | 6.095084 | 5.084846 | 1.198676 |
val = self.resolve_context(context, name)
if callable(val):
# _render_value() already returns a string, so we can return it directly.
return self._render_value(val(), context)
if not is_string(val):
return self.to_str(val)
return val | def fetch_string(self, context, name) | Get a value from the given context as a basestring instance. | 6.5194 | 6.128743 | 1.063742 |
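An illustration of the callable branch (a Mustache-style lambda). The context values here are hypothetical, and a real call would go through a context stack that resolves these names:

```python
# A callable value is invoked, and its return value is rendered as a
# template against the same context before being interpolated.
stack = {'planet': 'world', 'greeting': lambda: u'Hello, {{planet}}!'}
# fetch_string(stack, 'greeting') -> u'Hello, world!'
# fetch_string(stack, 'planet')   -> u'world' (already a string, returned as-is)
```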
data = self.resolve_context(context, name)
# From the spec:
#
# If the data is not of a list type, it is coerced into a list
# as follows: if the data is truthy (e.g. `!!data == true`),
# use a single-element list containing the data, otherwise use
# an empty list.
#
if not data:
data = []
else:
# The least brittle way to determine whether something
# supports iteration is by trying to call iter() on it:
#
# http://docs.python.org/library/functions.html#iter
#
# It is not sufficient, for example, to check whether the item
# implements __iter__() (the iteration protocol). There is
# also __getitem__() (the sequence protocol). In Python 2,
# strings do not implement __iter__(), but in Python 3 they do.
try:
iter(data)
except TypeError:
# Then the value does not support iteration.
data = [data]
else:
if is_string(data) or isinstance(data, dict):
# Do not treat strings and dicts (which are iterable) as lists.
data = [data]
# Otherwise, treat the value as a list.
return data | def fetch_section_data(self, context, name) | Fetch the value of a section as a list. | 4.764754 | 4.653904 | 1.023819 |
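A standalone mirror of the coercion rules, for illustration; this uses Python 3's `str`, whereas the original's `is_string()` also covers Python 2's `unicode`:

```python
def coerce_section_data(data):
    """Mirror of fetch_section_data()'s coercion, minus the context lookup."""
    if not data:
        return []            # falsy -> empty list: the section is skipped
    try:
        iter(data)
    except TypeError:
        return [data]        # non-iterable truthy value -> one-element list
    if isinstance(data, (str, dict)):
        return [data]        # strings and dicts are atoms, not lists
    return data              # any other iterable passes through unchanged

coerce_section_data(None)      # []
coerce_section_data(42)        # [42]
coerce_section_data('hi')      # ['hi']
coerce_section_data({'a': 1})  # [{'a': 1}]
coerce_section_data([1, 2])    # [1, 2]
```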
if not is_string(val):
# In case the template is an integer, for example.
val = self.to_str(val)
if type(val) is not unicode:
val = self.literal(val)
return self.render(val, context, delimiters) | def _render_value(self, val, context, delimiters=None) | Render an arbitrary value. | 5.16392 | 4.989949 | 1.034864 |
parsed_template = parse(template, delimiters)
return parsed_template.render(self, context_stack) | def render(self, template, context_stack, delimiters=None) | Render a unicode template string, and return as unicode.
Arguments:
template: a template string of type unicode (but not a proper
subclass of unicode).
context_stack: a ContextStack instance. | 4.838707 | 7.493419 | 0.645728 |
if WEBENGINE:
return self.dom.runJavaScript("{}".format(script))
else:
return self.dom.evaluateJavaScript("{}".format(script)) | def evaluate(self, script) | Evaluate script in page frame.
:param script: The script to evaluate. | 6.681581 | 9.564892 | 0.698553 |
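One caveat worth noting: `QWebEnginePage.runJavaScript()` is asynchronous and only delivers its result through an optional callback, so the WEBENGINE branch above effectively returns None. A hedged callback-based variant (`evaluate_async` is a hypothetical name):

```python
def evaluate_async(self, script, callback):
    # QtWebEngine: the result arrives asynchronously via the callback.
    if WEBENGINE:
        self.dom.runJavaScript("{}".format(script), callback)
    else:
        # QtWebKit's evaluateJavaScript() is synchronous, so call back directly.
        callback(self.dom.evaluateJavaScript("{}".format(script)))
```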