code | signature | docstring | loss_without_docstring | loss_with_docstring | factor
string | string | string | float64 | float64 | float64
---|---|---|---|---|---
return await self.execute_command('XGROUP SETID', name, group, stream_id) | async def xgroup_set_id(self, name: str, group: str, stream_id: str) -> bool | [NOTICE] Not officially released yet
:param name: name of the stream
:param group: name of the consumer group
:param stream_id:
If we provide $, then only new messages arriving
in the stream from now on will be provided to the consumers in the group.
If we specify 0 instead the consumer group will consume all the messages
in the stream history to start with.
Of course, you can specify any other valid ID | 4.597935 | 4.980526 | 0.923183 |
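A minimal usage sketch of the `xgroup_set_id` wrapper above, showing the `$` and `0` starting IDs its docstring describes. The stream/group names and the top-level `StrictRedis` import are assumptions, and the table itself marks the command as not officially released yet.

```python
import asyncio
from aredis import StrictRedis  # assumed import path for the aredis client

async def main():
    client = StrictRedis(host='127.0.0.1', port=6379)  # assumed local Redis with stream support
    # '$': the group will only see messages added to the stream from now on
    await client.xgroup_set_id('mystream', 'mygroup', '$')
    # '0': the group will re-consume the whole stream history
    await client.xgroup_set_id('mystream', 'mygroup', '0')

asyncio.get_event_loop().run_until_complete(main())
```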
return await self.execute_command('XGROUP DESTROY', name, group) | async def xgroup_destroy(self, name: str, group: str) -> int | [NOTICE] Not officially released yet
XGROUP is used in order to create, destroy and manage consumer groups.
:param name: name of the stream
:param group: name of the consumer group | 6.910826 | 7.419821 | 0.931401 |
return await self.execute_command('XGROUP DELCONSUMER', name, group, consumer) | async def xgroup_del_consumer(self, name: str, group: str, consumer: str) -> int | [NOTICE] Not officially released yet
XGROUP is used in order to create, destroy and manage consumer groups.
:param name: name of the stream
:param group: name of the consumer group
:param consumer: name of the consumer | 5.248905 | 5.673094 | 0.925228 |
if timeout is None:
timeout = 0
return await self.execute_command('BRPOPLPUSH', src, dst, timeout) | async def brpoplpush(self, src, dst, timeout=0) | Pop a value off the tail of ``src``, push it on the head of ``dst``
and then return it.
This command blocks until a value is in ``src`` or until ``timeout``
seconds elapse, whichever is first. A ``timeout`` value of 0 blocks
forever. | 2.892213 | 3.865427 | 0.748226 |
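A short usage sketch of the blocking pop-push described above; the `tasks:*` key names and local client setup are illustrative assumptions.

```python
import asyncio
from aredis import StrictRedis  # assumed import path

async def main():
    client = StrictRedis(host='127.0.0.1', port=6379)
    await client.rpush('tasks:pending', 'job-1')
    # Moves 'job-1' from the tail of tasks:pending to the head of tasks:working.
    # Returns None if nothing shows up within 5 seconds; timeout=0 would block forever.
    item = await client.brpoplpush('tasks:pending', 'tasks:working', timeout=5)
    print(item)  # b'job-1' (bytes unless decode_responses is enabled)

asyncio.get_event_loop().run_until_complete(main())
```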
return await self.execute_command('LINSERT', name, where, refvalue, value) | async def linsert(self, name, where, refvalue, value) | Insert ``value`` in list ``name`` either immediately before or after
[``where``] ``refvalue``
Returns the new length of the list on success or -1 if ``refvalue``
is not in the list. | 3.043392 | 5.003938 | 0.608199 |
return await self.execute_command('LTRIM', name, start, end) | async def ltrim(self, name, start, end) | Trim the list ``name``, removing all values not within the slice
between ``start`` and ``end``
``start`` and ``end`` can be negative numbers just like
Python slicing notation | 3.688256 | 5.018112 | 0.734989 |
try:
value = await self.brpop(src, timeout=timeout)
if value is None:
return None
except TimeoutError:
# Timeout was reached
return None
await self.lpush(dst, value[1])
return value[1] | async def brpoplpush(self, src, dst, timeout=0) | Pop a value off the tail of ``src``, push it on the head of ``dst``
and then return it.
This command blocks until a value is in ``src`` or until ``timeout``
seconds elapse, whichever is first. A ``timeout`` value of 0 blocks
forever.
Cluster impl:
Call brpop() then send the result into lpush()
Operation is no longer atomic. | 3.105861 | 3.563928 | 0.871471 |
value = await self.rpop(src)
if value:
await self.lpush(dst, value)
return value
return None | async def rpoplpush(self, src, dst) | RPOP a value off of the ``src`` list and atomically LPUSH it
on to the ``dst`` list. Returns the value.
Cluster impl:
Call rpop() then send the result into lpush()
Operation is no longer atomic. | 3.418008 | 3.815992 | 0.895706 |
if (start is None and num is not None) or \
(start is not None and num is None):
raise RedisError("RedisError: ``start`` and ``num`` must both be specified")
try:
data_type = b(await self.type(name))
if data_type == b("none"):
return []
elif data_type == b("set"):
data = list(await self.smembers(name))[:]
elif data_type == b("list"):
data = await self.lrange(name, 0, -1)
else:
raise RedisClusterException("Unable to sort data type : {0}".format(data_type))
if by is not None:
# _sort_using_by_arg mutates data so we don't
# need a return value.
data = await self._sort_using_by_arg(data, by, alpha)
elif not alpha:
data.sort(key=self._strtod_key_func)
else:
data.sort()
if desc:
data = data[::-1]
if not (start is None and num is None):
data = data[start:start + num]
if get:
data = await self._retrive_data_from_sort(data, get)
if store is not None:
if data_type == b("set"):
await self.delete(store)
await self.rpush(store, *data)
elif data_type == b("list"):
await self.delete(store)
await self.rpush(store, *data)
else:
raise RedisClusterException("Unable to store sorted data for data type : {0}".format(data_type))
return len(data)
if groups:
if not get or isinstance(get, str) or len(get) < 2:
raise DataError('when using "groups" the "get" argument '
'must be specified and contain at least '
'two keys')
n = len(get)
return list(zip(*[data[i::n] for i in range(n)]))
else:
return data
except KeyError:
return [] | async def sort(self, name, start=None, num=None, by=None, get=None, desc=False, alpha=False, store=None, groups=None) | Sort and return the list, set or sorted set at ``name``.
:start: and :num:
allow for paging through the sorted data
:by:
allows using an external key to weight and sort the items.
Use an "*" to indicate where in the key the item value is located
:get:
allows for returning items from external keys rather than the
sorted data itself. Use an "*" to indicate where in the key
the item value is located
:desc:
allows for reversing the sort
:alpha:
allows for sorting lexicographically rather than numerically
:store:
allows for storing the result of the sort into the key `store`
ClusterImpl:
A full implementation of the server side sort mechanics because many of the
options work on multiple keys that can exist on multiple servers. | 2.692785 | 2.680829 | 1.00446 |
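To make the client-side sort above concrete, here is a hedged sketch using the documented `by`, `desc` and `store` options; the key names, weight values, and the `StrictRedisCluster` startup node are assumptions, not values from the table.

```python
import asyncio
from aredis import StrictRedisCluster  # assumed import path for the cluster client

async def main():
    client = StrictRedisCluster(startup_nodes=[{'host': '127.0.0.1', 'port': 7000}])
    await client.rpush('mylist', '3', '1', '2')
    print(await client.sort('mylist'))               # ascending numeric sort
    print(await client.sort('mylist', desc=True))    # reversed
    # External weights: weight_1, weight_2, weight_3 decide the order of 1, 2, 3
    for item, weight in (('1', 30), ('2', 10), ('3', 20)):
        await client.set('weight_{0}'.format(item), weight)
    print(await client.sort('mylist', by='weight_*'))  # items ordered by their weight keys
    await client.sort('mylist', store='mylist:sorted')  # result written under another key

asyncio.get_event_loop().run_until_complete(main())
```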
if get is not None:
if isinstance(get, str):
get = [get]
new_data = []
for k in data:
for g in get:
single_item = await self._get_single_item(k, g)
new_data.append(single_item)
data = new_data
return data | async def _retrive_data_from_sort(self, data, get) | Used by sort() | 2.581642 | 2.400606 | 1.075412 |
if getattr(k, "decode", None):
k = k.decode("utf-8")
if '*' in g:
g = g.replace('*', k)
if '->' in g:
key, hash_key = g.split('->')
# '->' means the value lives in a hash field, so read it with HGET
single_item = await self.hget(key, hash_key)
else:
single_item = await self.get(g)
elif '#' in g:
single_item = k
else:
single_item = None
return b(single_item) | async def _get_single_item(self, k, g) | Used by sort() | 3.646766 | 3.512961 | 1.038089 |
if getattr(by, "decode", None):
by = by.decode("utf-8")
async def _by_key(arg):
if getattr(arg, "decode", None):
arg = arg.decode("utf-8")
key = by.replace('*', arg)
if '->' in by:
key, hash_key = key.split('->')
v = await self.hget(key, hash_key)
if alpha:
return v
else:
return float(v)
else:
return await self.get(key)
sorted_data = []
for d in data:
sorted_data.append((d, await _by_key(d)))
return [x[0] for x in sorted(sorted_data, key=lambda x: x[1])] | async def _sort_using_by_arg(self, data, by, alpha) | Used by sort() | 2.751429 | 2.711377 | 1.014772 |
def inner(*args, **kwargs):
raise RedisClusterException(
"ERROR: Calling pipelined function {0} is blocked when running redis in cluster mode...".format(
func.__name__))
return inner | def block_pipeline_command(func) | Raises an error because some pipelined commands should be blocked when running in cluster-mode | 12.009478 | 7.629027 | 1.574182 |
command_name = args[0]
conn = self.connection
# if this is the first call, we need a connection
if not conn:
conn = self.connection_pool.get_connection()
self.connection = conn
try:
await conn.send_command(*args)
return await self.parse_response(conn, command_name, **options)
except (ConnectionError, TimeoutError) as e:
conn.disconnect()
if not conn.retry_on_timeout and isinstance(e, TimeoutError):
raise
# if we're not already watching, we can safely retry the command
try:
if not self.watching:
await conn.send_command(*args)
return await self.parse_response(conn, command_name, **options)
except ConnectionError:
# the retry failed so cleanup.
conn.disconnect()
await self.reset()
raise | async def immediate_execute_command(self, *args, **options) | Execute a command immediately, but don't auto-retry on a
ConnectionError if we're already WATCHing a variable. Used when
issuing WATCH or subsequent commands retrieving their values but before
MULTI is called. | 3.428548 | 3.063653 | 1.119105 |
"Execute all the commands in the current pipeline"
stack = self.command_stack
if not stack:
return []
if self.scripts:
await self.load_scripts()
if self.transaction or self.explicit_transaction:
exec = self._execute_transaction
else:
exec = self._execute_pipeline
conn = self.connection
if not conn:
conn = self.connection_pool.get_connection()
# assign to self.connection so reset() releases the connection
# back to the pool after we're done
self.connection = conn
try:
return await exec(conn, stack, raise_on_error)
except (ConnectionError, TimeoutError) as e:
conn.disconnect()
if not conn.retry_on_timeout and isinstance(e, TimeoutError):
raise
# if we were watching a variable, the watch is no longer valid
# since this connection has died. raise a WatchError, which
# indicates the user should retry his transaction. If this is more
# than a temporary failure, the WATCH that the user next issues
# will fail, propagating the real ConnectionError
if self.watching:
raise WatchError("A ConnectionError occurred while watching "
"one or more keys")
# otherwise, it's safe to retry since the transaction isn't
# predicated on any state
return await exec(conn, stack, raise_on_error)
finally:
await self.reset() | async def execute(self, raise_on_error=True) | Execute all the commands in the current pipeline | 6.195215 | 5.865147 | 1.056276 |
if len(args) <= 1:
raise RedisClusterException("No way to dispatch this command to Redis Cluster. Missing key.")
command = args[0]
if command in ['EVAL', 'EVALSHA']:
numkeys = args[2]
keys = args[3: 3 + numkeys]
slots = {self.connection_pool.nodes.keyslot(key) for key in keys}
if len(slots) != 1:
raise RedisClusterException("{0} - all keys must map to the same key slot".format(command))
return slots.pop()
key = args[1]
return self.connection_pool.nodes.keyslot(key) | def _determine_slot(self, *args) | Figure out which slot to use based on the command and its arguments | 4.016119 | 3.894173 | 1.031315 |
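The slot check above means multi-key EVAL/EVALSHA calls only pass when every key hashes to the same slot; the usual way to guarantee that is a shared `{hash-tag}` in the key names. A small illustrative sketch, assuming the cluster client exposes the redis-py-style `eval()` helper and the key names are hypothetical:

```python
import asyncio
from aredis import StrictRedisCluster  # assumed import path

async def main():
    client = StrictRedisCluster(startup_nodes=[{'host': '127.0.0.1', 'port': 7000}])
    script = "return redis.call('GET', KEYS[1]) .. redis.call('GET', KEYS[2])"
    await client.set('{user:42}:first', 'a')
    await client.set('{user:42}:last', 'b')
    # Both keys share the {user:42} hash tag, so they map to the same slot
    # and _determine_slot() accepts the command instead of raising.
    result = await client.eval(script, 2, '{user:42}:first', '{user:42}:last')
    print(result)  # b'ab'

asyncio.get_event_loop().run_until_complete(main())
```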
self.command_stack = []
self.scripts = set()
self.watches = []
# clean up the other instance attributes
self.watching = False
self.explicit_transaction = False | def reset(self) | Reset back to empty pipeline. | 15.59867 | 13.047722 | 1.195509 |
"Watches the values at keys ``names``"
for name in names:
slot = self._determine_slot('WATCH', name)
dist_node = self.connection_pool.get_node_by_slot(slot)
if node.get('name') != dist_node['name']:
# raise an error if commands in a transaction cannot hash to the same node
if len(node) > 0:
raise ClusterTransactionError("Keys in request don't hash to the same node")
if self.explicit_transaction:
raise RedisError('Cannot issue a WATCH after a MULTI')
await conn.send_command('WATCH', *names)
return await conn.read_response() | async def _watch(self, node, conn, names) | Watches the values at keys ``names`` | 7.667259 | 6.333812 | 1.210528 |
"Unwatches all previously specified keys"
await conn.send_command('UNWATCH')
res = await conn.read_response()
return self.watching and res or True | async def _unwatch(self, conn) | Unwatches all previously specified keys | 9.623599 | 6.93293 | 1.3881 |
connection = self.connection
commands = self.commands
# We are going to clobber the commands with the write, so go ahead
# and ensure that nothing is sitting there from a previous run.
for c in commands:
c.result = None
# build up all commands into a single request to increase network perf
# send all the commands and catch connection and timeout errors.
try:
await connection.send_packed_command(connection.pack_commands([c.args for c in commands]))
except (ConnectionError, TimeoutError) as e:
for c in commands:
c.result = e | async def write(self) | Code borrowed from StrictRedis so it can be fixed | 7.858352 | 7.027457 | 1.118236 |
self._command_stack.extend(['SET', type, offset, value])
return self | def set(self, type, offset, value) | Set the specified bit field and return its old value. | 9.033679 | 8.737225 | 1.03393 |
self._command_stack.extend(['GET', type, offset])
return self | def get(self, type, offset) | Returns the specified bit field. | 10.632668 | 10.240195 | 1.038327 |
self._command_stack.extend(['INCRBY', type, offset, increment])
return self | def incrby(self, type, offset, increment) | Increments or decrements (if a negative increment is given)
the specified bit field and returns the new value. | 6.633677 | 6.739145 | 0.98435 |
params = [key]
if start is not None and end is not None:
params.append(start)
params.append(end)
elif (start is not None and end is None) or \
(end is not None and start is None):
raise RedisError("Both start and end must be specified")
return await self.execute_command('BITCOUNT', *params) | async def bitcount(self, key, start=None, end=None) | Returns the count of set bits in the value of ``key``. Optional
``start`` and ``end`` parameters indicate which bytes to consider | 2.157415 | 2.347941 | 0.918854 |
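A quick sketch of the byte-range behaviour of `bitcount`, using the classic "foobar" example from the Redis documentation; only the client setup is an assumption.

```python
import asyncio
from aredis import StrictRedis  # assumed import path

async def main():
    client = StrictRedis(host='127.0.0.1', port=6379)
    await client.set('mykey', 'foobar')
    print(await client.bitcount('mykey'))        # 26 -> set bits in the whole value
    print(await client.bitcount('mykey', 0, 0))  # 4  -> set bits in the first byte ('f')
    print(await client.bitcount('mykey', 1, 1))  # 6  -> set bits in the second byte ('o')

asyncio.get_event_loop().run_until_complete(main())
```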
return await self.execute_command('GETRANGE', key, start, end) | async def getrange(self, key, start, end) | Returns the substring of the string value stored at ``key``,
determined by the offsets ``start`` and ``end`` (both are inclusive) | 4.505001 | 4.465023 | 1.008954 |
args = list_or_args(keys, args)
return await self.execute_command('MGET', *args) | async def mget(self, keys, *args) | Returns a list of values ordered identically to ``keys`` | 4.304781 | 4.104484 | 1.0488 |
if args:
if len(args) != 1 or not isinstance(args[0], dict):
raise RedisError('MSETNX requires **kwargs or a single '
'dict arg')
kwargs.update(args[0])
items = []
for pair in iteritems(kwargs):
items.extend(pair)
return await self.execute_command('MSETNX', *items) | async def msetnx(self, *args, **kwargs) | Sets key/values based on a mapping if none of the keys are already set.
Mapping can be supplied as a single dictionary argument or as kwargs.
Returns a boolean indicating if the operation was successful. | 3.385063 | 3.324811 | 1.018122 |
if isinstance(time_ms, datetime.timedelta):
ms = int(time_ms.microseconds / 1000)
time_ms = (time_ms.seconds + time_ms.days * 24 * 3600) * 1000 + ms
return await self.execute_command('PSETEX', name, time_ms, value) | async def psetex(self, name, time_ms, value) | Set the value of key ``name`` to ``value`` that expires in ``time_ms``
milliseconds. ``time_ms`` can be represented by an integer or a Python
timedelta object | 1.933149 | 1.949964 | 0.991377 |
if isinstance(time, datetime.timedelta):
time = time.seconds + time.days * 24 * 3600
return await self.execute_command('SETEX', name, time, value) | async def setex(self, name, time, value) | Set the value of key ``name`` to ``value`` that expires in ``time``
seconds. ``time`` can be represented by an integer or a Python
timedelta object. | 2.420004 | 2.222026 | 1.089098 |
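Both `setex` and `psetex` above accept either an integer or a `datetime.timedelta`; this sketch shows the timedelta form. The key names and local client are assumptions.

```python
import asyncio
import datetime
from aredis import StrictRedis  # assumed import path

async def main():
    client = StrictRedis(host='127.0.0.1', port=6379)
    # Expires in 300 seconds; the timedelta is converted to seconds internally
    await client.setex('session:1', datetime.timedelta(minutes=5), 'payload')
    # Millisecond variant; the timedelta is converted to milliseconds
    await client.psetex('session:2', datetime.timedelta(milliseconds=1500), 'payload')
    print(await client.ttl('session:1'))   # roughly 300
    print(await client.pttl('session:2'))  # roughly 1500

asyncio.get_event_loop().run_until_complete(main())
```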
return await self.execute_command('SUBSTR', name, start, end) | async def substr(self, name, start, end=-1) | Return a substring of the string at key ``name``. ``start`` and ``end``
are 0-based integers specifying the portion of the string to return. | 4.546422 | 4.071025 | 1.116776 |
res = list()
for arg in list_or_args(keys, args):
res.append(await self.get(arg))
return res | async def mget(self, keys, *args) | Returns a list of values ordered identically to ``keys``
Cluster impl:
Iterate all keys and send GET for each key.
This will be a lot slower than a normal mget call in StrictRedis.
Operation is no longer atomic. | 5.535994 | 6.620634 | 0.836173 |
if args:
if len(args) != 1 or not isinstance(args[0], dict):
raise RedisError('MSET requires **kwargs or a single dict arg')
kwargs.update(args[0])
for pair in iteritems(kwargs):
await self.set(pair[0], pair[1])
return True | async def mset(self, *args, **kwargs) | Sets key/values based on a mapping. Mapping can be supplied as a single
dictionary argument or as kwargs.
Cluster impl:
Iterate over all items and do SET on each (k, v) pair
Operation is no longer atomic. | 3.638099 | 3.561477 | 1.021514 |
if args:
if len(args) != 1 or not isinstance(args[0], dict):
raise RedisError('MSETNX requires **kwargs or a single dict arg')
kwargs.update(args[0])
# Iterate over all items and fail fast if one value is True.
for k, _ in kwargs.items():
if await self.get(k):
return False
return await self.mset(**kwargs) | async def msetnx(self, *args, **kwargs) | Sets key/values based on a mapping if none of the keys are already set.
Mapping can be supplied as a single dictionary argument or as kwargs.
Returns a boolean indicating if the operation was successful.
Cluster impl:
Iterate over all items and do GET to determine that none of the keys exist.
If true then call mset() on all keys. | 4.781088 | 3.805148 | 1.256479 |
if not response or not options['withscores']:
return response
score_cast_func = options.get('score_cast_func', float)
it = iter(response)
return list(zip(it, map(score_cast_func, it))) | def zset_score_pairs(response, **options) | If ``withscores`` is specified in the options, return the response as
a list of (value, score) pairs | 3.59169 | 3.398437 | 1.056865 |
pieces = []
if args:
if len(args) % 2 != 0:
raise RedisError("ZADD requires an equal number of "
"values and scores")
pieces.extend(args)
for pair in iteritems(kwargs):
pieces.append(pair[1])
pieces.append(pair[0])
return await self.execute_command('ZADD', name, *pieces) | async def zadd(self, name, *args, **kwargs) | Set any number of score, element-name pairs to the key ``name``. Pairs
can be specified in two ways:
As *args, in the form of: score1, name1, score2, name2, ...
or as **kwargs, in the form of: name1=score1, name2=score2, ...
The following example would add four values to the 'my-key' key:
redis.zadd('my-key', 1.1, 'name1', 2.2, 'name2', name3=3.3, name4=4.4) | 2.736482 | 2.689723 | 1.017384 |
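The two calling conventions described in the `zadd` docstring, taken almost verbatim from its own example; only the client construction is an assumption.

```python
import asyncio
from aredis import StrictRedis  # assumed import path

async def main():
    client = StrictRedis(host='127.0.0.1', port=6379)
    # Positional form: score1, name1, score2, name2, ...
    await client.zadd('my-key', 1.1, 'name1', 2.2, 'name2')
    # Keyword form: member=score
    await client.zadd('my-key', name3=3.3, name4=4.4)
    print(await client.zrange('my-key', 0, -1, withscores=True))

asyncio.get_event_loop().run_until_complete(main())
```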
if not option:
raise RedisError("ZADDOPTION must take options")
options = set(opt.upper() for opt in option.split())
if options - VALID_ZADD_OPTIONS:
raise RedisError("ZADD only takes XX, NX, CH, or INCR")
if 'NX' in options and 'XX' in options:
raise RedisError("ZADD only takes one of XX or NX")
pieces = list(options)
members = []
if args:
if len(args) % 2 != 0:
raise RedisError("ZADD requires an equal number of "
"values and scores")
members.extend(args)
for pair in iteritems(kwargs):
members.append(pair[1])
members.append(pair[0])
if 'INCR' in options and len(members) != 2:
raise RedisError("ZADD with INCR only takes one score-name pair")
return await self.execute_command('ZADD', name, *pieces, *members) | async def zaddoption(self, name, option=None, *args, **kwargs) | Differs from zadd in that you can set either 'XX' or 'NX' option as
described here: https://redis.io/commands/zadd. Only for Redis 3.0.2 or
later.
The following example would add four values to the 'my-key' key:
redis.zaddoption('my-key', 'XX', 1.1, 'name1', 2.2, 'name2', name3=3.3, name4=4.4)
redis.zaddoption('my-key', 'NX CH', name1=2.2) | 3.279161 | 3.092003 | 1.06053 |
if desc:
return await self.zrevrange(name, start, end, withscores,
score_cast_func)
pieces = ['ZRANGE', name, start, end]
if withscores:
pieces.append(b('WITHSCORES'))
options = {
'withscores': withscores,
'score_cast_func': score_cast_func
}
return await self.execute_command(*pieces, **options) | async def zrange(self, name, start, end, desc=False, withscores=False,
score_cast_func=float) | Return a range of values from sorted set ``name`` between
``start`` and ``end`` sorted in ascending order.
``start`` and ``end`` can be negative, indicating the end of the range.
``desc`` a boolean indicating whether to sort the results descendingly
``withscores`` indicates to return the scores along with the values.
The return type is a list of (value, score) pairs
``score_cast_func`` a callable used to cast the score return value | 2.222293 | 3.296262 | 0.674186 |
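A sketch of the `withscores` and `score_cast_func` options documented above; the key name and the `int` cast are illustrative choices.

```python
import asyncio
from aredis import StrictRedis  # assumed import path

async def main():
    client = StrictRedis(host='127.0.0.1', port=6379)
    await client.zadd('scores', 10, 'alice', 20, 'bob', 30, 'carol')
    # (member, score) pairs, with scores cast by score_cast_func instead of float
    print(await client.zrange('scores', 0, -1, withscores=True, score_cast_func=int))
    # desc=True delegates to ZREVRANGE under the hood
    print(await client.zrange('scores', 0, -1, desc=True))

asyncio.get_event_loop().run_until_complete(main())
```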
if (start is not None and num is None) or \
(num is not None and start is None):
raise RedisError("``start`` and ``num`` must both be specified")
pieces = ['ZRANGEBYLEX', name, min, max]
if start is not None and num is not None:
pieces.extend([b('LIMIT'), start, num])
return await self.execute_command(*pieces) | async def zrangebylex(self, name, min, max, start=None, num=None) | Return the lexicographical range of values from sorted set ``name``
between ``min`` and ``max``.
If ``start`` and ``num`` are specified, then return a slice of the
range. | 2.334737 | 2.365734 | 0.986897 |
return await self.execute_command('ZREMRANGEBYSCORE', name, min, max) | async def zremrangebyscore(self, name, min, max) | Remove all elements in the sorted set ``name`` with scores
between ``min`` and ``max``. Returns the number of elements removed. | 3.021283 | 3.147751 | 0.959823 |
if (start is not None and num is None) or \
(num is not None and start is None):
raise RedisError("``start`` and ``num`` must both be specified")
pieces = ['ZREVRANGEBYSCORE', name, max, min]
if start is not None and num is not None:
pieces.extend([b('LIMIT'), start, num])
if withscores:
pieces.append(b('WITHSCORES'))
options = {
'withscores': withscores,
'score_cast_func': score_cast_func
}
return await self.execute_command(*pieces, **options) | async def zrevrangebyscore(self, name, max, min, start=None, num=None,
withscores=False, score_cast_func=float) | Return a range of values from the sorted set ``name`` with scores
between ``min`` and ``max`` in descending order.
If ``start`` and ``num`` are specified, then return a slice
of the range.
``withscores`` indicates to return the scores along with the values.
The return type is a list of (value, score) pairs
``score_cast_func`` a callable used to cast the score return value | 2.00551 | 2.332296 | 0.859887 |
if not response or not options['groups']:
return response
n = options['groups']
return list(zip(*[response[i::n] for i in range(n)])) | def sort_return_tuples(response, **options) | If ``groups`` is specified, return the response as a list of
n-element tuples with n being the value found in options['groups'] | 4.893571 | 3.097601 | 1.579794 |
if isinstance(time, datetime.timedelta):
time = time.seconds + time.days * 24 * 3600
return await self.execute_command('EXPIRE', name, time) | async def expire(self, name, time) | Set an expire flag on key ``name`` for ``time`` seconds. ``time``
can be represented by an integer or a Python timedelta object. | 2.664906 | 2.218453 | 1.201245 |
if isinstance(time, datetime.timedelta):
ms = int(time.microseconds / 1000)
time = (time.seconds + time.days * 24 * 3600) * 1000 + ms
return await self.execute_command('PEXPIRE', name, time) | async def pexpire(self, name, time) | Set an expire flag on key ``name`` for ``time`` milliseconds.
``time`` can be represented by an integer or a Python timedelta
object. | 2.001271 | 1.963889 | 1.019035 |
if (start is not None and num is None) or \
(num is not None and start is None):
raise RedisError("``start`` and ``num`` must both be specified")
pieces = [name]
if by is not None:
pieces.append(b('BY'))
pieces.append(by)
if start is not None and num is not None:
pieces.append(b('LIMIT'))
pieces.append(start)
pieces.append(num)
if get is not None:
# If get is a string assume we want to get a single value.
# Otherwise assume it's an iterable and we want to get multiple
# values. We can't just iterate blindly because strings are
# iterable.
if isinstance(get, str):
pieces.append(b('GET'))
pieces.append(get)
else:
for g in get:
pieces.append(b('GET'))
pieces.append(g)
if desc:
pieces.append(b('DESC'))
if alpha:
pieces.append(b('ALPHA'))
if store is not None:
pieces.append(b('STORE'))
pieces.append(store)
if groups:
if not get or isinstance(get, str) or len(get) < 2:
raise DataError('when using "groups" the "get" argument '
'must be specified and contain at least '
'two keys')
options = {'groups': len(get) if groups else None}
return await self.execute_command('SORT', *pieces, **options) | async def sort(self, name, start=None, num=None, by=None, get=None,
desc=False, alpha=False, store=None, groups=False) | Sort and return the list, set or sorted set at ``name``.
``start`` and ``num`` allow for paging through the sorted data
``by`` allows using an external key to weight and sort the items.
Use an "*" to indicate where in the key the item value is located
``get`` allows for returning items from external keys rather than the
sorted data itself. Use an "*" to indicate where in the key
the item value is located
``desc`` allows for reversing the sort
``alpha`` allows for sorting lexicographically rather than numerically
``store`` allows for storing the result of the sort into
the key ``store``
``groups`` if set to True and if ``get`` contains at least two
elements, sort will return a list of tuples, each containing the
values fetched from the arguments to ``get``. | 2.348425 | 2.344203 | 1.001801 |
pieces = [cursor]
if match is not None:
pieces.extend([b('MATCH'), match])
if count is not None:
pieces.extend([b('COUNT'), count])
return await self.execute_command('SCAN', *pieces) | async def scan(self, cursor=0, match=None, count=None) | Incrementally return lists of key names. Also return a cursor
indicating the scan position.
``match`` allows for filtering the keys by pattern
``count`` allows for hinting the minimum number of returns | 3.08007 | 3.566683 | 0.863567 |
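A typical cursor loop over the `scan` wrapper above. It assumes the client's response callback returns a `(cursor, keys)` pair per call, as in redis-py, and the `user:*` pattern is illustrative.

```python
import asyncio
from aredis import StrictRedis  # assumed import path

async def main():
    client = StrictRedis(host='127.0.0.1', port=6379)
    cursor, keys = 0, []
    while True:
        # MATCH filters key names server-side; COUNT is only a batching hint
        cursor, batch = await client.scan(cursor=cursor, match='user:*', count=100)
        keys.extend(batch)
        if int(cursor) == 0:  # a zero cursor means the iteration is complete
            break
    print(len(keys))

asyncio.get_event_loop().run_until_complete(main())
```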
if src == dst:
raise ResponseError("source and destination objects are the same")
data = await self.dump(src)
if data is None:
raise ResponseError("no such key")
ttl = await self.pttl(src)
if ttl is None or ttl < 1:
ttl = 0
await self.delete(dst)
await self.restore(dst, ttl, data)
await self.delete(src)
return True | async def rename(self, src, dst) | Rename key ``src`` to ``dst``
Cluster impl:
This operation is no longer atomic because each key must be queried
then set in separate calls, since the keys may end up on different cluster nodes | 4.022133 | 3.822142 | 1.052324 |
count = 0
for arg in names:
count += await self.execute_command('DEL', arg)
return count | async def delete(self, *names) | "Delete one or more keys specified by ``names``"
Cluster impl:
Iterate all keys and send DELETE for each key.
This will go a lot slower than a normal delete call in StrictRedis.
Operation is no longer atomic. | 6.351367 | 6.411769 | 0.99058 |
if not await self.exists(dst):
return await self.rename(src, dst)
return False | async def renamenx(self, src, dst) | Rename key ``src`` to ``dst`` if ``dst`` doesn't already exist
Cluster impl:
Check that the dst key does not exist, then call rename().
Operation is no longer atomic. | 4.513905 | 4.904753 | 0.920312 |
if len(values) % 3 != 0:
raise RedisError("GEOADD requires places with lon, lat and name"
" values")
return await self.execute_command('GEOADD', name, *values) | async def geoadd(self, name, *values) | Add the specified geospatial items to the specified key identified
by the ``name`` argument. The Geospatial items are given as ordered
members of the ``values`` argument, each item or place is formed by
the triad longitude, latitude and name. | 5.395041 | 4.361152 | 1.237068 |
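A sketch of the longitude/latitude/member triads that `geoadd` expects, using the Sicily example from the Redis documentation, followed by a `georadius` query; the client setup is an assumption.

```python
import asyncio
from aredis import StrictRedis  # assumed import path

async def main():
    client = StrictRedis(host='127.0.0.1', port=6379)
    # Each place is a (longitude, latitude, name) triad, flattened into *values
    await client.geoadd('Sicily',
                        13.361389, 38.115556, 'Palermo',
                        15.087269, 37.502669, 'Catania')
    # Members within 200 km of the given point, with their distances, nearest first
    print(await client.georadius('Sicily', 15.0, 37.0, 200,
                                 unit='km', withdist=True, sort='ASC'))

asyncio.get_event_loop().run_until_complete(main())
```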
return await self._georadiusgeneric('GEORADIUS',
name, longitude, latitude, radius,
unit=unit, withdist=withdist,
withcoord=withcoord, withhash=withhash,
count=count, sort=sort, store=store,
store_dist=store_dist) | async def georadius(self, name, longitude, latitude, radius, unit=None,
withdist=False, withcoord=False, withhash=False, count=None,
sort=None, store=None, store_dist=None) | Return the members of the specified key identified by the
``name`` argument which are within the borders of the area specified
with the ``latitude`` and ``longitude`` location and the maximum
distance from the center specified by the ``radius`` value.
The units must be one of the following: m, km, mi, ft (meters by default).
``withdist`` indicates to return the distances of each place.
``withcoord`` indicates to return the latitude and longitude of
each place.
``withhash`` indicates to return the geohash string of each place.
``count`` indicates to return the number of elements up to N.
``sort`` indicates to return the places in a sorted way, ASC for
nearest to farthest and DESC for farthest to nearest.
``store`` indicates to save the places names in a sorted set named
with a specific key, each element of the destination sorted set is
populated with the score got from the original geo sorted set.
``store_dist`` indicates to save the places names in a sorted set
named with a specific key, instead of ``store`` the sorted set
destination score is set with the distance. | 2.018039 | 2.374146 | 0.850007 |
return await self._georadiusgeneric('GEORADIUSBYMEMBER',
name, member, radius, unit=unit,
withdist=withdist, withcoord=withcoord,
withhash=withhash, count=count,
sort=sort, store=store,
store_dist=store_dist) | async def georadiusbymember(self, name, member, radius, unit=None,
withdist=False, withcoord=False, withhash=False,
count=None, sort=None, store=None, store_dist=None) | This command is exactly like ``georadius`` with the sole difference
that instead of taking, as the center of the area to query, a longitude
and latitude value, it takes the name of a member already existing
inside the geospatial index represented by the sorted set. | 1.982929 | 2.158461 | 0.918677 |
"Round-robin slave balancer"
slaves = await self.sentinel_manager.discover_slaves(self.service_name)
slave_address = list()
if slaves:
if self.slave_rr_counter is None:
self.slave_rr_counter = random.randint(0, len(slaves) - 1)
for _ in range(len(slaves)):
self.slave_rr_counter = (self.slave_rr_counter + 1) % len(slaves)
slave_address.append(slaves[self.slave_rr_counter])
return slave_address
# Fallback to the master connection
try:
return await self.get_master_address()
except MasterNotFoundError:
pass
raise SlaveNotFoundError('No slave found for %r' % (self.service_name)) | async def rotate_slaves(self) | Round-robin slave balancer | 3.272641 | 2.994674 | 1.09282 |
kwargs['is_master'] = True
connection_kwargs = dict(self.connection_kwargs)
connection_kwargs.update(kwargs)
return redis_class(connection_pool=connection_pool_class(
service_name, self, **connection_kwargs)) | def master_for(self, service_name, redis_class=StrictRedis,
connection_pool_class=SentinelConnectionPool, **kwargs) | Returns a redis client instance for the ``service_name`` master.
A SentinelConnectionPool class is used to retrieve the master's
address before establishing a new connection.
NOTE: If the master's address has changed, any cached connections to
the old master are closed.
By default clients will be a redis.StrictRedis instance. Specify a
different class to the ``redis_class`` argument if you desire
something different.
The ``connection_pool_class`` specifies the connection pool to use.
The SentinelConnectionPool will be used by default.
All other keyword arguments are merged with any connection_kwargs
passed to this class and passed to the connection pool as keyword
arguments to be used to initialize Redis connections. | 3.035147 | 3.98238 | 0.762144 |
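A hedged sketch of discovering a master and slaves through Sentinel using the `master_for` / `slave_for` factories above; the `aredis.sentinel.Sentinel` import path, the sentinel address, and the `mymaster` service name are assumptions.

```python
import asyncio
from aredis.sentinel import Sentinel  # assumed import path for the Sentinel helper

async def main():
    sentinel = Sentinel([('127.0.0.1', 26379)])  # assumed local sentinel
    master = sentinel.master_for('mymaster')     # StrictRedis bound to the current master
    replica = sentinel.slave_for('mymaster')     # StrictRedis round-robined over slaves
    await master.set('greeting', 'hello')
    print(await replica.get('greeting'))

asyncio.get_event_loop().run_until_complete(main())
```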
kwargs['is_master'] = False
connection_kwargs = dict(self.connection_kwargs)
connection_kwargs.update(kwargs)
return redis_class(connection_pool=connection_pool_class(
service_name, self, **connection_kwargs)) | def slave_for(self, service_name, redis_class=StrictRedis,
connection_pool_class=SentinelConnectionPool, **kwargs) | Returns redis client instance for the ``service_name`` slave(s).
A SentinelConnectionPool class is used to retrieve the slave's
address before establishing a new connection.
By default clients will be a redis.StrictRedis instance. Specify a
different class to the ``redis_class`` argument if you desire
something different.
The ``connection_pool_class`` specifies the connection pool to use.
The SentinelConnectionPool will be used by default.
All other keyword arguments are merged with any connection_kwargs
passed to this class and passed to the connection pool as keyword
arguments to be used to initialize Redis connections. | 3.102836 | 4.244632 | 0.731002 |
sleep = self.sleep
token = b(uuid.uuid1().hex)
if blocking is None:
blocking = self.blocking
if blocking_timeout is None:
blocking_timeout = self.blocking_timeout
stop_trying_at = None
if blocking_timeout is not None:
stop_trying_at = mod_time.time() + blocking_timeout
while True:
if await self.do_acquire(token):
self.local.token = token
return True
if not blocking:
return False
if stop_trying_at is not None and mod_time.time() > stop_trying_at:
return False
await asyncio.sleep(sleep, loop=self.redis.connection_pool.loop) | async def acquire(self, blocking=None, blocking_timeout=None) | Use Redis to hold a shared, distributed lock named ``name``.
Returns True once the lock is acquired.
If ``blocking`` is False, always return immediately. If the lock
was acquired, return True, otherwise return False.
``blocking_timeout`` specifies the maximum number of seconds to
wait trying to acquire the lock. | 2.950043 | 3.116401 | 0.946619 |
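A sketch of the acquire/release cycle documented above. It assumes the client exposes a redis-py-style `lock()` factory returning an instance of this lock class; the resource name and timeouts are illustrative.

```python
import asyncio
from aredis import StrictRedis  # assumed import path

async def main():
    client = StrictRedis(host='127.0.0.1', port=6379)
    lock = client.lock('resource:report', timeout=10)  # assumed lock() factory
    # Wait at most 5 seconds for the lock instead of failing immediately
    if await lock.acquire(blocking=True, blocking_timeout=5):
        try:
            pass  # ... critical section ...
        finally:
            await lock.release()
    else:
        print('could not acquire lock')

asyncio.get_event_loop().run_until_complete(main())
```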
"Releases the already acquired lock"
expected_token = self.local.token
if expected_token is None:
raise LockError("Cannot release an unlocked lock")
self.local.token = None
await self.do_release(expected_token) | async def release(self) | Releases the already acquired lock | 6.409477 | 5.716636 | 1.121197 |
if self.local.token is None:
raise LockError("Cannot extend an unlocked lock")
if self.timeout is None:
raise LockError("Cannot extend a lock with no timeout")
return await self.do_extend(additional_time) | async def extend(self, additional_time) | Adds more time to an already acquired lock.
``additional_time`` can be specified as an integer or a float, both
representing the number of seconds to add. | 5.069791 | 4.746181 | 1.068183 |
sleep = self.sleep
token = b(uuid.uuid1().hex)
if blocking is None:
blocking = self.blocking
if blocking_timeout is None:
blocking_timeout = self.blocking_timeout
blocking_timeout = blocking_timeout or self.timeout
stop_trying_at = mod_time.time() + min(blocking_timeout, self.timeout)
while True:
if await self.do_acquire(token):
lock_acquired_at = mod_time.time()
if await self.check_lock_in_slaves(token):
check_finished_at = mod_time.time()
# if the time spent acquiring the lock is greater than the given timeout
# the lock should be released manually
if check_finished_at > stop_trying_at:
await self.do_release(token)
return False
self.local.token = token
# validity time is considered to be the
# initial validity time minus the time elapsed during check
await self.do_extend(lock_acquired_at - check_finished_at)
return True
else:
await self.do_release(token)
return False
if not blocking or mod_time.time() > stop_trying_at:
return False
await asyncio.sleep(sleep, loop=self.redis.connection_pool.loop) | async def acquire(self, blocking=None, blocking_timeout=None) | Use Redis to hold a shared, distributed lock named ``name``.
Returns True once the lock is acquired.
If ``blocking`` is False, always return immediately. If the lock
was acquired, return True, otherwise return False.
``blocking_timeout`` specifies the maximum number of seconds to
wait trying to acquire the lock. It should not be greater than
expire time of the lock | 4.093987 | 4.155936 | 0.985094 |
out = {}
for node in mapping:
for slot in node['slots']:
out[str(slot)] = node['id']
return out | def _nodes_slots_to_slots_nodes(self, mapping) | Converts a mapping of
{id: <node>, slots: (slot1, slot2)}
to
{slot1: <node>, slot2: <node>}
Operation is expensive so use with caution | 4.302594 | 3.672846 | 1.17146 |
cluster_nodes = self._nodes_slots_to_slots_nodes(await self.cluster_nodes())
res = list()
for slot in slots:
res.append(await self.execute_command('CLUSTER DELSLOTS', slot, node_id=cluster_nodes[slot]))
return res | async def cluster_delslots(self, *slots) | Set hash slots as unbound in the cluster.
It determines by itself which node each slot belongs to and sends the command there
Returns a list of the results for each processed slot. | 4.191792 | 4.264668 | 0.982911 |
if not isinstance(option, str) or option.upper() not in {'FORCE', 'TAKEOVER'}:
raise ClusterError('Wrong option provided')
return await self.execute_command('CLUSTER FAILOVER', option, node_id=node_id) | async def cluster_failover(self, node_id, option) | Forces a slave to perform a manual failover of its master
Sends to the specified node | 4.515627 | 5.028538 | 0.898 |
return await self.execute_command('CLUSTER MEET', host, port, node_id=node_id) | async def cluster_meet(self, node_id, host, port) | Force a node cluster to handshake with another node.
Sends to the specified node | 4.624657 | 6.617886 | 0.698812 |
option = 'SOFT' if soft else 'HARD'
return await self.execute_command('CLUSTER RESET', option, node_id=node_id) | async def cluster_reset(self, node_id, soft=True) | Reset a Redis Cluster node
If 'soft' is True then it will send 'SOFT' argument
If 'soft' is False then it will send 'HARD' argument
Sends to the specified node | 4.754493 | 5.002487 | 0.950426 |
option = 'SOFT' if soft else 'HARD'
res = list()
for node in await self.cluster_nodes():
res.append(
await self.execute_command(
'CLUSTER RESET', option, node_id=node['id']
))
return res | async def cluster_reset_all_nodes(self, soft=True) | Send CLUSTER RESET to all nodes in the cluster
If 'soft' is True then it will send 'SOFT' argument
If 'soft' is False then it will send 'HARD' argument
Sends to all nodes in the cluster | 4.386977 | 4.359213 | 1.006369 |
if state.upper() in {'IMPORTING', 'MIGRATING', 'NODE'} and node_id is not None:
return await self.execute_command('CLUSTER SETSLOT', slot_id, state, node_id)
elif state.upper() == 'STABLE':
return await self.execute_command('CLUSTER SETSLOT', slot_id, 'STABLE')
else:
raise RedisError('Invalid slot state: {0}'.format(state)) | async def cluster_setslot(self, node_id, slot_id, state) | Bind a hash slot to a specific node
Sends to specified node | 3.403307 | 3.558662 | 0.956345 |
"Execute the script, passing any required ``args``"
if client is None:
client = self.registered_client
args = tuple(keys) + tuple(args)
# make sure the Redis server knows about the script
if isinstance(client, BasePipeline):
# make sure this script is good to go on pipeline
client.scripts.add(self)
try:
return await client.evalsha(self.sha, len(keys), *args)
except NoScriptError:
# Maybe the client is pointed to a different server than the client
# that created this instance?
# Overwrite the sha just in case there was a discrepancy.
self.sha = await client.script_load(self.script)
return await client.evalsha(self.sha, len(keys), *args) | async def execute(self, keys=[], args=[], client=None) | Execute the script, passing any required ``args`` | 5.377218 | 4.947855 | 1.086778 |
all_k = []
# Fetch all HLL objects via GET and store them client side as strings
all_hll_objects = list()
for hll_key in sources:
all_hll_objects.append(await self.get(hll_key))
# Randomize a keyslot hash that should be used inside {} when doing SET
random_hash_slot = self._random_id()
# Special handling of the dest variable: if it already exists, it should be included in the HLL merge
# dest can exist anywhere in the cluster.
dest_data = await self.get(dest)
if dest_data:
all_hll_objects.append(dest_data)
# SET all stored HLL objects with SET {RandomHash}RandomKey hll_obj
for hll_object in all_hll_objects:
k = self._random_good_hashslot_key(random_hash_slot)
all_k.append(k)
await self.set(k, hll_object)
# Do regular PFMERGE operation and store value in random key in {RandomHash}
tmp_dest = self._random_good_hashslot_key(random_hash_slot)
await self.execute_command("PFMERGE", tmp_dest, *all_k)
# Do GET and SET so that the result will be stored in the destination object anywhere in the cluster
parsed_dest = await self.get(tmp_dest)
await self.set(dest, parsed_dest)
# Cleanup tmp variables
await self.delete(tmp_dest)
for k in all_k:
await self.delete(k)
return True | async def pfmerge(self, dest, *sources) | Merge N different HyperLogLogs into a single one.
Cluster impl:
Very special implementation is required to make pfmerge() work
But it works :]
It works by first fetching all HLL objects that should be merged and
moving them to one hash slot so that the PFMERGE operation can be performed
without any 'CROSSSLOT' error.
After the PFMERGE operation is done, the result is moved to the correct location
within the cluster and cleanup is done.
This operation is no longer atomic because of all the operations that have to be done. | 6.337436 | 5.5611 | 1.139601 |
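A small usage sketch of the cluster-safe `pfmerge` described above; `pfadd`/`pfcount` are the standard HyperLogLog commands, and the key names and startup node are illustrative.

```python
import asyncio
from aredis import StrictRedisCluster  # assumed import path

async def main():
    client = StrictRedisCluster(startup_nodes=[{'host': '127.0.0.1', 'port': 7000}])
    await client.pfadd('hll:a', 'x', 'y', 'z')
    await client.pfadd('hll:b', 'y', 'w')
    # Internally: GET each source, SET them under one {hash-slot}, PFMERGE, move result to dest
    await client.pfmerge('hll:dest', 'hll:a', 'hll:b')
    print(await client.pfcount('hll:dest'))  # approximately 4 distinct elements

asyncio.get_event_loop().run_until_complete(main())
```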
return ''.join(random.choice(chars) for _ in range(size)) | def _random_id(self, size=16, chars=string.ascii_uppercase + string.digits) | Generates a random id based on `size` and `chars` variable.
By default it will generate a 16 character long string based on
ascii uppercase letters and digits. | 3.344092 | 4.12996 | 0.809715 |
"Add a new master to Sentinel to be monitored"
return await self.execute_command('SENTINEL MONITOR', name, ip, port, quorum) | async def sentinel_monitor(self, name, ip, port, quorum) | Add a new master to Sentinel to be monitored | 7.291105 | 3.845282 | 1.896117 |
"Set Sentinel monitoring parameters for a given master"
return await self.execute_command('SENTINEL SET', name, option, value) | async def sentinel_set(self, name, option, value) | Set Sentinel monitoring parameters for a given master | 12.120943 | 5.055213 | 2.397712 |
"Return the difference of sets specified by ``keys``"
args = list_or_args(keys, args)
return await self.execute_command('SDIFF', *args) | async def sdiff(self, keys, *args) | Return the difference of sets specified by ``keys`` | 5.33138 | 4.069398 | 1.310115 |
args = list_or_args(keys, args)
return await self.execute_command('SDIFFSTORE', dest, *args) | async def sdiffstore(self, dest, keys, *args) | Store the difference of sets specified by ``keys`` into a new
set named ``dest``. Returns the number of keys in the new set. | 3.417189 | 3.825566 | 0.893251 |
"Return the intersection of sets specified by ``keys``"
args = list_or_args(keys, args)
return await self.execute_command('SINTER', *args) | async def sinter(self, keys, *args) | Return the intersection of sets specified by ``keys`` | 5.353121 | 3.847623 | 1.39128 |
args = list_or_args(keys, args)
return await self.execute_command('SINTERSTORE', dest, *args) | async def sinterstore(self, dest, keys, *args) | Store the intersection of sets specified by ``keys`` into a new
set named ``dest``. Returns the number of keys in the new set. | 3.125627 | 3.528399 | 0.885848 |
if count and isinstance(count, int):
return await self.execute_command('SPOP', name, count)
else:
return await self.execute_command('SPOP', name) | async def spop(self, name, count=None) | Remove and return a random member of set ``name``
``count`` should be an int and defaults to 1.
If ``count`` is supplied, pops a list of ``count`` random
members of set ``name`` | 2.496006 | 2.538472 | 0.983271 |
args = number and [number] or []
return await self.execute_command('SRANDMEMBER', name, *args) | async def srandmember(self, name, number=None) | If ``number`` is None, returns a random member of set ``name``.
If ``number`` is supplied, returns a list of ``number`` random
members of set ``name``. Note this is only available when running
Redis 2.6+. | 3.950637 | 7.974781 | 0.495391 |
"Return the union of sets specified by ``keys``"
args = list_or_args(keys, args)
return await self.execute_command('SUNION', *args) | async def sunion(self, keys, *args) | Return the union of sets specified by ``keys`` | 5.264853 | 3.98339 | 1.321702 |
args = list_or_args(keys, args)
return await self.execute_command('SUNIONSTORE', dest, *args) | async def sunionstore(self, dest, keys, *args) | Store the union of sets specified by ``keys`` into a new
set named ``dest``. Returns the number of keys in the new set. | 3.200636 | 3.562221 | 0.898495 |
res = await self.sdiff(keys, *args)
await self.delete(dest)
if not res:
return 0
return await self.sadd(dest, *res) | async def sdiffstore(self, dest, keys, *args) | Store the difference of sets specified by ``keys`` into a new
set named ``dest``. Returns the number of keys in the new set.
Overwrites dest key if it exists.
Cluster impl:
Use sdiff() --> Delete dest key --> store result in dest key | 3.89265 | 3.650627 | 1.066296 |
k = list_or_args(keys, args)
res = await self.smembers(k[0])
for arg in k[1:]:
res &= await self.smembers(arg)
return res | async def sinter(self, keys, *args) | Return the intersection of sets specified by ``keys``
Cluster impl:
Query all keys, compute the intersection and return the result | 3.977499 | 4.225621 | 0.941282 |
res = await self.sinter(keys, *args)
await self.delete(dest)
if res:
await self.sadd(dest, *res)
return len(res)
else:
return 0 | async def sinterstore(self, dest, keys, *args) | Store the intersection of sets specified by ``keys`` into a new
set named ``dest``. Returns the number of keys in the new set.
Cluster impl:
Use sinter() --> Delete dest key --> store result in dest key | 2.994122 | 2.975497 | 1.006259 |
res = await self.srem(src, value)
# Only add the element if existed in src set
if res == 1:
await self.sadd(dst, value)
return res | async def smove(self, src, dst, value) | Move ``value`` from set ``src`` to set ``dst`` atomically
Cluster impl:
SMEMBERS --> SREM --> SADD. Function is no longer atomic. | 5.060294 | 5.529703 | 0.915111 |
res = await self.sunion(keys, *args)
await self.delete(dest)
return await self.sadd(dest, *res) | async def sunionstore(self, dest, keys, *args) | Store the union of sets specified by ``keys`` into a new
set named ``dest``. Returns the number of keys in the new set.
Cluster impl:
Use sunion() --> Delete dest key --> store result in dest key
Operation is no longer atomic. | 5.020522 | 5.079725 | 0.988345 |
"Parses a response from the Redis server"
response = await connection.read_response()
if command_name in self.response_callbacks:
callback = self.response_callbacks[command_name]
return callback(response, **options)
return response | async def parse_response(self, connection, command_name, **options) | Parses a response from the Redis server | 3.684542 | 3.081664 | 1.195634 |
from aredis.pipeline import StrictPipeline
pipeline = StrictPipeline(self.connection_pool, self.response_callbacks,
transaction, shard_hint)
await pipeline.reset()
return pipeline | async def pipeline(self, transaction=True, shard_hint=None) | Return a new pipeline object that can queue multiple commands for
later execution. ``transaction`` indicates whether all commands
should be executed atomically. Apart from making a group of operations
atomic, pipelines are useful for reducing the back-and-forth overhead
between the client and server. | 5.752428 | 7.444129 | 0.772747 |
connection_pool = ClusterConnectionPool.from_url(url, db=db, **kwargs)
return cls(connection_pool=connection_pool, skip_full_coverage_check=skip_full_coverage_check) | def from_url(cls, url, db=None, skip_full_coverage_check=False, **kwargs) | Return a Redis client object configured from the given URL, which must
use either `the ``redis://`` scheme
<http://www.iana.org/assignments/uri-schemes/prov/redis>`_ for RESP
connections or the ``unix://`` scheme for Unix domain sockets.
For example::
redis://[:password]@localhost:6379/0
unix://[:password]@/path/to/socket.sock?db=0
There are several ways to specify a database number. The parse function
will return the first specified option:
1. A ``db`` querystring option, e.g. redis://localhost?db=0
2. If using the redis:// scheme, the path argument of the url, e.g.
redis://localhost/0
3. The ``db`` argument to this function.
If none of these options are specified, db=0 is used.
Any additional querystring arguments and keyword arguments will be
passed along to the ConnectionPool class's initializer. In the case
of conflicting arguments, querystring arguments always win. | 2.926799 | 2.658405 | 1.100961 |
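The URL form the docstring lists, applied to the cluster client's `from_url`; the `StrictRedisCluster` class name and address are assumptions consistent with the surrounding code.

```python
from aredis import StrictRedisCluster  # assumed import path

# Build a client from a redis:// URL; extra querystring arguments are passed
# through to the connection pool (querystring values win on conflicts).
client = StrictRedisCluster.from_url('redis://127.0.0.1:7000/0',
                                     skip_full_coverage_check=True)
```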
if command in self.result_callbacks:
return self.result_callbacks[command](res, **kwargs)
# Default way to handle result
return first_key(res) | def _merge_result(self, command, res, **kwargs) | `res` is a dict with the following structure Dict(NodeName, CommandResult) | 6.027117 | 5.766889 | 1.045125 |
if not self.connection_pool.initialized:
await self.connection_pool.initialize()
if not args:
raise RedisClusterException("Unable to determine command to use")
command = args[0]
node = self.determine_node(*args, **kwargs)
if node:
return await self.execute_command_on_nodes(node, *args, **kwargs)
# If set externally we must update it before calling any commands
if self.refresh_table_asap:
await self.connection_pool.nodes.initialize()
self.refresh_table_asap = False
redirect_addr = None
asking = False
try_random_node = False
slot = self._determine_slot(*args)
ttl = int(self.RedisClusterRequestTTL)
while ttl > 0:
ttl -= 1
if asking:
node = self.connection_pool.nodes.nodes[redirect_addr]
r = self.connection_pool.get_connection_by_node(node)
elif try_random_node:
r = self.connection_pool.get_random_connection()
try_random_node = False
else:
if self.refresh_table_asap:
# MOVED
node = self.connection_pool.get_master_node_by_slot(slot)
else:
node = self.connection_pool.get_node_by_slot(slot)
r = self.connection_pool.get_connection_by_node(node)
try:
if asking:
await r.send_command('ASKING')
await self.parse_response(r, "ASKING", **kwargs)
asking = False
await r.send_command(*args)
return await self.parse_response(r, command, **kwargs)
except (RedisClusterException, BusyLoadingError):
raise
except (CancelledError, ConnectionError, TimeoutError):
try_random_node = True
if ttl < self.RedisClusterRequestTTL / 2:
await asyncio.sleep(0.1)
except ClusterDownError as e:
self.connection_pool.disconnect()
self.connection_pool.reset()
self.refresh_table_asap = True
raise e
except MovedError as e:
# Reinitialize on every x number of MovedErrors.
# This counter will increase faster when the same client object
# is shared between multiple threads. To reduce the frequency you
# can set the variable 'reinitialize_steps' in the constructor.
self.refresh_table_asap = True
await self.connection_pool.nodes.increment_reinitialize_counter()
node = self.connection_pool.nodes.set_node(e.host, e.port, server_type='master')
self.connection_pool.nodes.slots[e.slot_id][0] = node
except TryAgainError as e:
if ttl < self.RedisClusterRequestTTL / 2:
await asyncio.sleep(0.05)
except AskError as e:
redirect_addr, asking = "{0}:{1}".format(e.host, e.port), True
finally:
self.connection_pool.release(r)
raise ClusterError('TTL exhausted.') | async def execute_command(self, *args, **kwargs) | Send a command to a node in the cluster | 3.700007 | 3.657757 | 1.011551 |
await self.connection_pool.initialize()
if shard_hint:
raise RedisClusterException("shard_hint is deprecated in cluster mode")
from aredis.pipeline import StrictClusterPipeline
return StrictClusterPipeline(
connection_pool=self.connection_pool,
startup_nodes=self.connection_pool.nodes.startup_nodes,
result_callbacks=self.result_callbacks,
response_callbacks=self.response_callbacks,
transaction=transaction,
watches=watches
) | async def pipeline(self, transaction=None, shard_hint=None, watches=None) | Cluster impl:
Pipelines do not work in cluster mode the same way they do in normal mode.
Create a clone of this object so that simulating pipelines will work correctly.
Each command will be called directly when used and when calling execute() will only return the result stack.
cluster transaction can only be run with commands in the same node, otherwise error will be raised. | 3.923698 | 3.858352 | 1.016936 |
"Called when the stream connects"
self._stream = connection._reader
self._buffer = SocketBuffer(self._stream, self._read_size)
if connection.decode_responses:
self.encoding = connection.encoding | def on_connect(self, connection) | Called when the stream connects | 9.14302 | 7.90191 | 1.157064 |
"Called when the stream disconnects"
if self._stream is not None:
self._stream = None
if self._buffer is not None:
self._buffer.close()
self._buffer = None
self.encoding = None | def on_disconnect(self) | Called when the stream disconnects | 4.354187 | 4.063745 | 1.071472 |
"See if there's data that can be read."
if not (self._reader and self._writer):
await self.connect()
return self._parser.can_read() | async def can_read(self) | See if there's data that can be read. | 8.448201 | 5.390481 | 1.567245 |
"Send an already packed command to the Redis server"
if not self._writer:
await self.connect()
try:
if isinstance(command, str):
command = [command]
self._writer.writelines(command)
except asyncio.futures.TimeoutError:
self.disconnect()
raise TimeoutError("Timeout writing to socket")
except Exception:
e = sys.exc_info()[1]
self.disconnect()
if len(e.args) == 1:
errno, errmsg = 'UNKNOWN', e.args[0]
else:
errno = e.args[0]
errmsg = e.args[1]
raise ConnectionError("Error %s while writing to socket. %s." %
(errno, errmsg))
except:
self.disconnect()
raise | async def send_packed_command(self, command) | Send an already packed command to the Redis server | 3.211811 | 2.975581 | 1.079389 |
"Disconnects from the Redis server"
self._parser.on_disconnect()
try:
self._writer.close()
except Exception:
pass
self._reader = None
self._writer = None | def disconnect(self) | Disconnects from the Redis server | 5.086578 | 5.050179 | 1.007207 |
"Pack a series of arguments into the Redis protocol"
output = []
# the client might have included 1 or more literal arguments in
# the command name, e.g., 'CONFIG GET'. The Redis server expects these
# arguments to be sent separately, so split the first argument
# manually. All of these arguments get wrapped in the Token class
# to prevent them from being encoded.
command = args[0]
if ' ' in command:
args = tuple([b(s) for s in command.split()]) + args[1:]
else:
args = (b(command),) + args[1:]
buff = SYM_EMPTY.join(
(SYM_STAR, b(str(len(args))), SYM_CRLF))
for arg in map(self.encode, args):
# to avoid large string mallocs, chunk the command into the
# output list if we're sending large values
if len(buff) > 6000 or len(arg) > 6000:
buff = SYM_EMPTY.join(
(buff, SYM_DOLLAR, b(str(len(arg))), SYM_CRLF))
output.append(buff)
output.append(b(arg))
buff = SYM_CRLF
else:
buff = SYM_EMPTY.join((buff, SYM_DOLLAR, b(str(len(arg))),
SYM_CRLF, b(arg), SYM_CRLF))
output.append(buff)
return output | def pack_command(self, *args) | Pack a series of arguments into the Redis protocol | 4.372415 | 4.180795 | 1.045833 |
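To make the chunking logic above concrete, here is a tiny self-contained encoder for the same RESP framing ("*<argc>" header, then "$<len>" plus payload per argument, each terminated by CRLF); it illustrates the wire format and is not the library's actual helper.

```python
def pack_command(*args: str) -> bytes:
    # RESP framing: "*<argc>\r\n" header, then "$<len>\r\n<payload>\r\n" per argument.
    out = [b'*%d\r\n' % len(args)]
    for arg in args:
        data = arg.encode('utf-8')
        out.append(b'$%d\r\n%s\r\n' % (len(data), data))
    return b''.join(out)

# 'CONFIG GET' would be split into two arguments, exactly as the code above does.
assert pack_command('CONFIG', 'GET', 'maxmemory') == \
    b'*3\r\n$6\r\nCONFIG\r\n$3\r\nGET\r\n$9\r\nmaxmemory\r\n'
```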
"Pack multiple commands into the Redis protocol"
output = []
pieces = []
buffer_length = 0
for cmd in commands:
for chunk in self.pack_command(*cmd):
pieces.append(chunk)
buffer_length += len(chunk)
if buffer_length > 6000:
output.append(SYM_EMPTY.join(pieces))
buffer_length = 0
pieces = []
if pieces:
output.append(SYM_EMPTY.join(pieces))
return output | def pack_commands(self, commands) | Pack multiple commands into the Redis protocol | 3.466216 | 3.09119 | 1.121321 |
if self.db:
warnings.warn('SELECT DB is not allowed in cluster mode')
self.db = ''
await super(ClusterConnection, self).on_connect()
if self.readonly:
await self.send_command('READONLY')
if nativestr(await self.read_response()) != 'OK':
raise ConnectionError('READONLY command failed') | async def on_connect(self) | Initialize the connection, authenticate and select a database and send READONLY if it is
set during object initialization. | 6.109934 | 4.813703 | 1.26928 |
if isinstance(schema, SchemaBuilder):
schema_uri = schema.schema_uri
schema = schema.to_schema()
if schema_uri is None:
del schema['$schema']
elif isinstance(schema, SchemaNode):
schema = schema.to_schema()
if '$schema' in schema:
self.schema_uri = self.schema_uri or schema['$schema']
schema = dict(schema)
del schema['$schema']
self._root_node.add_schema(schema) | def add_schema(self, schema) | Merge in a JSON schema. This can be a ``dict`` or another
``SchemaBuilder``
:param schema: a JSON Schema
.. note::
There is no schema validation. If you pass in a bad schema,
you might get back a bad schema. | 3.079126 | 3.067039 | 1.003941 |
schema = self._base_schema()
schema.update(self._root_node.to_schema())
return schema | def to_schema(self) | Generate a schema based on previous inputs.
:rtype: ``dict`` | 6.25813 | 7.39441 | 0.846332 |
return json.dumps(self.to_schema(), *args, **kwargs) | def to_json(self, *args, **kwargs) | Generate a schema and convert it directly to serialized JSON.
:rtype: ``str`` | 5.228323 | 4.859884 | 1.075812 |