code
string
signature
string
docstring
string
loss_without_docstring
float64
loss_with_docstring
float64
factor
float64
self.remove_partition(partition) broker_destination.add_partition(partition)
def move_partition(self, partition, broker_destination)
Move partition to destination broker and adjust replicas.
4.111706
3.822608
1.075628
return sum(1 for p in topic.partitions if p in self.partitions)
def count_partitions(self, topic)
Return count of partitions for given topic.
8.705226
8.228855
1.05789
# Only partitions not having replica in broker are valid # Get best fit partition, based on avoiding partition from same topic # and partition with least siblings in destination-broker. eligible_partitions = self.partitions - broker.partitions if eligible_partitions: pref_partition = min( eligible_partitions, key=lambda source_partition: sibling_distance[source_partition.topic], ) return pref_partition else: return None
def get_preferred_partition(self, broker, sibling_distance)
The preferred partition belongs to the topic with the minimum (possibly negative) distance between destination and source. :param broker: Destination broker :param sibling_distance: dict {topic: distance}; a negative distance means the destination broker has fewer partitions of a given topic than the source (self). :returns: A partition, or None if no eligible partitions are available
8.980957
7.796546
1.151915
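A minimal sketch of this selection rule, using plain sets and a namedtuple in place of the real Partition and Broker objects (all names here are illustrative, not taken from the module above):

```python
from collections import namedtuple

# Hypothetical stand-in for the real Partition object; only `topic` matters here.
Partition = namedtuple('Partition', ['name', 'topic'])

def preferred_partition(source_partitions, dest_partitions, sibling_distance):
    # Only partitions the destination does not already hold are eligible;
    # among those, pick the one whose topic has the smallest sibling distance.
    eligible = source_partitions - dest_partitions
    if not eligible:
        return None
    return min(eligible, key=lambda p: sibling_distance[p.topic])

p1 = Partition('t1-0', 't1')
p2 = Partition('t2-0', 't2')
# The destination already has one extra partition of t1 and one fewer of t2,
# so the t2 partition is preferred.
print(preferred_partition({p1, p2}, set(), {'t1': 1, 't2': -1}).name)  # t2-0
```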
# Possible partitions which can grant leadership to broker owned_partitions = list(filter( lambda p: self is not p.leader and len(p.replicas) > 1, self.partitions, )) for partition in owned_partitions: # Partition not available to grant leadership when: # 1. Broker is already under leadership change or # 2. Partition has already granted leadership before if partition.leader in skip_brokers or partition in skip_partitions: continue # Current broker is granted leadership temporarily prev_leader = partition.swap_leader(self) # Partition shouldn't be used again skip_partitions.append(partition) # Continue if prev-leader remains balanced # If leadership of prev_leader is to be revoked, it is considered balanced if prev_leader.count_preferred_replica() >= opt_count or \ prev_leader.revoked_leadership: # If current broker is leader-balanced return else # request next-partition if self.count_preferred_replica() >= opt_count: return else: continue else: # prev-leader (broker) became unbalanced # Append skip-brokers list so that it is not unbalanced further skip_brokers.append(prev_leader) # Try recursively arrange leadership for prev-leader prev_leader.request_leadership(opt_count, skip_brokers, skip_partitions) # If prev-leader couldn't be leader-balanced # revert its previous grant to current-broker if prev_leader.count_preferred_replica() < opt_count: # Partition can be used again for rebalancing skip_partitions.remove(partition) partition.swap_leader(prev_leader) # Try requesting leadership from next partition continue else: # If prev-leader successfully balanced skip_partitions.append(partition) # Removing from skip-broker list, since it can now again be # used for granting leadership for some other partition skip_brokers.remove(prev_leader) if self.count_preferred_replica() >= opt_count: # Return if current-broker is leader-balanced return else: continue
def request_leadership(self, opt_count, skip_brokers, skip_partitions)
Under-balanced broker requests leadership from current leaders, relying on their ability to recursively keep their own leadership counts optimal. :key_terms: leader-balanced: the broker is the preferred leader of at least opt-count partitions Algorithm: ========= Step-1: Broker will request leadership from the current leader of partitions it belongs to. Step-2: Current leaders will grant their leadership if one of these happens:- a) Either they remain leader-balanced. b) Or they recursively request leadership from other partitions until they become leader-balanced. If both of these conditions fail, they will revoke their leadership grant. Step-3: If the current broker becomes leader-balanced it will return; otherwise it moves ahead with the next partition.
5.180735
4.732226
1.094778
owned_partitions = list(filter( lambda p: self is p.leader and len(p.replicas) > 1, self.partitions, )) for partition in owned_partitions: # Skip using same partition with broker if already used before potential_new_leaders = list(filter( lambda f: f not in skip_brokers, partition.followers, )) for follower in potential_new_leaders: # Don't swap the broker-pair if already swapped before # in same partition if (partition, self, follower) in used_edges: continue partition.swap_leader(follower) used_edges.append((partition, follower, self)) # new-leader didn't unbalance if follower.count_preferred_replica() <= opt_count + 1: # over-broker balanced # If over-broker is the one which needs to be revoked from leadership # it's considered balanced only if its preferred replica count is 0 if (self.count_preferred_replica() <= opt_count + 1 and not self.revoked_leadership) or \ (self.count_preferred_replica() == 0 and self.revoked_leadership): return else: # Try next-partition, not another follower break else: # new-leader (broker) became over-balanced skip_brokers.append(follower) follower.donate_leadership(opt_count, skip_brokers, used_edges) # new-leader couldn't be balanced, revert if follower.count_preferred_replica() > opt_count + 1: used_edges.append((partition, follower, self)) partition.swap_leader(self) # Try next leader or partition continue else: # New-leader was successfully balanced used_edges.append((partition, follower, self)) # New-leader can be reused skip_brokers.remove(follower) # If broker is the one which needs to be revoked from leadership # it's considered balanced only if its preferred replica count is 0 if (self.count_preferred_replica() <= opt_count + 1 and not self.revoked_leadership) or \ (self.count_preferred_replica() == 0 and self.revoked_leadership): # Now broker is balanced return else: # Try next-partition, not another follower break
def donate_leadership(self, opt_count, skip_brokers, used_edges)
Over-loaded brokers try to donate their leadership to one of their followers, recursively, until they become balanced. :key_terms: used_edges: List of tuples/edges (partition, prev-leader, new-leader) that have already been used for donating leadership from prev-leader to new-leader in the same partition. skip_brokers: Avoids reusing the same broker recursively for balancing, to prevent loops. :Algorithm: * Over-loaded leader tries to donate its leadership to one of its followers * The follower will recursively try to balance itself if it becomes over-balanced * If it succeeds, the over-loaded leader moves to the next partition if required, and returns otherwise. * If it fails, the leader tries the next follower or the next partition, or returns if none is available.
3.60097
3.140564
1.1466
ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect(host) return ssh
def ssh_client(host)
Start an ssh client. :param host: the host :type host: str :returns: ssh client :rtype: Paramiko client
1.602154
2.213729
0.723735
if minutes: return FIND_MINUTES_COMMAND.format( data_path=data_path, minutes=minutes, ) if start_time: if end_time: return FIND_RANGE_COMMAND.format( data_path=data_path, start_time=start_time, end_time=end_time, ) else: return FIND_START_COMMAND.format( data_path=data_path, start_time=start_time, )
def find_files_cmd(data_path, minutes, start_time, end_time)
Find the log files depending on their modification time. :param data_path: the path to the Kafka data directory :type data_path: str :param minutes: check the files modified in the last N minutes :type minutes: int :param start_time: check the files modified after start_time :type start_time: str :param end_time: check the files modified before end_time :type end_time: str :returns: the find command :rtype: str
1.67188
1.78112
0.938668
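The FIND_*_COMMAND templates are defined elsewhere and are not part of this record; a plausible sketch of what such templates could look like with standard GNU find options (-mmin, -newermt) is shown below — the exact strings are assumptions, shown only to illustrate how find_files_cmd fills them in:

```python
# Assumed template strings; the real constants live alongside find_files_cmd.
FIND_MINUTES_COMMAND = 'find "{data_path}" -type f -name "*.log" -mmin -{minutes}'
FIND_START_COMMAND = 'find "{data_path}" -type f -name "*.log" -newermt "{start_time}"'
FIND_RANGE_COMMAND = (
    'find "{data_path}" -type f -name "*.log" '
    '-newermt "{start_time}" -not -newermt "{end_time}"'
)

print(FIND_MINUTES_COMMAND.format(data_path='/var/kafka/data', minutes=30))
# find "/var/kafka/data" -type f -name "*.log" -mmin -30
```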
files_str = ",".join(files) check_command = CHECK_COMMAND.format( ionice=IONICE, java_home=java_home, files=files_str, ) # One line per message can generate several MB/s of data # Use pre-filtering on the server side to reduce it command = "{check_command} | {reduce_output}".format( check_command=check_command, reduce_output=REDUCE_OUTPUT, ) return command
def check_corrupted_files_cmd(java_home, files)
Check the file corruption of the specified files. :param java_home: the JAVA_HOME :type java_home: string :param files: list of files to be checked :type files: list of string
6.350274
7.226776
0.878715
with closing(ssh_client(host)) as ssh: _, stdout, stderr = ssh.exec_command(command) lines = stdout.read().splitlines() report_stderr(host, stderr) return lines
def get_output_lines_from_command(host, command)
Execute a command on the specified host, returning a list of output lines. :param host: the host name :type host: str :param command: the command :type command: str
3.201391
4.623378
0.692435
command = find_files_cmd(data_path, minutes, start_time, end_time) pool = Pool(len(brokers)) result = pool.map( partial(get_output_lines_from_command, command=command), [host for broker, host in brokers]) return [(broker, host, files) for (broker, host), files in zip(brokers, result)]
def find_files(data_path, brokers, minutes, start_time, end_time)
Find all the Kafka log files on the broker that have been modified in the specified time range. start_time and end_time should be in the format specified by TIME_FORMAT_REGEX. :param data_path: the path to the log files on the broker :type data_path: str :param brokers: the brokers :type brokers: list of (broker_id, host) pairs :param minutes: check the files modified in the last N minutes :type minutes: int :param start_time: check the files modified after start_time :type start_time: str :param end_time: check the files modified before end_time :type end_time: str :returns: the files :rtype: list of (broker, host, file_path) tuples
3.792179
3.680805
1.030258
current_file = None for line in output.readlines(): file_name_search = FILE_PATH_REGEX.search(line) if file_name_search: current_file = file_name_search.group(1) continue if INVALID_MESSAGE_REGEX.match(line) or INVALID_BYTES_REGEX.match(line): print_line(host, current_file, line, "ERROR") elif VALID_MESSAGE_REGEX.match(line) or \ line.startswith('Starting offset:'): continue else: print_line(host, current_file, line, "UNEXPECTED OUTPUT")
def parse_output(host, output)
Parse the output of the dump tool and print warnings or error messages accordingly. :param host: the source :type host: str :param output: the output of the script on host :type output: list of str
3.641162
3.541606
1.02811
print( "{ltype} Host: {host}, File: {path}".format( ltype=line_type, host=host, path=path, ) ) print("{ltype} Output: {line}".format(ltype=line_type, line=line))
def print_line(host, path, line, line_type)
Print a dump tool line to stdout. :param host: the source host :type host: str :param path: the path to the file that is being analyzed :type path: str :param line: the line to be printed :type line: str :param line_type: a header for the line :type line_type: str
2.698772
2.960062
0.911728
with closing(ssh_client(host)) as ssh: for i, batch in enumerate(chunks(files, batch_size)): command = check_corrupted_files_cmd(java_home, batch) _, stdout, stderr = ssh.exec_command(command) report_stderr(host, stderr) print( " {host}: file {n_file} of {total}".format( host=host, n_file=(i * DEFAULT_BATCH_SIZE), total=len(files), ) ) parse_output(host, stdout)
def check_files_on_host(java_home, host, files, batch_size)
Check the files on the host. Files are grouped together in groups of batch_size files. The dump class will be executed on each batch, sequentially. :param java_home: the JAVA_HOME of the broker :type java_home: str :param host: the host where the tool will be executed :type host: str :param files: the list of files to be analyzed :type files: list of str :param batch_size: the size of each batch :type batch_size: int
3.868195
4.308304
0.897846
client = KafkaClient(cluster_config.broker_list) result = {} for topic, topic_data in six.iteritems(client.topic_partitions): for partition, p_data in six.iteritems(topic_data): topic_partition = topic + "-" + str(partition) result[topic_partition] = p_data.leader return result
def get_partition_leaders(cluster_config)
Return the current leaders of all partitions. Partitions are returned as a "topic-partition" string. :param cluster_config: the cluster :type cluster_config: kafka_utils.utils.config.ClusterConfig :returns: leaders for partitions :rtype: map of ("topic-partition", broker_id) pairs
3.027911
2.989364
1.012895
match = TP_FROM_FILE_REGEX.match(file_path) if not match: print("File path is not valid: " + file_path) sys.exit(1) return match.group(1)
def get_tp_from_file(file_path)
Return the name of the topic-partition given the path to the file. :param file_path: the path to the log file :type file_path: str :returns: the name of the topic-partition, ex. "topic_name-0" :rtype: str
3.058046
3.365763
0.908574
print("Filtering leaders") leader_of = get_partition_leaders(cluster_config) result = [] for broker, host, files in broker_files: filtered = [] for file_path in files: tp = get_tp_from_file(file_path) if tp not in leader_of or leader_of[tp] == broker: filtered.append(file_path) result.append((broker, host, filtered)) print( "Broker: {broker}, leader of {l_count} over {f_count} files".format( broker=broker, l_count=len(filtered), f_count=len(files), ) ) return result
def filter_leader_files(cluster_config, broker_files)
Given a list of broker files, filter out all the files that belong to replicas, keeping only those for which the broker is the partition leader. :param cluster_config: the cluster :type cluster_config: kafka_utils.utils.config.ClusterConfig :param broker_files: the broker files :type broker_files: list of (b_id, host, [file_path, file_path ...]) tuples :returns: the filtered list :rtype: list of (broker_id, host, [file_path, file_path ...]) tuples
3.398989
3.177786
1.069609
brokers = get_broker_list(cluster_config) broker_files = find_files(data_path, brokers, minutes, start_time, end_time) if not check_replicas: # remove replicas broker_files = filter_leader_files(cluster_config, broker_files) processes = [] print("Starting {n} parallel processes".format(n=len(broker_files))) try: for broker, host, files in broker_files: print( " Broker: {host}, {n} files to check".format( host=host, n=len(files)), ) p = Process( name="dump_process_" + host, target=check_files_on_host, args=(java_home, host, files, batch_size), ) p.start() processes.append(p) print("Processes running:") for process in processes: process.join() except KeyboardInterrupt: print("Terminating all processes") for process in processes: process.terminate() process.join() print("All processes terminated") sys.exit(1)
def check_cluster( cluster_config, data_path, java_home, check_replicas, batch_size, minutes, start_time, end_time, )
Check the integrity of the Kafka log files in a cluster. start_time and end_time should be in the format specified by TIME_FORMAT_REGEX. :param data_path: the path to the log folder on the broker :type data_path: str :param java_home: the JAVA_HOME of the broker :type java_home: str :param check_replicas: also checks the replica files :type check_replicas: bool :param batch_size: the size of the batch :type batch_size: int :param minutes: check the files modified in the last N minutes :type minutes: int :param start_time: check the files modified after start_time :type start_time: str :param end_time: check the files modified before end_time :type end_time: str
3.175158
3.160804
1.004541
if not args.minutes and not args.start_time: print("Error: missing --minutes or --start-time") return False if args.minutes and args.start_time: print("Error: --minutes shouldn't be specified if --start-time is used") return False if args.end_time and not args.start_time: print("Error: --end-time can't be used without --start-time") return False if args.minutes and args.minutes <= 0: print("Error: --minutes must be > 0") return False if args.start_time and not TIME_FORMAT_REGEX.match(args.start_time): print("Error: --start-time format is not valid") print("Example format: '2015-11-26 11:00:00'") return False if args.end_time and not TIME_FORMAT_REGEX.match(args.end_time): print("Error: --end-time format is not valid") print("Example format: '2015-11-26 11:00:00'") return False if args.batch_size <= 0: print("Error: --batch-size must be > 0") return False return True
def validate_args(args)
Basic option validation. Returns False if the options are not valid, True otherwise. :param args: the command line options :type args: map
1.793859
1.880198
0.95408
if self.args.topic in ct.topics: topic = ct.topics[self.args.topic] else: self.log.error( "Topic {topic} not found. Exiting." .format(topic=self.args.topic), ) sys.exit(1) if topic.replication_factor == self.args.replication_factor: self.log.info( "Topic {topic} already has replication factor {rf}. " "No action to perform." .format(topic=topic.id, rf=self.args.replication_factor), ) return if self.args.replication_factor > len(ct.brokers): self.log.error( "Replication factor {rf} is greater than the total number of " "brokers {brokers}. Exiting." .format( rf=self.args.replication_factor, brokers=len(ct.brokers) ), ) sys.exit(1) base_assignment = ct.assignment changes_per_partition = abs( self.args.replication_factor - topic.replication_factor ) if topic.replication_factor < self.args.replication_factor: self.log.info( "Increasing topic {topic} replication factor from {old_rf} to " "{new_rf}." .format( topic=topic.id, old_rf=topic.replication_factor, new_rf=self.args.replication_factor, ), ) for partition in topic.partitions: cluster_balancer.add_replica( partition.name, changes_per_partition, ) else: self.log.info( "Decreasing topic {topic} replication factor from {old_rf} to " "{new_rf}." .format( topic=topic.id, old_rf=topic.replication_factor, new_rf=self.args.replication_factor, ), ) topic_data = self.zk.get_topics(topic.id)[topic.id] for partition in topic.partitions: partition_data = topic_data['partitions'][str(partition.partition_id)] isr = partition_data['isr'] osr_broker_ids = [b.id for b in partition.replicas if b.id not in isr] if osr_broker_ids: self.log.info( "The out of sync replica(s) {osr_broker_ids} will be " "prioritized for removal." .format(osr_broker_ids=osr_broker_ids) ) cluster_balancer.remove_replica( partition.name, osr_broker_ids, changes_per_partition, ) # Each replica addition/removal for each partition counts for one # partition movement partition_movement_count = len(topic.partitions) * changes_per_partition reduced_assignment = self.get_reduced_assignment( base_assignment, ct, max_partition_movements=partition_movement_count, max_leader_only_changes=0, ) self.process_assignment(reduced_assignment, allow_rf_change=True)
def run_command(self, ct, cluster_balancer)
Get the executable proposed plan (if any) for display or execution.
2.276392
2.305141
0.987528
optimum, extra = compute_optimum(len(groups), total) over_loaded, under_loaded, optimal = [], [], [] for group in sorted(groups, key=key, reverse=True): n_elements = key(group) additional_element = 1 if extra else 0 if n_elements > optimum + additional_element: over_loaded.append(group) elif n_elements == optimum + additional_element: optimal.append(group) elif n_elements < optimum + additional_element: under_loaded.append(group) extra -= additional_element return over_loaded, under_loaded, optimal
def _smart_separate_groups(groups, key, total)
Given a list of group objects, and a function to extract the number of elements for each of them, return the list of groups that have an excessive number of elements (when compared to a uniform distribution), a list of groups with insufficient elements, and a list of groups that already have the optimal number of elements. :param list groups: list of group objects :param func key: function to retrieve the current number of elements from the group object :param int total: total number of elements to distribute Example: .. code-block:: python smart_separate_groups([11, 9, 10, 14], lambda g: g) => ([14], [10, 9], [11])
3.049587
3.106089
0.981809
optimum, extra = compute_optimum(len(groups), total) over_loaded, under_loaded, optimal = _smart_separate_groups(groups, key, total) # If every group is optimal return if not extra: return over_loaded, under_loaded # Some groups in optimal may have a number of elements that is optimum + 1. # In this case they should be considered over_loaded. potential_under_loaded = [ group for group in optimal if key(group) == optimum ] potential_over_loaded = [ group for group in optimal if key(group) > optimum ] revised_under_loaded = under_loaded + potential_under_loaded revised_over_loaded = over_loaded + potential_over_loaded return ( sorted(revised_over_loaded, key=key, reverse=True), sorted(revised_under_loaded, key=key), )
def separate_groups(groups, key, total)
Separate the groups into over-loaded and under-loaded groups. The revised over-loaded groups increase the choice space for future selection of the most suitable group based on search criteria. For example: Given the groups (a:4, b:4, c:3, d:2) where the number represents the number of elements for each group. smart_separate_groups sets 'a' and 'c' as optimal, 'b' as over-loaded and 'd' as under-loaded. separate_groups combines 'a' with 'b' as over-loaded, allowing the element to be transferred to 'd' from either of them. :param groups: list of groups :param key: function to retrieve element count from group :param total: total number of elements to distribute :returns: sorted lists of over-loaded (descending) and under-loaded (ascending) groups
3.954417
3.39817
1.16369
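A self-contained re-implementation of this split over plain integers, useful for checking the (a:4, b:4, c:3, d:2) example from the docstring; compute_optimum is assumed to be the usual quotient/remainder helper used by the real module:

```python
def compute_optimum(num_groups, total):
    # Optimal element count per group, plus how many groups may hold one extra element.
    return total // num_groups, total % num_groups

def smart_separate(counts, total):
    optimum, extra = compute_optimum(len(counts), total)
    over, under, optimal = [], [], []
    for n in sorted(counts, reverse=True):
        extra_slot = 1 if extra else 0
        if n > optimum + extra_slot:
            over.append(n)
        elif n == optimum + extra_slot:
            optimal.append(n)
        else:
            under.append(n)
        extra -= extra_slot
    return over, under, optimal

def separate(counts, total):
    optimum, extra = compute_optimum(len(counts), total)
    over, under, optimal = smart_separate(counts, total)
    if not extra:
        return over, under
    # Optimal groups sitting at optimum + 1 widen the over-loaded choice space;
    # those at exactly optimum widen the under-loaded one.
    over += [n for n in optimal if n > optimum]
    under += [n for n in optimal if n == optimum]
    return sorted(over, reverse=True), sorted(under)

# The (a:4, b:4, c:3, d:2) example: 'a' joins 'b' as over-loaded, 'c' joins 'd' as under-loaded.
print(separate([4, 4, 3, 2], 13))  # ([4, 4], [2, 3])
```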
return { broker for broker in self._brokers if not broker.inactive and not broker.decommissioned }
def active_brokers(self)
Return the set of brokers that are neither inactive nor decommissioned.
6.391566
3.142921
2.033638
if broker not in self._brokers: self._brokers.add(broker) else: self.log.warning( 'Broker {broker_id} already present in ' 'replication-group {rg_id}'.format( broker_id=broker.id, rg_id=self._id, ) )
def add_broker(self, broker)
Add broker to current broker-list.
3.103616
2.931509
1.058709
return sum(1 for b in partition.replicas if b in self.brokers)
def count_replica(self, partition)
Return count of replicas of given partition.
10.475706
10.157993
1.031277
broker_dest = self._elect_dest_broker(partition) if not broker_dest: raise NotEligibleGroupError( "No eligible brokers to accept partition {p}".format(p=partition), ) source_broker.move_partition(partition, broker_dest)
def acquire_partition(self, partition, source_broker)
Move a partition from a broker to any of the eligible brokers of the replication group. :param partition: Partition to move :param source_broker: Broker the partition currently belongs to
5.234447
5.199403
1.00674
# Select best-fit source and destination brokers for partition # Best-fit is based on partition-count and presence/absence of # Same topic-partition over brokers broker_source, broker_destination = self._select_broker_pair( rg_destination, victim_partition, ) # Actual-movement of victim-partition self.log.debug( 'Moving partition {p_name} from broker {broker_source} to ' 'replication-group:broker {rg_dest}:{dest_broker}'.format( p_name=victim_partition.name, broker_source=broker_source.id, dest_broker=broker_destination.id, rg_dest=rg_destination.id, ), ) broker_source.move_partition(victim_partition, broker_destination)
def move_partition(self, rg_destination, victim_partition)
Move partition (victim) from the current replication-group to the destination replication-group. Step 1: Evaluate source and destination broker Step 2: Move partition from source-broker to destination-broker
4.543132
4.390528
1.034758
broker_source = self._elect_source_broker(victim_partition) broker_destination = rg_destination._elect_dest_broker(victim_partition) return broker_source, broker_destination
def _select_broker_pair(self, rg_destination, victim_partition)
Select best-fit source and destination brokers based on partition count and presence of partition over the broker. * Get overloaded and underloaded brokers Best-fit Selection Criteria: Source broker: Select broker containing the victim-partition with maximum partitions. Destination broker: NOT containing the victim-partition with minimum partitions. If no such broker found, return first broker. This helps in ensuring:- * Topic-partitions are distributed across brokers. * Partition-count is balanced across replication-groups.
3.352439
3.971857
0.844048
broker_subset = broker_subset or self._brokers over_loaded_brokers = sorted( [ broker for broker in broker_subset if victim_partition in broker.partitions and not broker.inactive ], key=lambda b: len(b.partitions), reverse=True, ) if not over_loaded_brokers: return None broker_topic_partition_cnt = [ (broker, broker.count_partitions(victim_partition.topic)) for broker in over_loaded_brokers ] max_count_pair = max( broker_topic_partition_cnt, key=lambda ele: ele[1], ) return max_count_pair[0]
def _elect_source_broker(self, victim_partition, broker_subset=None)
Select first over loaded broker having victim_partition. Note: The broker with maximum siblings of victim-partitions (same topic) is selected to reduce topic-partition imbalance.
2.819059
2.531123
1.113758
under_loaded_brokers = sorted( [ broker for broker in self._brokers if (victim_partition not in broker.partitions and not broker.inactive and not broker.decommissioned) ], key=lambda b: len(b.partitions) ) if not under_loaded_brokers: return None broker_topic_partition_cnt = [ (broker, broker.count_partitions(victim_partition.topic)) for broker in under_loaded_brokers if victim_partition not in broker.partitions ] min_count_pair = min( broker_topic_partition_cnt, key=lambda ele: ele[1], ) return min_count_pair[0]
def _elect_dest_broker(self, victim_partition)
Select an under-loaded broker, preferring one that does not hold a partition of the same topic as the victim partition.
3.041257
2.647006
1.148942
total_partitions = sum(len(b.partitions) for b in self.brokers) blacklist = set(b for b in self.brokers if b.decommissioned) active_brokers = self.get_active_brokers() - blacklist if not active_brokers: raise EmptyReplicationGroupError("No active brokers in %s", self._id) # Separate brokers based on partition count over_loaded_brokers, under_loaded_brokers = separate_groups( active_brokers, lambda b: len(b.partitions), total_partitions, ) # Decommissioned brokers are considered overloaded until they have # no more partitions assigned. over_loaded_brokers += [b for b in blacklist if not b.empty()] if not over_loaded_brokers and not under_loaded_brokers: self.log.info( 'Brokers of replication-group: %s already balanced for ' 'partition-count.', self._id, ) return sibling_distance = self.generate_sibling_distance() while under_loaded_brokers and over_loaded_brokers: # Get best-fit source-broker, destination-broker and partition broker_source, broker_destination, victim_partition = \ self._get_target_brokers( over_loaded_brokers, under_loaded_brokers, sibling_distance, ) # No valid source or target brokers found if broker_source and broker_destination: # Move partition self.log.debug( 'Moving partition {p_name} from broker {broker_source} to ' 'broker {broker_destination}' .format( p_name=victim_partition.name, broker_source=broker_source.id, broker_destination=broker_destination.id, ), ) broker_source.move_partition(victim_partition, broker_destination) sibling_distance = self.update_sibling_distance( sibling_distance, broker_destination, victim_partition.topic, ) else: # Brokers are balanced or could not be balanced further break # Re-evaluate under and over-loaded brokers over_loaded_brokers, under_loaded_brokers = separate_groups( active_brokers, lambda b: len(b.partitions), total_partitions, ) # As before add brokers to decommission. over_loaded_brokers += [b for b in blacklist if not b.empty()]
def rebalance_brokers(self)
Rebalance partition-count across brokers.
3.035851
2.939831
1.032662
# Sort given brokers to ensure determinism over_loaded_brokers = sorted( over_loaded_brokers, key=lambda b: len(b.partitions), reverse=True, ) under_loaded_brokers = sorted( under_loaded_brokers, key=lambda b: len(b.partitions), ) # pick pair of brokers from source and destination brokers with # minimum same-partition-count # Set result in format: (source, dest, preferred-partition) target = (None, None, None) min_distance = sys.maxsize best_partition = None for source in over_loaded_brokers: for dest in under_loaded_brokers: # A decommissioned broker can have less partitions than # destination. We consider it a valid source because we want to # move all the partitions out from it. if (len(source.partitions) - len(dest.partitions) > 1 or source.decommissioned): best_partition = source.get_preferred_partition( dest, sibling_distance[dest][source], ) # If no eligible partition continue with next broker. if best_partition is None: continue distance = sibling_distance[dest][source][best_partition.topic] if distance < min_distance: min_distance = distance target = (source, dest, best_partition) else: # If relatively-unbalanced then all brokers in destination # will be thereafter, return from here. break return target
def _get_target_brokers(self, over_loaded_brokers, under_loaded_brokers, sibling_distance)
Pick best-suitable source-broker, destination-broker and partition to balance partition-count over brokers in given replication-group.
4.301038
4.081126
1.053885
sibling_distance = defaultdict(lambda: defaultdict(dict)) topics = {p.topic for p in self.partitions} for source in self.brokers: for dest in self.brokers: if source != dest: for topic in topics: sibling_distance[dest][source][topic] = \ dest.count_partitions(topic) - \ source.count_partitions(topic) return sibling_distance
def generate_sibling_distance(self)
Generate a dict containing the distance computed as the difference in number of partitions of each topic from under-loaded brokers to over-loaded brokers. A negative distance means that the destination broker has fewer partitions of a certain topic than the source broker. returns: dict {dest: {source: {topic: distance}}}
3.575705
2.462251
1.45221
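The same distance computation restated over plain dicts of per-broker topic counts, independent of the Broker objects; broker and topic names below are made up:

```python
from collections import defaultdict

def sibling_distance(topic_counts):
    # topic_counts: {broker: {topic: partition_count}}
    # returns: {dest: {source: {topic: dest_count - source_count}}}
    distance = defaultdict(lambda: defaultdict(dict))
    topics = {t for counts in topic_counts.values() for t in counts}
    for dest, dest_counts in topic_counts.items():
        for source, source_counts in topic_counts.items():
            if source == dest:
                continue
            for topic in topics:
                distance[dest][source][topic] = (
                    dest_counts.get(topic, 0) - source_counts.get(topic, 0)
                )
    return distance

counts = {'broker-1': {'t1': 3, 't2': 1}, 'broker-2': {'t1': 1, 't2': 2}}
print(sibling_distance(counts)['broker-2']['broker-1'])
# {'t1': -2, 't2': 1} (key order may vary)
```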
for source in six.iterkeys(sibling_distance[dest]): sibling_distance[dest][source][topic] = \ dest.count_partitions(topic) - \ source.count_partitions(topic) return sibling_distance
def update_sibling_distance(self, sibling_distance, dest, topic)
Update the sibling distance for topic and destination broker.
4.559264
4.269051
1.067981
# Evaluate possible source and destination-broker source_broker, dest_broker = self._get_eligible_broker_pair( under_loaded_rg, eligible_partition, ) if source_broker and dest_broker: self.log.debug( 'Moving partition {p_name} from broker {source_broker} to ' 'replication-group:broker {rg_dest}:{dest_broker}'.format( p_name=eligible_partition.name, source_broker=source_broker.id, dest_broker=dest_broker.id, rg_dest=under_loaded_rg.id, ), ) # Move partition if eligible brokers found source_broker.move_partition(eligible_partition, dest_broker)
def move_partition_replica(self, under_loaded_rg, eligible_partition)
Move partition to under-loaded replication-group if possible.
3.328475
3.095031
1.075425
under_brokers = list(filter( lambda b: eligible_partition not in b.partitions, under_loaded_rg.brokers, )) over_brokers = list(filter( lambda b: eligible_partition in b.partitions, self.brokers, )) # Get source and destination broker source_broker, dest_broker = None, None if over_brokers: source_broker = max( over_brokers, key=lambda broker: len(broker.partitions), ) if under_brokers: dest_broker = min( under_brokers, key=lambda broker: len(broker.partitions), ) return (source_broker, dest_broker)
def _get_eligible_broker_pair(self, under_loaded_rg, eligible_partition)
Evaluate and return source and destination broker-pair from over-loaded and under-loaded replication-group if possible, return None otherwise. Return source broker with maximum partitions and destination broker with minimum partitions based on following conditions:- 1) At-least one broker in under-loaded group which does not have victim-partition. This is because a broker cannot have duplicate replica. 2) At-least one broker in over-loaded group which has victim-partition
2.172642
2.246031
0.967325
if not isinstance(res, dict): raise ValueError('Value should be of dict type') result = set([]) for _, v in res.items(): for value in v: result.add(value) return list(result)
def merge_result(res)
Merge all items in `res` into a list. This command is used when sending a command to multiple nodes and the results from each node should be merged into a single list.
3.958463
4.463309
0.88689
if not isinstance(res, dict): raise ValueError('Value should be of dict type') if len(res.keys()) != 1: raise RedisClusterException("More than 1 result from command") return list(res.values())[0]
def first_key(res)
Returns the first result for the given command. If more than one result is returned, a `RedisClusterException` is raised.
5.537674
3.45085
1.604728
@wraps(func) async def inner(*args, **kwargs): for _ in range(0, 3): try: return await func(*args, **kwargs) except ClusterDownError: # Try again with the new cluster setup. All other errors # should be raised. pass # If it fails 3 times then raise exception back to caller raise ClusterDownError("CLUSTERDOWN error. Unable to rebuild the cluster") return inner
def clusterdown_wrapper(func)
Wrapper for CLUSTERDOWN error handling. If the cluster reports it is down it is assumed that: - connection_pool was disconnected - connection_pool was reset - refresh_table_asap set to True It will try 3 times to rerun the command and raises ClusterDownError if it continues to fail.
5.347635
4.722602
1.132349
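A runnable sketch of how the wrapper behaves, with a stand-in ClusterDownError and a command that succeeds on its third attempt; both are hypothetical, only the retry pattern matches the record above:

```python
import asyncio
from functools import wraps

class ClusterDownError(Exception):
    """Stand-in for the real exception so this sketch is self-contained."""

def clusterdown_wrapper(func):
    @wraps(func)
    async def inner(*args, **kwargs):
        for _ in range(3):
            try:
                return await func(*args, **kwargs)
            except ClusterDownError:
                pass  # cluster is rebuilding; retry against the refreshed layout
        raise ClusterDownError("CLUSTERDOWN error. Unable to rebuild the cluster")
    return inner

calls = {'n': 0}

@clusterdown_wrapper
async def flaky_command():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ClusterDownError('CLUSTERDOWN')
    return 'OK'

print(asyncio.run(flaky_command()))  # 'OK' on the third attempt
```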
"Parse the results of Redis's DEBUG OBJECT command into a Python dict" # The 'type' of the object is the first item in the response, but isn't # prefixed with a name response = nativestr(response) response = 'type:' + response response = dict([kv.split(':') for kv in response.split()]) # parse some expected int values from the string response # note: this cmd isn't spec'd so these may not appear in all redis versions int_fields = ('refcount', 'serializedlength', 'lru', 'lru_seconds_idle') for field in int_fields: if field in response: response[field] = int(response[field]) return response
def parse_debug_object(response)
Parse the results of Redis's DEBUG OBJECT command into a Python dict
8.036768
7.003501
1.147536
"Parse the result of Redis's INFO command into a Python dict" info = {} response = nativestr(response) def get_value(value): if ',' not in value or '=' not in value: try: if '.' in value: return float(value) else: return int(value) except ValueError: return value else: sub_dict = {} for item in value.split(','): k, v = item.rsplit('=', 1) sub_dict[k] = get_value(v) return sub_dict for line in response.splitlines(): if line and not line.startswith('#'): if line.find(':') != -1: key, value = line.split(':', 1) info[key] = get_value(value) else: # if the line isn't splittable, append it to the "__raw__" key info.setdefault('__raw__', []).append(line) return info
def parse_info(response)
Parse the result of Redis's INFO command into a Python dict
2.584528
2.362947
1.093774
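A toy illustration of the parsing rules above (not the library function itself): plain key:value lines, integer coercion, and comma-separated k=v values becoming nested dicts. Unlike the real parser, nested values are left as strings in this sketch:

```python
def parse_info_line(line):
    key, value = line.split(':', 1)
    if '=' in value and ',' in value:
        # e.g. db0:keys=10,expires=0,avg_ttl=0 becomes a nested dict
        return key, dict(item.rsplit('=', 1) for item in value.split(','))
    try:
        return key, int(value)
    except ValueError:
        return key, value

sample = [
    'redis_version:6.2.6',
    'connected_clients:4',
    'db0:keys=10,expires=0,avg_ttl=0',
]
print(dict(parse_info_line(line) for line in sample))
# {'redis_version': '6.2.6', 'connected_clients': 4,
#  'db0': {'keys': '10', 'expires': '0', 'avg_ttl': '0'}}
```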
if host is None and port is None: return await self.execute_command('SLAVEOF', b('NO'), b('ONE')) return await self.execute_command('SLAVEOF', host, port)
async def slaveof(self, host=None, port=None)
Set the server to be a replicated slave of the instance identified by the ``host`` and ``port``. If called without arguments, the instance is promoted to a master instead.
3.484951
3.536935
0.985302
args = ['SLOWLOG GET'] if num is not None: args.append(num) return await self.execute_command(*args)
async def slowlog_get(self, num=None)
Get the entries from the slowlog. If ``num`` is specified, get the most recent ``num`` items.
3.404099
3.057128
1.113496
return cache_class(self, app=name, identity_generator_class=identity_generator_class, compressor_class=compressor_class, serializer_class=serializer_class, *args, **kwargs)
def cache(self, name, cache_class=Cache, identity_generator_class=IdentityGenerator, compressor_class=Compressor, serializer_class=Serializer, *args, **kwargs)
Return a cache object using default identity generator, serializer and compressor. ``name`` is used to identify the series of your cache ``cache_class`` Cache is for normal use and HerdCache is used in case of Thundering Herd Problem ``identity_generator_class`` is the class used to generate the real unique key in cache, can be overwritten to meet your special needs. It should provide `generate` API ``compressor_class`` is the class used to compress cache in redis, can be overwritten with API `compress` and `decompress` retained. ``serializer_class`` is the class used to serialize content before compress, can be overwritten with API `serialize` and `deserialize` retained.
2.100146
2.585481
0.812284
if lock_class is None: if self._use_lua_lock is None: # the first time .lock() is called, determine if we can use # Lua by attempting to register the necessary scripts try: LuaLock.register_scripts(self) self._use_lua_lock = True except ResponseError: self._use_lua_lock = False lock_class = self._use_lua_lock and LuaLock or Lock return lock_class(self, name, timeout=timeout, sleep=sleep, blocking_timeout=blocking_timeout, thread_local=thread_local)
def lock(self, name, timeout=None, sleep=0.1, blocking_timeout=None, lock_class=None, thread_local=True)
Return a new Lock object using key ``name`` that mimics the behavior of threading.Lock. If specified, ``timeout`` indicates a maximum life for the lock. By default, it will remain locked until release() is called. ``sleep`` indicates the amount of time to sleep per loop iteration when the lock is in blocking mode and another client is currently holding the lock. ``blocking_timeout`` indicates the maximum amount of time in seconds to spend trying to acquire the lock. A value of ``None`` indicates continue trying forever. ``blocking_timeout`` can be specified as a float or integer, both representing the number of seconds to wait. ``lock_class`` forces the specified lock implementation. ``thread_local`` indicates whether the lock token is placed in thread-local storage. By default, the token is placed in thread local storage so that a thread only sees its token, not a token set by another thread. Consider the following timeline: time: 0, thread-1 acquires `my-lock`, with a timeout of 5 seconds. thread-1 sets the token to "abc" time: 1, thread-2 blocks trying to acquire `my-lock` using the Lock instance. time: 5, thread-1 has not yet completed. redis expires the lock key. time: 5, thread-2 acquired `my-lock` now that it's available. thread-2 sets the token to "xyz" time: 6, thread-1 finishes its work and calls release(). if the token is *not* stored in thread local storage, then thread-1 would see the token value as "xyz" and would be able to successfully release the thread-2's lock. In some use cases it's necessary to disable thread local storage. For example, if you have code where one thread acquires a lock and passes that lock instance to a worker thread to release later. If thread local storage isn't disabled in this case, the worker thread won't see the token set by the thread that acquired the lock. Our assumption is that these cases aren't common and as such default to using thread local storage.
2.880691
2.949513
0.976667
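A hypothetical usage sketch of the lock, assuming an async client instance named client built elsewhere; the key name and timeouts are made up, and whether acquire()/release() are awaitable depends on the lock class in use:

```python
async def update_shared_resource(client):
    # Give up after 5s of waiting; the lock itself expires after 10s if never released.
    lock = client.lock('resource-lock', timeout=10, blocking_timeout=5)
    if await lock.acquire():
        try:
            ...  # critical section: at most one holder at a time
        finally:
            await lock.release()
```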
"Increment the value of ``key`` in hash ``name`` by ``amount``" return await self.execute_command('HINCRBY', name, key, amount)
async def hincrby(self, name, key, amount=1)
Increment the value of ``key`` in hash ``name`` by ``amount``
3.45937
3.371952
1.025925
return await self.execute_command('HINCRBYFLOAT', name, key, amount)
async def hincrbyfloat(self, name, key, amount=1.0)
Increment the value of ``key`` in hash ``name`` by floating ``amount``
3.028424
3.049209
0.993183
return await self.execute_command('HSET', name, key, value)
async def hset(self, name, key, value)
Set ``key`` to ``value`` within hash ``name`` Returns 1 if HSET created a new field, otherwise 0
3.799948
3.678545
1.033003
return await self.execute_command('HSETNX', name, key, value)
async def hsetnx(self, name, key, value)
Set ``key`` to ``value`` within hash ``name`` if ``key`` does not exist. Returns 1 if HSETNX created a field, otherwise 0.
3.319169
3.557925
0.932895
if not mapping: raise DataError("'hmset' with 'mapping' of length 0") items = [] for pair in iteritems(mapping): items.extend(pair) return await self.execute_command('HMSET', name, *items)
async def hmset(self, name, mapping)
Set key to value within hash ``name`` for each corresponding key and value from the ``mapping`` dict.
4.103167
3.811265
1.076589
pieces = [name, cursor] if match is not None: pieces.extend([b('MATCH'), match]) if count is not None: pieces.extend([b('COUNT'), count]) return await self.execute_command('HSCAN', *pieces)
async def hscan(self, name, cursor=0, match=None, count=None)
Incrementally return key/value slices in a hash. Also return a cursor indicating the scan position. ``match`` allows for filtering the keys by pattern ``count`` allows for hinting at the minimum number of returns
2.235106
2.927555
0.763472
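A hypothetical helper that drains HSCAN until the cursor comes back as 0, assuming client is an async client exposing the hscan() shown above:

```python
async def hscan_all(client, name, match=None, count=100):
    cursor, result = 0, {}
    while True:
        cursor, chunk = await client.hscan(name, cursor=cursor, match=match, count=count)
        result.update(chunk)
        if int(cursor) == 0:  # a zero cursor means the scan is complete
            return result
```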
shard_hint = kwargs.pop('shard_hint', None) value_from_callable = kwargs.pop('value_from_callable', False) watch_delay = kwargs.pop('watch_delay', None) async with await self.pipeline(True, shard_hint) as pipe: while True: try: if watches: await pipe.watch(*watches) func_value = await func(pipe) exec_value = await pipe.execute() return func_value if value_from_callable else exec_value except WatchError: if watch_delay is not None and watch_delay > 0: await asyncio.sleep( watch_delay, loop=self.connection_pool.loop ) continue
async def transaction(self, func, *watches, **kwargs)
Convenience method for executing the callable `func` as a transaction while watching all keys specified in `watches`. The 'func' callable should expect a single argument which is a Pipeline object.
2.824443
2.599499
1.086534
if isinstance(value, bytes): return value elif isinstance(value, int): value = b(str(value)) elif isinstance(value, float): value = b(repr(value)) elif not isinstance(value, str): value = str(value) if isinstance(value, str): value = value.encode() return value
def encode(self, value)
Return a bytestring representation of the value
2.205032
2.118421
1.040885
nodes_cache = {} tmp_slots = {} all_slots_covered = False disagreements = [] startup_nodes_reachable = False nodes = self.orig_startup_nodes # With this option the client will attempt to connect to any of the previous set of nodes instead of the original set of nodes if self.nodemanager_follow_cluster: nodes = self.startup_nodes for node in nodes: try: r = self.get_redis_link(host=node['host'], port=node['port']) cluster_slots = await r.cluster_slots() startup_nodes_reachable = True except ConnectionError: continue except Exception: raise RedisClusterException('ERROR sending "cluster slots" command to redis server: {0}'.format(node)) all_slots_covered = True # If there's only one server in the cluster, its ``host`` is '' # Fix it to the host in startup_nodes if len(cluster_slots) == 1 and len(self.startup_nodes) == 1: single_node_slots = cluster_slots.get((0, self.RedisClusterHashSlots - 1))[0] if len(single_node_slots['host']) == 0: single_node_slots['host'] = self.startup_nodes[0]['host'] single_node_slots['server_type'] = 'master' # No need to decode response because StrictRedis should handle that for us... for min_slot, max_slot in cluster_slots: nodes = cluster_slots.get((min_slot, max_slot)) master_node, slave_nodes = nodes[0], nodes[1:] if master_node['host'] == '': master_node['host'] = node['host'] self.set_node_name(master_node) nodes_cache[master_node['name']] = master_node for i in range(min_slot, max_slot + 1): if i not in tmp_slots: tmp_slots[i] = [master_node] for slave_node in slave_nodes: self.set_node_name(slave_node) nodes_cache[slave_node['name']] = slave_node tmp_slots[i].append(slave_node) else: # Validate that 2 nodes want to use the same slot cache setup if tmp_slots[i][0]['name'] != node['name']: disagreements.append('{0} vs {1} on slot: {2}'.format( tmp_slots[i][0]['name'], node['name'], i), ) if len(disagreements) > 5: raise RedisClusterException('startup_nodes could not agree on a valid slots cache. {0}' .format(', '.join(disagreements))) self.populate_startup_nodes() self.refresh_table_asap = False if self._skip_full_coverage_check: need_full_slots_coverage = False else: need_full_slots_coverage = await self.cluster_require_full_coverage(nodes_cache) # Validate if all slots are covered or if we should try next startup node for i in range(0, self.RedisClusterHashSlots): if i not in tmp_slots and need_full_slots_coverage: all_slots_covered = False if all_slots_covered: # All slots are covered and application can continue to execute break if not startup_nodes_reachable: raise RedisClusterException('Redis Cluster cannot be connected. ' 'Please provide at least one reachable node.') if not all_slots_covered: raise RedisClusterException('Not all slots are covered after query all startup_nodes. ' '{0} of {1} covered...'.format(len(tmp_slots), self.RedisClusterHashSlots)) # Set the tmp variables to the real variables self.slots = tmp_slots self.nodes = nodes_cache self.reinitialize_counter = 0
async def initialize(self)
Init the slots cache by asking all startup nodes what the current cluster configuration is. TODO: Currently the last node will have the last say about how the configuration is set up. Maybe it should stop trying after it has correctly covered all slots, or when one node is reached and it could execute the CLUSTER SLOTS command.
3.859036
3.719038
1.037644
nodes = nodes_cache or self.nodes async def node_require_full_coverage(node): r_node = self.get_redis_link(host=node['host'], port=node['port']) node_config = await r_node.config_get('cluster-require-full-coverage') return 'yes' in node_config.values() # at least one node should have cluster-require-full-coverage yes for node in nodes.values(): if await node_require_full_coverage(node): return True return False
async def cluster_require_full_coverage(self, nodes_cache)
If the 'cluster-require-full-coverage no' config exists on the redis servers, the cluster will still be able to respond even when not all slots are covered.
3.513776
3.156761
1.113095
node_name = "{0}:{1}".format(host, port) node = { 'host': host, 'port': port, 'name': node_name, 'server_type': server_type } self.nodes[node_name] = node return node
def set_node(self, host, port, server_type=None)
Update data for a node.
2.153079
2.138736
1.006706
for item in self.startup_nodes: self.set_node_name(item) for n in self.nodes.values(): if n not in self.startup_nodes: self.startup_nodes.append(n) # freeze it so we can set() it uniq = {frozenset(node.items()) for node in self.startup_nodes} # then thaw it back out into a list of dicts self.startup_nodes = [dict(node) for node in uniq]
def populate_startup_nodes(self)
Normalize the names of all startup nodes, merge in any newly discovered nodes, and filter out duplicates.
4.604838
4.208447
1.094189
url = urlparse(url) qs = url.query url_options = {} for name, value in iter(parse_qs(qs).items()): if value and len(value) > 0: parser = URL_QUERY_ARGUMENT_PARSERS.get(name) if parser: try: url_options[name] = parser(value[0]) except (TypeError, ValueError): warnings.warn(UserWarning( "Invalid value for `%s` in connection URL." % name )) else: url_options[name] = value[0] if decode_components: password = unquote(url.password) if url.password else None path = unquote(url.path) if url.path else None hostname = unquote(url.hostname) if url.hostname else None else: password = url.password path = url.path hostname = url.hostname # We only support redis:// and unix:// schemes. if url.scheme == 'unix': url_options.update({ 'password': password, 'path': path, 'connection_class': UnixDomainSocketConnection, }) else: url_options.update({ 'host': hostname, 'port': int(url.port or 6379), 'password': password, }) # If there's a path argument, use it as the db argument if a # querystring value wasn't specified if 'db' not in url_options and path: try: url_options['db'] = int(path.replace('/', '')) except (AttributeError, ValueError): pass if url.scheme == 'rediss': keyfile = url_options.pop('ssl_keyfile', None) certfile = url_options.pop('ssl_certfile', None) cert_reqs = url_options.pop('ssl_cert_reqs', None) ca_certs = url_options.pop('ssl_ca_certs', None) url_options['ssl_context'] = RedisSSLContext(keyfile, certfile, cert_reqs, ca_certs).get() # last shot at the db value url_options['db'] = int(url_options.get('db', db or 0)) # update the arguments from the URL values kwargs.update(url_options) return cls(**kwargs)
def from_url(cls, url, db=None, decode_components=False, **kwargs)
Return a connection pool configured from the given URL. For example:: redis://[:password]@localhost:6379/0 rediss://[:password]@localhost:6379/0 unix://[:password]@/path/to/socket.sock?db=0 Three URL schemes are supported: - ```redis://`` <http://www.iana.org/assignments/uri-schemes/prov/redis>`_ creates a normal TCP socket connection - ```rediss://`` <http://www.iana.org/assignments/uri-schemes/prov/rediss>`_ creates an SSL-wrapped TCP socket connection - ``unix://`` creates a Unix Domain Socket connection There are several ways to specify a database number. The parse function will return the first specified option: 1. A ``db`` querystring option, e.g. redis://localhost?db=0 2. If using the redis:// scheme, the path argument of the url, e.g. redis://localhost/0 3. The ``db`` argument to this function. If none of these options are specified, db=0 is used. The ``decode_components`` argument allows this function to work with percent-encoded URLs. If this argument is set to ``True`` all ``%xx`` escapes will be replaced by their single-character equivalents after the URL has been parsed. This only applies to the ``hostname``, ``path``, and ``password`` components. Any additional querystring arguments and keyword arguments will be passed along to the ConnectionPool class's initializer. The querystring arguments ``connect_timeout`` and ``stream_timeout``, if supplied, are parsed as float values. The argument ``retry_on_timeout`` is parsed to a boolean value; True/False and Yes/No are accepted to indicate state. Invalid types cause a ``UserWarning`` to be raised. In the case of conflicting arguments, querystring arguments always win.
2.552052
2.319511
1.100254
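Hypothetical usage of from_url; the import below is an assumption (these snippets resemble the aredis client — swap in whatever package actually provides this ConnectionPool):

```python
from aredis import ConnectionPool  # assumed package; adjust to your client library

# Password, host, port and db taken straight from the URL:
pool = ConnectionPool.from_url('redis://:secret@localhost:6379/0')

# The same database selected three different ways; on conflict the querystring wins:
ConnectionPool.from_url('redis://localhost?db=2')
ConnectionPool.from_url('redis://localhost/2')
ConnectionPool.from_url('redis://localhost', db=2)

# unix:// selects the unix-domain-socket connection class instead of TCP:
ConnectionPool.from_url('unix:///tmp/redis.sock?db=0')
```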
"Get a connection from the pool" self._checkpid() try: connection = self._available_connections.pop() except IndexError: connection = self.make_connection() self._in_use_connections.add(connection) return connection
def get_connection(self, *args, **kwargs)
Get a connection from the pool
4.282482
4.009662
1.068041
"Releases the connection back to the pool" self._checkpid() if connection.pid != self.pid: return self._in_use_connections.remove(connection) # discard connection with unread response if connection.awaiting_response: connection.disconnect() self._created_connections -= 1 else: self._available_connections.append(connection)
def release(self, connection)
Releases the connection back to the pool
6.582651
6.526232
1.008645
self.pid = os.getpid() self._created_connections = 0 self._created_connections_per_node = {} # Dict(Node, Int) self._available_connections = {} # Dict(Node, List) self._in_use_connections = {} # Dict(Node, Set) self._check_lock = threading.Lock() self.initialized = False
def reset(self)
Resets the connection pool back to a clean state.
4.762676
3.937428
1.20959
if self.count_all_num_connections(node) >= self.max_connections: if self.max_connections_per_node: raise RedisClusterException("Too many connection ({0}) for node: {1}" .format(self.count_all_num_connections(node), node['name'])) raise RedisClusterException("Too many connections") self._created_connections_per_node.setdefault(node['name'], 0) self._created_connections_per_node[node['name']] += 1 connection = self.connection_class(host=node["host"], port=node["port"], **self.connection_kwargs) # Must store node in the connection to make it eaiser to track connection.node = node return connection
def make_connection(self, node)
Create a new connection
3.694745
3.703476
0.997642
self._checkpid() if connection.pid != self.pid: return # Remove the current connection from _in_use_connection and add it back to the available pool # There is cases where the connection is to be removed but it will not exist and there # must be a safe way to remove i_c = self._in_use_connections.get(connection.node["name"], set()) if connection in i_c: i_c.remove(connection) else: pass # discard connection with unread response if connection.awaiting_response: connection.disconnect() # reduce node connection count in case of too many connection error raised if self.max_connections_per_node and self._created_connections_per_node.get(connection.node['name']): self._created_connections_per_node[connection.node['name']] -= 1 else: self._available_connections.setdefault(connection.node["name"], []).append(connection)
def release(self, connection)
Releases the connection back to the pool
6.340268
6.049544
1.048057
all_conns = chain( self._available_connections.values(), self._in_use_connections.values(), ) for node_connections in all_conns: for connection in node_connections: connection.disconnect()
def disconnect(self)
Disconnect all connections in the pool.
4.557483
4.37768
1.041073
if self._available_connections: node_name = random.choice(list(self._available_connections.keys())) conn_list = self._available_connections[node_name] # check it in case of empty connection list if conn_list: return conn_list.pop() for node in self.nodes.random_startup_node_iter(): connection = self.get_connection_by_node(node) if connection: return connection raise Exception("Cant reach a single startup node.")
def get_random_connection(self)
Return a connection to a random Redis server, reusing an available pooled connection when possible and opening a new one otherwise.
5.122038
4.826889
1.061147
self._checkpid() try: return self.get_connection_by_node(self.get_node_by_slot(slot)) except KeyError: return self.get_random_connection()
def get_connection_by_slot(self, slot)
Determine what server a specific slot belongs to and return a redis object that is connected
4.81204
4.735545
1.016153
self._checkpid() self.nodes.set_node_name(node) try: # Try to get connection from existing pool connection = self._available_connections.get(node["name"], []).pop() except IndexError: connection = self.make_connection(node) self._in_use_connections.setdefault(node["name"], set()).add(connection) return connection
def get_connection_by_node(self, node)
Get a connection by node.
5.534763
5.31499
1.04135
"Re-subscribe to any channels and patterns previously subscribed to" # NOTE: for python3, we can't pass bytestrings as keyword arguments # so we need to decode channel/pattern names back to str strings # before passing them to [p]subscribe. if self.channels: channels = {} for k, v in iteritems(self.channels): if not self.decode_responses: k = k.decode(self.encoding) channels[k] = v await self.subscribe(**channels) if self.patterns: patterns = {} for k, v in iteritems(self.patterns): if not self.decode_responses: k = k.decode(self.encoding) patterns[k] = v await self.psubscribe(**patterns)
async def on_connect(self, connection)
Re-subscribe to any channels and patterns previously subscribed to
3.965583
3.413245
1.161822
if self.decode_responses and isinstance(value, bytes): value = value.decode(self.encoding) elif not self.decode_responses and isinstance(value, str): value = value.encode(self.encoding) return value
def encode(self, value)
Encode the value so that it's identical to what we'll read off the connection
2.590276
2.499807
1.036191
"Parse the response from a publish/subscribe command" connection = self.connection if connection is None: raise RuntimeError( 'pubsub connection not set: ' 'did you forget to call subscribe() or psubscribe()?') coro = self._execute(connection, connection.read_response) if not block and timeout > 0: try: return await asyncio.wait_for(coro, timeout) except Exception: return None return await coro
async def parse_response(self, block=True, timeout=0)
Parse the response from a publish/subscribe command
4.892583
4.011367
1.21968
if args: args = list_or_args(args[0], args[1:]) new_patterns = {} new_patterns.update(dict.fromkeys(map(self.encode, args))) for pattern, handler in iteritems(kwargs): new_patterns[self.encode(pattern)] = handler ret_val = await self.execute_command('PSUBSCRIBE', *iterkeys(new_patterns)) # update the patterns dict AFTER we send the command. we don't want to # subscribe twice to these patterns, once for the command and again # for the reconnection. self.patterns.update(new_patterns) return ret_val
async def psubscribe(self, *args, **kwargs)
Subscribe to channel patterns. Patterns supplied as keyword arguments expect a pattern name as the key and a callable as the value. A pattern's callable will be invoked automatically when a message is received on that pattern rather than producing a message via ``listen()``.
4.833466
4.514008
1.07077
if args:
    args = list_or_args(args[0], args[1:])
return await self.execute_command('PUNSUBSCRIBE', *args)
async def punsubscribe(self, *args)
Unsubscribe from the supplied patterns. If empty, unsubscribe from all patterns.
4.262028
4.509888
0.945041
if args:
    args = list_or_args(args[0], args[1:])
new_channels = {}
new_channels.update(dict.fromkeys(map(self.encode, args)))
for channel, handler in iteritems(kwargs):
    new_channels[self.encode(channel)] = handler
ret_val = await self.execute_command('SUBSCRIBE', *iterkeys(new_channels))
# update the channels dict AFTER we send the command. we don't want to
# subscribe twice to these channels, once for the command and again
# for the reconnection.
self.channels.update(new_channels)
return ret_val
async def subscribe(self, *args, **kwargs)
Subscribe to channels. Channels supplied as keyword arguments expect a channel name as the key and a callable as the value. A channel's callable will be invoked automatically when a message is received on that channel rather than producing a message via ``listen()`` or ``get_message()``.
5.080337
4.619475
1.099765
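Because subscribe() and psubscribe() above accept handler callables as keyword arguments, a short hedged usage sketch may help. It assumes `client` is a cluster client exposing a pubsub() factory that returns the pub/sub object documented in these records; how the client itself is constructed is not shown here.

def on_news(message):
    # `message` is the dict built by handle_message (documented below):
    # {'type': 'message', 'pattern': None, 'channel': ..., 'data': ...}
    print('news:', message['data'])

async def consume(client):
    # `client` and its pubsub() factory are assumptions; only subscribe()
    # and get_message() mirror the coroutines documented above.
    pubsub = client.pubsub()
    await pubsub.subscribe(news=on_news)  # channel name -> handler callable
    for _ in range(10):
        # handled messages return None; subscribe confirmations are skipped
        await pubsub.get_message(ignore_subscribe_messages=True, timeout=1.0)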
if args:
    args = list_or_args(args[0], args[1:])
return await self.execute_command('UNSUBSCRIBE', *args)
async def unsubscribe(self, *args)
Unsubscribe from the supplied channels. If empty, unsubscribe from all channels
4.527956
5.006832
0.904355
"Listen for messages on channels this client has been subscribed to" if self.subscribed: return self.handle_message(await self.parse_response(block=True))
async def listen(self)
Listen for messages on channels this client has been subscribed to
16.728565
9.574595
1.747182
response = await self.parse_response(block=False, timeout=timeout)
if response:
    return self.handle_message(response, ignore_subscribe_messages)
return None
async def get_message(self, ignore_subscribe_messages=False, timeout=0)
Get the next message if one is available, otherwise None. If timeout is specified, the system will wait for `timeout` seconds before returning. Timeout should be specified as a floating point number.
4.459537
4.740153
0.9408
message_type = nativestr(response[0])
if message_type == 'pmessage':
    message = {
        'type': message_type,
        'pattern': response[1],
        'channel': response[2],
        'data': response[3]
    }
else:
    message = {
        'type': message_type,
        'pattern': None,
        'channel': response[1],
        'data': response[2]
    }

# if this is an unsubscribe message, remove it from memory
if message_type in self.UNSUBSCRIBE_MESSAGE_TYPES:
    subscribed_dict = None
    if message_type == 'punsubscribe':
        subscribed_dict = self.patterns
    else:
        subscribed_dict = self.channels
    try:
        del subscribed_dict[message['channel']]
    except KeyError:
        pass

if message_type in self.PUBLISH_MESSAGE_TYPES:
    # if there's a message handler, invoke it
    handler = None
    if message_type == 'pmessage':
        handler = self.patterns.get(message['pattern'], None)
    else:
        handler = self.channels.get(message['channel'], None)
    if handler:
        handler(message)
        return None
else:
    # this is a subscribe/unsubscribe message. ignore if we don't
    # want them
    if ignore_subscribe_messages or self.ignore_subscribe_messages:
        return None

return message
def handle_message(self, response, ignore_subscribe_messages=False)
Parses a pub/sub message. If the channel or pattern was subscribed to with a message handler, the handler is invoked instead of a parsed message being returned.
2.271947
2.090447
1.086823
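For reference, this is the shape of the dict handle_message above builds for a pattern delivery and hands to a registered handler. The field names come from the code; the byte values are only illustrative.

example_pmessage = {
    'type': 'pmessage',
    'pattern': b'news.*',       # pattern the subscription matched
    'channel': b'news.sports',  # concrete channel the publish went to
    'data': b'hello',           # published payload
}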
# NOTE: don't parse the response in this function -- it could pull a
# legitimate message off the stack if the connection is already
# subscribed to one or more channels
await self.connection_pool.initialize()
if self.connection is None:
    self.connection = self.connection_pool.get_connection(
        'pubsub',
        channel=args[1],
    )
    # register a callback that re-subscribes to any channels we
    # were listening to when we were disconnected
    self.connection.register_connect_callback(self.on_connect)
connection = self.connection
await self._execute(connection, connection.send_command, *args)
async def execute_command(self, *args, **kwargs)
Execute a publish/subscribe command. Code taken from redis-py and tweaked to work within a cluster.
6.947853
6.245548
1.112449
if self.identity_generator and param is not None:
    if self.serializer:
        param = self.serializer.serialize(param)
    if self.compressor:
        param = self.compressor.compress(param)
    identity = self.identity_generator.generate(key, param)
else:
    identity = key
return identity
def _gen_identity(self, key, param=None)
generate identity according to key and param given
2.777943
2.629824
1.056323
if self.serializer:
    content = self.serializer.serialize(content)
if self.compressor:
    content = self.compressor.compress(content)
return content
def _pack(self, content)
pack the content using serializer and compressor
3.109745
1.925532
1.615005
if self.compressor:
    try:
        content = self.compressor.decompress(content)
    except CompressError:
        pass
if self.serializer:
    content = self.serializer.deserialize(content)
return content
def _unpack(self, content)
unpack cache using serializer and compressor
3.309193
2.278605
1.452289
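The _pack/_unpack pair above only relies on duck-typed serializer and compressor objects. A minimal sketch of what such objects could look like, assuming a JSON serializer and a zlib compressor; the class names and exact exception behaviour are assumptions, not part of these records.

import json
import zlib

class JsonSerializer:
    # stand-in for the serializer interface _pack/_unpack expect
    def serialize(self, content):
        return json.dumps(content).encode('utf-8')

    def deserialize(self, content):
        return json.loads(content)

class ZlibCompressor:
    # stand-in for the compressor interface _pack/_unpack expect
    def compress(self, content):
        return zlib.compress(content)

    def decompress(self, content):
        return zlib.decompress(content)

Note that _pack applies serialize() and then compress(), while _unpack reverses the order: decompress first, deserialize second.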
identity = self._gen_identity(key, param)
return await self.client.delete(identity)
async def delete(self, key, param=None)
delete cache corresponding to identity generated from key and param
7.798144
4.568682
1.70687
cursor = '0'
count_deleted = 0
while cursor != 0:
    cursor, identities = await self.client.scan(
        cursor=cursor, match=pattern, count=count
    )
    # DEL with no keys is an error, so only delete when the scan batch is non-empty
    if identities:
        count_deleted += await self.client.delete(*identities)
return count_deleted
async def delete_pattern(self, pattern, count=None)
delete cache according to pattern in redis, delete `count` keys each time
4.342268
3.27694
1.325099
identity = self._gen_identity(key, param)
return await self.client.exists(identity)
async def exist(self, key, param=None)
see if specific identity exists
7.152221
4.743013
1.507949
identity = self._gen_identity(key, param)
return await self.client.ttl(identity)
async def ttl(self, key, param=None)
get time to live of a specific identity
8.371465
5.625719
1.48807
identity = self._gen_identity(key, param)
expected_expired_ts = int(time.time())
if expire_time:
    expected_expired_ts += expire_time
expected_expired_ts += herd_timeout or self.default_herd_timeout
value = self._pack([value, expected_expired_ts])
return await self.client.set(identity, value, ex=expire_time)
async def set(self, key, value, param=None, expire_time=None, herd_timeout=None)
Use key and param to generate the identity and pack the content; if expire_time is given, the key expires within real_timeout. real_timeout is equal to the sum of expire_time and herd_timeout. The content itself is cached with expire_time.
3.577266
3.100489
1.153774
identity = self._gen_identity(key, param)
res = await self.client.get(identity)
if res:
    res, timeout = self._unpack(res)
    now = int(time.time())
    if timeout <= now:
        extend_timeout = extend_herd_timeout or self.extend_herd_timeout
        expected_expired_ts = now + extend_timeout
        value = self._pack([res, expected_expired_ts])
        await self.client.set(identity, value, extend_timeout)
        return None
return res
async def get(self, key, param=None, extend_herd_timeout=None)
Use the identity generated from key and param to fetch the cached content and its expected expiry time. Compare the expiry time with the current time: if the cache is expired, re-set it with an extended timeout and return None; otherwise return the unpacked content.
3.659912
3.115328
1.174808
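A minimal usage sketch of the herd-style set()/get() pair above, assuming `cache` is an instance of the cache class these coroutines belong to (its construction is not shown in these records) and recompute() is a hypothetical function producing the fresh value.

def recompute():
    # placeholder for the expensive work being cached
    return 'fresh value'

async def cached_value(cache, key):
    value = await cache.get(key, param={'user': 42})
    if value is None:
        value = recompute()
        # cache for 60s, with a 10s herd window encoded in the packed timestamp
        await cache.set(key, value, param={'user': 42},
                        expire_time=60, herd_timeout=10)
    return value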
aggregate = options.get('aggregate', True)
if not aggregate:
    return res
return merge_result(res)
def parse_cluster_pubsub_channels(res, **options)
Result callback, handles different return types switchable by the `aggregate` flag.
6.805108
5.865212
1.160249
aggregate = options.get('aggregate', True)
if not aggregate:
    return res
numpat = 0
for node, node_numpat in res.items():
    numpat += node_numpat
return numpat
def parse_cluster_pubsub_numpat(res, **options)
Result callback, handles different return types switchable by the `aggregate` flag.
3.526815
3.387479
1.041133
aggregate = options.get('aggregate', True)
if not aggregate:
    return res

numsub_d = dict()
for _, numsub_tups in res.items():
    for channel, numsubbed in numsub_tups:
        try:
            numsub_d[channel] += numsubbed
        except KeyError:
            numsub_d[channel] = numsubbed

ret_numsub = []
for channel, numsub in numsub_d.items():
    ret_numsub.append((channel, numsub))
return ret_numsub
def parse_cluster_pubsub_numsub(res, **options)
Result callback, handles different return types switchable by the `aggregate` flag.
2.550853
2.500206
1.020257
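Since the three callbacks above merge per-node replies, a small illustration of the numsub case may help. The per-node reply shape is inferred from the parsing code; the node names and counts are made up.

per_node = {
    'node-a:7000': [(b'news', 2), (b'logs', 1)],
    'node-b:7001': [(b'news', 3)],
}
parse_cluster_pubsub_numsub(per_node)                   # [(b'news', 5), (b'logs', 1)]
parse_cluster_pubsub_numsub(per_node, aggregate=False)  # raw per-node dict, unchanged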
pieces = []
if max_len is not None:
    if not isinstance(max_len, int) or max_len < 1:
        raise RedisError("XADD maxlen must be a positive integer")
    pieces.append('MAXLEN')
    if approximate:
        pieces.append('~')
    pieces.append(str(max_len))
pieces.append(stream_id)
for kv in entry.items():
    pieces.extend(list(kv))
return await self.execute_command('XADD', name, *pieces)
async def xadd(self, name: str, entry: dict, max_len=None, stream_id='*', approximate=True) -> str
Appends the specified stream entry to the stream at the specified key. If the key does not exist, as a side effect of running this command the key is created with a stream value. Available since 5.0.0. Time complexity: O(log(N)) with N being the number of items already into the stream. :param name: name of the stream :param entry: key-values to be appended to the stream :param max_len: max length of the stream; the stream length will not be limited if max_len is set to None. notice: max_len should be an int greater than 0; if set to 0 or negative, the stream length will not be limited :param stream_id: id of the entry appended to the stream. The XADD command will auto-generate a unique id for you if the id argument specified is the * character. IDs are specified by two numbers separated by a "-" character :param approximate: whether redis will limit the stream to the given max length exactly; if set to True, there may be a few tens of entries more, but never less than 1000 items :return: the auto-generated id or the specified id given. notice: a specified id without the "-" character will be completed like "id-0"
2.538614
2.642344
0.960743
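A short xadd() usage sketch; `client` is assumed to be an instance of the class these stream coroutines belong to, and the stream name and fields are illustrative.

async def log_click(client):
    entry_id = await client.xadd(
        'events',                         # stream name
        {'type': 'click', 'user': '42'},  # entry fields
        max_len=10000,                    # keep roughly the last 10k entries
        approximate=True,                 # '~' lets redis trim lazily
    )
    return entry_id                       # auto-generated id such as b'1526919030474-55'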
pieces = [start, end]
if count is not None:
    if not isinstance(count, int) or count < 1:
        raise RedisError("XRANGE count must be a positive integer")
    pieces.append("COUNT")
    pieces.append(str(count))
return await self.execute_command('XRANGE', name, *pieces)
async def xrange(self, name: str, start='-', end='+', count=None) -> list
Read stream values within an interval. Available since 5.0.0. Time complexity: O(log(N)+M) with N being the number of elements in the stream and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(log(N)). :param name: name of the stream. :param start: first stream ID. defaults to '-', meaning the earliest available. :param end: last stream ID. defaults to '+', meaning the latest available. :param count: if set, only return this many items, beginning with the earliest available. :return list of (stream_id, entry(k-v pair))
2.920508
3.268617
0.8935
pieces = []
if block is not None:
    if not isinstance(block, int) or block < 1:
        raise RedisError("XREAD block must be a positive integer")
    pieces.append("BLOCK")
    pieces.append(str(block))
if count is not None:
    if not isinstance(count, int) or count < 1:
        raise RedisError("XREAD count must be a positive integer")
    pieces.append("COUNT")
    pieces.append(str(count))

pieces.append("STREAMS")
ids = []
for partial_stream in streams.items():
    pieces.append(partial_stream[0])
    ids.append(partial_stream[1])
pieces.extend(ids)
return await self.execute_command('XREAD', *pieces)
async def xread(self, count=None, block=None, **streams) -> dict
Available since 5.0.0. Time complexity: For each stream mentioned: O(log(N)+M) with N being the number of elements in the stream and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(log(N)). On the other side, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data. Read data from one or multiple streams, only returning entries with an ID greater than the last received ID reported by the caller. :param count: int, if set, only return this many items, beginning with the earliest available. :param block: int, milliseconds we want to block before timing out, if the BLOCK option is not used, the command is synchronous :param streams: stream_name - stream_id mapping :return dict like {stream_name: [(stream_id: entry), ...]}
1.957179
1.894178
1.03326
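A sketch of tailing a stream with xread(); `client` is again the assumed stream-capable client, and the per-entry callback is passed in by the caller rather than invented here.

async def tail(client, handle, last_id='$'):
    # handle: hypothetical per-entry callback supplied by the caller
    while True:
        # block up to 5000 ms for entries on 'events' newer than last_id
        result = await client.xread(count=10, block=5000, events=last_id)
        for _stream, entries in (result or {}).items():
            for entry_id, fields in entries:
                handle(fields)
                last_id = entry_id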
pieces = ['GROUP', group, consumer_id]
if block is not None:
    if not isinstance(block, int) or block < 1:
        raise RedisError("XREAD block must be a positive integer")
    pieces.append("BLOCK")
    pieces.append(str(block))
if count is not None:
    if not isinstance(count, int) or count < 1:
        raise RedisError("XREAD count must be a positive integer")
    pieces.append("COUNT")
    pieces.append(str(count))

pieces.append("STREAMS")
ids = []
for partial_stream in streams.items():
    pieces.append(partial_stream[0])
    ids.append(partial_stream[1])
pieces.extend(ids)
return await self.execute_command('XREADGROUP', *pieces)
async def xreadgroup(self, group: str, consumer_id: str, count=None, block=None, **streams)
Available since 5.0.0. Time complexity: For each stream mentioned: O(log(N)+M) with N being the number of elements in the stream and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(log(N)). On the other side, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data. Read data from one or multiple streams via the consumer group, only returning entries with an ID greater than the last received ID reported by the caller. :param group: the name of the consumer group :param consumer_id: the name of the consumer that is attempting to read :param count: int, if set, only return this many items, beginning with the earliest available. :param block: int, milliseconds we want to block before timing out, if the BLOCK option is not used, the command is synchronous :param streams: stream_name - stream_id mapping :return dict like {stream_name: [(stream_id: entry), ...]}
1.981914
2.077243
0.954108
pieces = [name, group]
if count is not None:
    pieces.extend([start, end, count])
    if consumer is not None:
        pieces.append(str(consumer))
# todo: may there be a parse function
return await self.execute_command('XPENDING', *pieces)
async def xpending(self, name: str, group: str, start='-', end='+', count=None, consumer=None) -> list
Available since 5.0.0. Time complexity: O(log(N)+M) with N being the number of elements in the consumer group pending entries list, and M the number of elements being returned. When the command returns just the summary it runs in O(1) time assuming the list of consumers is small, otherwise there is additional O(N) time needed to iterate every consumer. Fetching data from a stream via a consumer group, and not acknowledging such data, has the effect of creating pending entries. The XPENDING command is the interface to inspect the list of pending messages. :param name: name of the stream :param group: name of the consumer group :param start: first stream ID. defaults to '-', meaning the earliest available. :param end: last stream ID. defaults to '+', meaning the latest available. :param count: int, number of entries [NOTICE] only when count is set to int, start & end options will have effect and detail of pending entries will be returned :param consumer: str, consumer of the stream in the group [NOTICE] only when count is set to int, this option can be appended to query pending entries of given consumer
4.864145
5.757152
0.844887
pieces = ['MAXLEN']
if approximate:
    pieces.append('~')
pieces.append(max_len)
return await self.execute_command('XTRIM', name, *pieces)
async def xtrim(self, name: str, max_len: int, approximate=True) -> int
[NOTICE] Not officially released yet XTRIM is designed to accept different trimming strategies, even if currently only MAXLEN is implemented. :param name: name of the stream :param max_len: max length of the stream after being trimmed :param approximate: whether redis will limit the stream to the given max length exactly; if set to True, there may be a few tens of entries more, but never less than 1000 items :return: number of entries trimmed
5.652356
5.536719
1.020886
return await self.execute_command('XDEL', name, stream_id)
async def xdel(self, name: str, stream_id: str) -> int
[NOTICE] Not officially released yet [NOTICE] In the current implementation, memory is not really reclaimed until a macro node is completely empty, so you should not abuse this feature. Remove items from the middle of a stream, just by ID. :param name: name of the stream :param stream_id: id of the entry to be removed from the stream
5.210691
5.791314
0.899743
return await self.execute_command('XINFO CONSUMERS', name, group)
async def xinfo_consumers(self, name: str, group: str) -> list
[NOTICE] Not officially released yet XINFO command is an observability interface that can be used with sub-commands in order to get information about streams or consumer groups. :param name: name of the stream :param group: name of the consumer group
5.522385
7.666241
0.720351
return await self.execute_command('XACK', name, group, stream_id)
async def xack(self, name: str, group: str, stream_id: str) -> int
[NOTICE] Not officially released yet XACK is the command that allows a consumer to mark a pending message as correctly processed. :param name: name of the stream :param group: name of the consumer group :param stream_id: id of the entry the consumer wants to mark :return: number of entries marked
4.150182
5.905117
0.702811
return await self.execute_command('XCLAIM', name, group, consumer, min_idle_time, *stream_ids)
async def xclaim(self, name: str, group: str, consumer: str, min_idle_time: int, *stream_ids)
[NOTICE] Not officially released yet Gets ownership of one or multiple messages in the Pending Entries List of a given stream consumer group. :param name: name of the stream :param group: name of the consumer group :param consumer: name of the consumer :param min_idle_time: ms If the message ID (among the specified ones) exists, and its idle time is greater than or equal to min_idle_time, then the message's new owner becomes the specified <consumer>. If the minimum idle time specified is zero, messages are claimed regardless of their idle time. :param stream_ids:
2.592645
2.846649
0.910771
return await self.execute_command('XGROUP CREATE', name, group, stream_id)
async def xgroup_create(self, name: str, group: str, stream_id='$') -> bool
[NOTICE] Not officially released yet XGROUP is used in order to create, destroy and manage consumer groups. :param name: name of the stream :param group: name of the consumer group :param stream_id: If we provide $ as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify 0 instead the consumer group will consume all the messages in the stream history to start with. Of course, you can specify any other valid ID
4.619023
5.313025
0.869377
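Putting xgroup_create(), xreadgroup() and xack() together, a hedged sketch of a consumer-group worker loop; `client` is the assumed stream-capable client, process() is supplied by the caller, and swallowing the create error when the group already exists is an assumption rather than documented behaviour.

async def worker(client, process, stream='tasks', group='workers', consumer='worker-1'):
    try:
        await client.xgroup_create(stream, group, stream_id='$')
    except Exception:
        # the group may already exist (BUSYGROUP); ignoring the error is an assumption
        pass
    while True:
        # '>' asks only for entries never delivered to this group before
        result = await client.xreadgroup(group, consumer, count=10, block=5000,
                                         **{stream: '>'})
        for _stream, entries in (result or {}).items():
            for entry_id, fields in entries:
                process(fields)                       # caller-supplied handler
                await client.xack(stream, group, entry_id)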