Dataset schema (column: dtype, observed range of values or string lengths):

Unnamed: 0: int64, 0 to 2.93k
code: string, lengths 101 to 62.2k
docs: string, lengths 51 to 10.7k
doc_len: int64, 4 to 1.74k
words: int64, 4 to 4.82k
lang: string, 1 distinct class
prompt: string, lengths 320 to 71.2k
1,300
def monthdays2calendar(self, year, month): days = list(self.itermonthdays2(year, month)) return [ days[i:i+7] for i in range(0, len(days), 7) ]
Return a matrix representing a month's calendar. Each row represents a week; week entries are (day number, weekday number) tuples. Day numbers outside this month are zero.
27
18
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def monthdays2calendar(self, year, month): days = list(self.itermonthdays2(year, month)) return [ days[i:i+7] for i in range(0, len(days), 7) ] ``` ###Assistant : Return a matrix representing a month's calendar. Each row represents a week; week entries are (day number, weekday number) tuples. Day numbers outside this month are zero.
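This is the standard library's `calendar.Calendar.monthdays2calendar`; a quick usage sketch (the month is chosen arbitrarily):

```Python
import calendar

cal = calendar.Calendar()  # weeks start on Monday by default
weeks = cal.monthdays2calendar(2024, 2)
# Each week is a list of 7 (day number, weekday number) tuples;
# day 0 marks dates that fall outside February.
print(weeks[0])  # [(0, 0), (0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6)]
```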
1,301
async def predict_with_route(self, route_path, *args, **kwargs): if route_path not in self.dags: raise RayServeException(f"{route_path} does not exist in dags routes") return await self.dags[route_path].remote(*args, **kwargs)
Perform inference directly, without HTTP, for multiple DAG routes.
8
23
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python async def predict_with_route(self, route_path, *args, **kwargs): if route_path not in self.dags: raise RayServeException(f"{route_path} does not exist in dags routes") return await self.dags[route_path].remote(*args, **kwargs) ``` ###Assistant : Perform inference directly, without HTTP, for multiple DAG routes.
1,302
def should_recover(self): return ( self.num_failures < self.max_failures or self.max_failures < 0 or ( self.num_failures == self.max_failures and self.num_restore_failures < int(os.environ.get("TUNE_RESTORE_RETRY_NUM", 0)) ) )
Returns whether the trial qualifies for retrying, i.e. whether it has not failed more than max_failures times. Note this may return true even when there is no checkpoint, either because `self.checkpoint_freq` is `0` or because the trial failed before a checkpoint has been made.
45
23
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def should_recover(self): return ( self.num_failures < self.max_failures or self.max_failures < 0 or ( self.num_failures == self.max_failures and self.num_restore_failures < int(os.environ.get("TUNE_RESTORE_RETRY_NUM", 0)) ) ) ``` ###Assistant : Returns whether the trial qualifies for retrying, i.e. whether it has not failed more than max_failures times. Note this may return true even when there is no checkpoint, either because `self.checkpoint_freq` is `0` or because the trial failed before a checkpoint has been made.
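The retry condition is easier to read outside the class; a standalone sketch of the same boolean logic (the free-function signature is invented for illustration):

```Python
import os

def should_recover(num_failures, max_failures, num_restore_failures=0):
    # max_failures < 0 means retry forever; once num_failures hits the
    # limit, restore attempts are still allowed up to TUNE_RESTORE_RETRY_NUM.
    return (
        num_failures < max_failures
        or max_failures < 0
        or (
            num_failures == max_failures
            and num_restore_failures
            < int(os.environ.get("TUNE_RESTORE_RETRY_NUM", 0))
        )
    )

print(should_recover(2, 3))    # True: still under the limit
print(should_recover(3, 3))    # False unless TUNE_RESTORE_RETRY_NUM > 0
print(should_recover(99, -1))  # True: negative max_failures never gives up
```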
1,303
def test_change_root_page_locale_on_locale_deletion(self): # change 'real' pages first Page.objects.filter(depth__gt=1).update( locale=Locale.objects.get(language_code="fr") ) self.assertEqual(Page.get_first_root_node().locale.language_code, "en") Locale.objects.get(language_code="en").delete() self.assertEqual(Page.get_first_root_node().locale.language_code, "fr")
On deleting the locale used for the root page (but no 'real' pages), the root page should be reassigned to a new locale (the default one, if possible)
28
15
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_change_root_page_locale_on_locale_deletion(self): # change 'real' pages first Page.objects.filter(depth__gt=1).update( locale=Locale.objects.get(language_code="fr") ) self.assertEqual(Page.get_first_root_node().locale.language_code, "en") Locale.objects.get(language_code="en").delete() self.assertEqual(Page.get_first_root_node().locale.language_code, "fr") ``` ###Assistant : On deleting the locale used for the root page (but no 'real' pages), the root page should be reassigned to a new locale (the default one, if possible)
1,304
def _proc_pax(self, tarfile): # Read the header information. buf = tarfile.fileobj.read(self._block(self.size)) # A pax header stores supplemental information for either # the following file (extended) or all following files # (global). if self.type == XGLTYPE: pax_headers = tarfile.pax_headers else: pax_headers = tarfile.pax_headers.copy() # Check if the pax header contains a hdrcharset field. This tells us # the encoding of the path, linkpath, uname and gname fields. Normally, # these fields are UTF-8 encoded but since POSIX.1-2008 tar # implementations are allowed to store them as raw binary strings if # the translation to UTF-8 fails. match = re.search(br"\d+ hdrcharset=([^\n]+)\n", buf) if match is not None: pax_headers["hdrcharset"] = match.group(1).decode("utf8") # For the time being, we don't care about anything other than "BINARY". # The only other value that is currently allowed by the standard is # "ISO-IR 10646 2000 UTF-8" in other words UTF-8. hdrcharset = pax_headers.get("hdrcharset") if hdrcharset == "BINARY": encoding = tarfile.encoding else: encoding = "utf8" # Parse pax header information. A record looks like that: # "%d %s=%s\n" % (length, keyword, value). length is the size # of the complete record including the length field itself and # the newline. keyword and value are both UTF-8 encoded strings. regex = re.compile(br"(\d+) ([^=]+)=") pos = 0 while True: match = regex.match(buf, pos) if not match: break length, keyword = match.groups() length = int(length) value = buf[match.end(2) + 1:match.start(1) + length - 1] # Normally, we could just use "utf8" as the encoding and "strict" # as the error handler, but we better not take the risk. For # example, GNU tar <= 1.23 is known to store filenames it cannot # translate to UTF-8 as raw strings (unfortunately without a # hdrcharset=BINARY header). # We first try the strict standard encoding, and if that fails we # fall back on the user's encoding and error handler. keyword = self._decode_pax_field(keyword, "utf8", "utf8", tarfile.errors) if keyword in PAX_NAME_FIELDS: value = self._decode_pax_field(value, encoding, tarfile.encoding, tarfile.errors) else: value = self._decode_pax_field(value, "utf8", "utf8", tarfile.errors) pax_headers[keyword] = value pos += length # Fetch the next header. try: next = self.fromtarfile(tarfile) except HeaderError: raise SubsequentHeaderError("missing or bad subsequent header") # Process GNU sparse information. if "GNU.sparse.map" in pax_headers: # GNU extended sparse format version 0.1. self._proc_gnusparse_01(next, pax_headers) elif "GNU.sparse.size" in pax_headers: # GNU extended sparse format version 0.0. self._proc_gnusparse_00(next, pax_headers, buf) elif pax_headers.get("GNU.sparse.major") == "1" and pax_headers.get("GNU.sparse.minor") == "0": # GNU extended sparse format version 1.0. self._proc_gnusparse_10(next, pax_headers, tarfile) if self.type in (XHDTYPE, SOLARIS_XHDTYPE): # Patch the TarInfo object with the extended header info. next._apply_pax_info(pax_headers, tarfile.encoding, tarfile.errors) next.offset = self.offset if "size" in pax_headers: # If the extended header replaces the size field, # we need to recalculate the offset where the next # header starts. offset = next.offset_data if next.isreg() or next.type not in SUPPORTED_TYPES: offset += next._block(next.size) tarfile.offset = offset return next
Process an extended or global header as described in POSIX.1-2008.
10
468
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _proc_pax(self, tarfile): # Read the header information. buf = tarfile.fileobj.read(self._block(self.size)) # A pax header stores supplemental information for either # the following file (extended) or all following files # (global). if self.type == XGLTYPE: pax_headers = tarfile.pax_headers else: pax_headers = tarfile.pax_headers.copy() # Check if the pax header contains a hdrcharset field. This tells us # the encoding of the path, linkpath, uname and gname fields. Normally, # these fields are UTF-8 encoded but since POSIX.1-2008 tar # implementations are allowed to store them as raw binary strings if # the translation to UTF-8 fails. match = re.search(br"\d+ hdrcharset=([^\n]+)\n", buf) if match is not None: pax_headers["hdrcharset"] = match.group(1).decode("utf8") # For the time being, we don't care about anything other than "BINARY". # The only other value that is currently allowed by the standard is # "ISO-IR 10646 2000 UTF-8" in other words UTF-8. hdrcharset = pax_headers.get("hdrcharset") if hdrcharset == "BINARY": encoding = tarfile.encoding else: encoding = "utf8" # Parse pax header information. A record looks like that: # "%d %s=%s\n" % (length, keyword, value). length is the size # of the complete record including the length field itself and # the newline. keyword and value are both UTF-8 encoded strings. regex = re.compile(br"(\d+) ([^=]+)=") pos = 0 while True: match = regex.match(buf, pos) if not match: break length, keyword = match.groups() length = int(length) value = buf[match.end(2) + 1:match.start(1) + length - 1] # Normally, we could just use "utf8" as the encoding and "strict" # as the error handler, but we better not take the risk. For # example, GNU tar <= 1.23 is known to store filenames it cannot # translate to UTF-8 as raw strings (unfortunately without a # hdrcharset=BINARY header). # We first try the strict standard encoding, and if that fails we # fall back on the user's encoding and error handler. keyword = self._decode_pax_field(keyword, "utf8", "utf8", tarfile.errors) if keyword in PAX_NAME_FIELDS: value = self._decode_pax_field(value, encoding, tarfile.encoding, tarfile.errors) else: value = self._decode_pax_field(value, "utf8", "utf8", tarfile.errors) pax_headers[keyword] = value pos += length # Fetch the next header. try: next = self.fromtarfile(tarfile) except HeaderError: raise SubsequentHeaderError("missing or bad subsequent header") # Process GNU sparse information. if "GNU.sparse.map" in pax_headers: # GNU extended sparse format version 0.1. self._proc_gnusparse_01(next, pax_headers) elif "GNU.sparse.size" in pax_headers: # GNU extended sparse format version 0.0. self._proc_gnusparse_00(next, pax_headers, buf) elif pax_headers.get("GNU.sparse.major") == "1" and pax_headers.get("GNU.sparse.minor") == "0": # GNU extended sparse format version 1.0. self._proc_gnusparse_10(next, pax_headers, tarfile) if self.type in (XHDTYPE, SOLARIS_XHDTYPE): # Patch the TarInfo object with the extended header info. next._apply_pax_info(pax_headers, tarfile.encoding, tarfile.errors) next.offset = self.offset if "size" in pax_headers: # If the extended header replaces the size field, # we need to recalculate the offset where the next # header starts. 
offset = next.offset_data if next.isreg() or next.type not in SUPPORTED_TYPES: offset += next._block(next.size) tarfile.offset = offset return next ``` ###Assistant : Process an extended or global header as described in POSIX.1-2008.
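The subtle part of the pax format is the record header `"%d %s=%s\n"`: the leading number counts every byte of the record, including the length digits themselves and the trailing newline. A minimal parsing sketch using the same regex (the sample record is invented):

```Python
import re

buf = b"12 path=abc\n"  # 12 bytes total: "12 " + "path=" + "abc" + "\n"
match = re.match(br"(\d+) ([^=]+)=", buf)
length = int(match.group(1))                # 12
keyword = match.group(2)                    # b"path"
value = buf[match.end(2) + 1 : length - 1]  # b"abc", stopping before the \n
print(keyword, value)
```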
1,305
def get_global_params(): GlobalParams = namedtuple('GlobalParams', [ 'drop_connect_rate', 'width_coefficient', 'depth_coefficient', 'depth_divisor', 'image_size' ]) global_params = GlobalParams( drop_connect_rate=0.3, width_coefficient=1.2, depth_coefficient=1.4, depth_divisor=8, image_size=64) return global_params
The following are EfficientNet-B3's architecture hyperparameters, but to fit the scene text recognition task, the resolution (image_size) here is changed from 300 to 64.
23
22
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_global_params(): GlobalParams = namedtuple('GlobalParams', [ 'drop_connect_rate', 'width_coefficient', 'depth_coefficient', 'depth_divisor', 'image_size' ]) global_params = GlobalParams( drop_connect_rate=0.3, width_coefficient=1.2, depth_coefficient=1.4, depth_divisor=8, image_size=64) return global_params ``` ###Assistant : The following are EfficientNet-B3's architecture hyperparameters, but to fit the scene text recognition task, the resolution (image_size) here is changed from 300 to 64.
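The return value is a plain `namedtuple`, so callers read fields by attribute; a self-contained sketch of the same structure:

```Python
from collections import namedtuple

GlobalParams = namedtuple(
    "GlobalParams",
    ["drop_connect_rate", "width_coefficient", "depth_coefficient",
     "depth_divisor", "image_size"],
)
params = GlobalParams(
    drop_connect_rate=0.3, width_coefficient=1.2,
    depth_coefficient=1.4, depth_divisor=8, image_size=64)
print(params.image_size)  # 64, reduced from EfficientNet-B3's native 300
```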
1,306
def _create_closed(cls, vertices): v = _to_unmasked_float_array(vertices) return cls(np.concatenate([v, v[:1]]), closed=True)
Create a closed polygonal path going through *vertices*. Unlike ``Path(..., closed=True)``, *vertices* should **not** end with an entry for the CLOSEPATH; this entry is added by `._create_closed`.
27
10
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _create_closed(cls, vertices): v = _to_unmasked_float_array(vertices) return cls(np.concatenate([v, v[:1]]), closed=True) ``` ###Assistant : Create a closed polygonal path going through *vertices*. Unlike ``Path(..., closed=True)``, *vertices* should **not** end with an entry for the CLOSEPATH; this entry is added by `._create_closed`.
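The same closing behaviour is reachable through Matplotlib's public `Path` constructor, which accepts `closed=True`; a small sketch (the triangle vertices are arbitrary):

```Python
import numpy as np
from matplotlib.path import Path

triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
# Mirror _create_closed: repeat the first vertex, then pass closed=True
# so the final entry gets the CLOSEPOLY code.
path = Path(np.concatenate([triangle, triangle[:1]]), closed=True)
print(path.codes[-1] == Path.CLOSEPOLY)  # True
```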
1,307
def test_predict_proba(loss, global_random_seed): n_samples = 20 y_true, raw_prediction = random_y_true_raw_prediction( loss=loss, n_samples=n_samples, y_bound=(-100, 100), raw_bound=(-5, 5), seed=global_random_seed, ) if hasattr(loss, "predict_proba"): proba = loss.predict_proba(raw_prediction) assert proba.shape == (n_samples, loss.n_classes) assert np.sum(proba, axis=1) == approx(1, rel=1e-11) if hasattr(loss, "gradient_proba"): for grad, proba in ( (None, None), (None, np.empty_like(raw_prediction)), (np.empty_like(raw_prediction), None), (np.empty_like(raw_prediction), np.empty_like(raw_prediction)), ): grad, proba = loss.gradient_proba( y_true=y_true, raw_prediction=raw_prediction, sample_weight=None, gradient_out=grad, proba_out=proba, ) assert proba.shape == (n_samples, loss.n_classes) assert np.sum(proba, axis=1) == approx(1, rel=1e-11) assert_allclose( grad, loss.gradient( y_true=y_true, raw_prediction=raw_prediction, sample_weight=None, gradient_out=None, ), ) @pytest.mark.parametrize("loss", ALL_LOSSES) @pytest.mark.parametrize("sample_weight", [None, "range"]) @pytest.mark.parametrize("dtype", (np.float32, np.float64)) @pytest.mark.parametrize("order", ("C", "F"))
Test that predict_proba and gradient_proba work as expected.
8
93
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_predict_proba(loss, global_random_seed): n_samples = 20 y_true, raw_prediction = random_y_true_raw_prediction( loss=loss, n_samples=n_samples, y_bound=(-100, 100), raw_bound=(-5, 5), seed=global_random_seed, ) if hasattr(loss, "predict_proba"): proba = loss.predict_proba(raw_prediction) assert proba.shape == (n_samples, loss.n_classes) assert np.sum(proba, axis=1) == approx(1, rel=1e-11) if hasattr(loss, "gradient_proba"): for grad, proba in ( (None, None), (None, np.empty_like(raw_prediction)), (np.empty_like(raw_prediction), None), (np.empty_like(raw_prediction), np.empty_like(raw_prediction)), ): grad, proba = loss.gradient_proba( y_true=y_true, raw_prediction=raw_prediction, sample_weight=None, gradient_out=grad, proba_out=proba, ) assert proba.shape == (n_samples, loss.n_classes) assert np.sum(proba, axis=1) == approx(1, rel=1e-11) assert_allclose( grad, loss.gradient( y_true=y_true, raw_prediction=raw_prediction, sample_weight=None, gradient_out=None, ), ) @pytest.mark.parametrize("loss", ALL_LOSSES) @pytest.mark.parametrize("sample_weight", [None, "range"]) @pytest.mark.parametrize("dtype", (np.float32, np.float64)) @pytest.mark.parametrize("order", ("C", "F")) ``` ###Assistant : Test that predict_proba and gradient_proba work as expected.
1,308
def get_late_shipments(scorecard): return get_total_shipments(scorecard) - get_on_time_shipments(scorecard)
Gets the number of late shipments (counting each item) in the period (based on Purchase Receipts vs POs)
18
6
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_late_shipments(scorecard): return get_total_shipments(scorecard) - get_on_time_shipments(scorecard) ``` ###Assistant : Gets the number of late shipments (counting each item) in the period (based on Purchase Receipts vs POs)
1,309
def eval_loss(self, targets, predictions): eval_loss = 0 for of_name, of_obj in self.output_features.items(): of_eval_loss = of_obj.eval_loss(targets[of_name], predictions[of_name]) eval_loss += of_obj.loss["weight"] * of_eval_loss additional_loss = 0 additional_losses = self.losses() if additional_losses: additional_loss = torch.sum(torch.stack(additional_losses)) # other losses return eval_loss, additional_loss
Computes all evaluation losses for the model given targets and predictions. Args: targets: A dictionary of target names to target tensors. predictions: A dictionary of output names to output tensors. Returns: A tuple of loss values for eval losses and additional losses.
42
38
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def eval_loss(self, targets, predictions): eval_loss = 0 for of_name, of_obj in self.output_features.items(): of_eval_loss = of_obj.eval_loss(targets[of_name], predictions[of_name]) eval_loss += of_obj.loss["weight"] * of_eval_loss additional_loss = 0 additional_losses = self.losses() if additional_losses: additional_loss = torch.sum(torch.stack(additional_losses)) # other losses return eval_loss, additional_loss ``` ###Assistant : Computes all evaluation losses for the model given targets and predictions. Args: targets: A dictionary of target names to target tensors. predictions: A dictionary of output names to output tensors. Returns: A tuple of loss values for eval losses and additional losses.
1,310
def partition_query(self, query, limit, offset): return ( ( f"SELECT * FROM ({query}) AS _ ORDER BY(SELECT NULL)" + f" OFFSET {offset} ROWS FETCH NEXT {limit} ROWS ONLY" ) if self._dialect_is_microsoft_sql() else f"SELECT * FROM ({query}) LIMIT {limit} OFFSET {offset}" )
Get a query that partitions the original `query`. Parameters ---------- query : str The SQL query to get a partition. limit : int The size of the partition. offset : int Where the partition begins. Returns ------- str
38
40
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def partition_query(self, query, limit, offset): return ( ( f"SELECT * FROM ({query}) AS _ ORDER BY(SELECT NULL)" + f" OFFSET {offset} ROWS FETCH NEXT {limit} ROWS ONLY" ) if self._dialect_is_microsoft_sql() else f"SELECT * FROM ({query}) LIMIT {limit} OFFSET {offset}" ) ``` ###Assistant : Get a query that partitions the original `query`. Parameters ---------- query : str The SQL query to get a partition. limit : int The size of the partition. offset : int Where the partition begins. Returns ------- str
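The dialect split exists because SQL Server has no `LIMIT`/`OFFSET` clause and requires an `ORDER BY` before `OFFSET ... FETCH`. A standalone sketch of the same string construction (written as a free function for illustration):

```Python
def partition_query(query, limit, offset, mssql=False):
    if mssql:
        # ORDER BY (SELECT NULL) satisfies SQL Server's requirement that
        # OFFSET/FETCH follow an ORDER BY, without imposing a real ordering.
        return (
            f"SELECT * FROM ({query}) AS _ ORDER BY(SELECT NULL)"
            f" OFFSET {offset} ROWS FETCH NEXT {limit} ROWS ONLY"
        )
    return f"SELECT * FROM ({query}) LIMIT {limit} OFFSET {offset}"

print(partition_query("SELECT * FROM trips", 100, 200))
# SELECT * FROM (SELECT * FROM trips) LIMIT 100 OFFSET 200
```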
1,311
def test_parameter_ends_with__in__or__isnull(self): # When it ends with '__in' ----------------------------------------- modeladmin = DecadeFilterBookAdminParameterEndsWith__In(Book, site) request = self.request_factory.get("/", {"decade__in": "the 90s"}) request.user = self.alfred changelist = modeladmin.get_changelist_instance(request) # Make sure the correct queryset is returned queryset = changelist.get_queryset(request) self.assertEqual(list(queryset), [self.bio_book]) # Make sure the correct choice is selected filterspec = changelist.get_filters(request)[0][0] self.assertEqual(filterspec.title, "publication decade") choices = list(filterspec.choices(changelist)) self.assertEqual(choices[2]["display"], "the 1990's") self.assertIs(choices[2]["selected"], True) self.assertEqual(choices[2]["query_string"], "?decade__in=the+90s") # When it ends with '__isnull' --------------------------------------- modeladmin = DecadeFilterBookAdminParameterEndsWith__Isnull(Book, site) request = self.request_factory.get("/", {"decade__isnull": "the 90s"}) request.user = self.alfred changelist = modeladmin.get_changelist_instance(request) # Make sure the correct queryset is returned queryset = changelist.get_queryset(request) self.assertEqual(list(queryset), [self.bio_book]) # Make sure the correct choice is selected filterspec = changelist.get_filters(request)[0][0] self.assertEqual(filterspec.title, "publication decade") choices = list(filterspec.choices(changelist)) self.assertEqual(choices[2]["display"], "the 1990's") self.assertIs(choices[2]["selected"], True) self.assertEqual(choices[2]["query_string"], "?decade__isnull=the+90s")
A SimpleListFilter's parameter name is not mistaken for a model field if it ends with '__isnull' or '__in' (#17091).
19
122
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_parameter_ends_with__in__or__isnull(self): # When it ends with '__in' ----------------------------------------- modeladmin = DecadeFilterBookAdminParameterEndsWith__In(Book, site) request = self.request_factory.get("/", {"decade__in": "the 90s"}) request.user = self.alfred changelist = modeladmin.get_changelist_instance(request) # Make sure the correct queryset is returned queryset = changelist.get_queryset(request) self.assertEqual(list(queryset), [self.bio_book]) # Make sure the correct choice is selected filterspec = changelist.get_filters(request)[0][0] self.assertEqual(filterspec.title, "publication decade") choices = list(filterspec.choices(changelist)) self.assertEqual(choices[2]["display"], "the 1990's") self.assertIs(choices[2]["selected"], True) self.assertEqual(choices[2]["query_string"], "?decade__in=the+90s") # When it ends with '__isnull' --------------------------------------- modeladmin = DecadeFilterBookAdminParameterEndsWith__Isnull(Book, site) request = self.request_factory.get("/", {"decade__isnull": "the 90s"}) request.user = self.alfred changelist = modeladmin.get_changelist_instance(request) # Make sure the correct queryset is returned queryset = changelist.get_queryset(request) self.assertEqual(list(queryset), [self.bio_book]) # Make sure the correct choice is selected filterspec = changelist.get_filters(request)[0][0] self.assertEqual(filterspec.title, "publication decade") choices = list(filterspec.choices(changelist)) self.assertEqual(choices[2]["display"], "the 1990's") self.assertIs(choices[2]["selected"], True) self.assertEqual(choices[2]["query_string"], "?decade__isnull=the+90s") ``` ###Assistant : A SimpleListFilter's parameter name is not mistaken for a model field if it ends with '__isnull' or '__in' (#17091).
1,312
def v4_int_to_packed(address): try: return address.to_bytes(4, 'big') except OverflowError: raise ValueError("Address negative or too large for IPv4")
Represent an address as 4 packed bytes in network (big-endian) order. Args: address: An integer representation of an IPv4 IP address. Returns: The integer address packed as 4 bytes in network (big-endian) order. Raises: ValueError: If the integer is negative or too large to be an IPv4 IP address.
49
16
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def v4_int_to_packed(address): try: return address.to_bytes(4, 'big') except OverflowError: raise ValueError("Address negative or too large for IPv4") ``` ###Assistant : Represent an address as 4 packed bytes in network (big-endian) order. Args: address: An integer representation of an IPv4 IP address. Returns: The integer address packed as 4 bytes in network (big-endian) order. Raises: ValueError: If the integer is negative or too large to be an IPv4 IP address.
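This is the standard library's `ipaddress.v4_int_to_packed`; a quick usage sketch:

```Python
from ipaddress import v4_int_to_packed

print(v4_int_to_packed(3232235777))  # b'\xc0\xa8\x01\x01', i.e. 192.168.1.1
try:
    v4_int_to_packed(2**32)  # one bit too many for 4 bytes
except ValueError as exc:
    print(exc)
```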
1,313
def test_delete_alias_not_allowed(self) -> None: self._create_alias(self.admin_user) self.get_failure( self.handler.delete_association( create_requester(self.test_user), self.room_alias ), synapse.api.errors.AuthError, )
A user that doesn't meet the expected guidelines cannot delete an alias.
12
12
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_delete_alias_not_allowed(self) -> None: self._create_alias(self.admin_user) self.get_failure( self.handler.delete_association( create_requester(self.test_user), self.room_alias ), synapse.api.errors.AuthError, ) ``` ###Assistant : A user that doesn't meet the expected guidelines cannot delete an alias.
1,314
def trigintegrate(f, x, conds='piecewise'): pat, a, n, m = _pat_sincos(x) f = f.rewrite('sincos') M = f.match(pat) if M is None: return n, m = M[n], M[m] if n.is_zero and m.is_zero: return x zz = x if n.is_zero else S.Zero a = M[a] if n.is_odd or m.is_odd: u = _u n_, m_ = n.is_odd, m.is_odd # take smallest n or m -- to choose simplest substitution if n_ and m_: # Make sure to choose the positive one # otherwise an incorrect integral can occur. if n < 0 and m > 0: m_ = True n_ = False elif m < 0 and n > 0: n_ = True m_ = False # Both are negative so choose the smallest n or m # in absolute value for simplest substitution. elif (n < 0 and m < 0): n_ = n > m m_ = not (n > m) # Both n and m are odd and positive else: n_ = (n < m) # NB: careful here, one of the m_ = not (n < m) # conditions *must* be true # n m u=C (n-1)/2 m # S(x) * C(x) dx --> -(1-u^2) * u du if n_: ff = -(1 - u**2)**((n - 1)/2) * u**m uu = cos(a*x) # n m u=S n (m-1)/2 # S(x) * C(x) dx --> u * (1-u^2) du elif m_: ff = u**n * (1 - u**2)**((m - 1)/2) uu = sin(a*x) fi = integrate(ff, u) # XXX cyclic deps fx = fi.subs(u, uu) if conds == 'piecewise': return Piecewise((fx / a, Ne(a, 0)), (zz, True)) return fx / a # n & m are both even # # 2k 2m 2l 2l # we transform S (x) * C (x) into terms with only S (x) or C (x) # # example: # 100 4 100 2 2 100 4 2 # S (x) * C (x) = S (x) * (1-S (x)) = S (x) * (1 + S (x) - 2*S (x)) # # 104 102 100 # = S (x) - 2*S (x) + S (x) # 2k # then S is integrated with recursive formula # take largest n or m -- to choose simplest substitution n_ = (Abs(n) > Abs(m)) m_ = (Abs(m) > Abs(n)) res = S.Zero if n_: # 2k 2 k i 2i # C = (1 - S ) = sum(i, (-) * B(k, i) * S ) if m > 0: for i in range(0, m//2 + 1): res += (S.NegativeOne**i * binomial(m//2, i) * _sin_pow_integrate(n + 2*i, x)) elif m == 0: res = _sin_pow_integrate(n, x) else: # m < 0 , |n| > |m| # / # | # | m n # | cos (x) sin (x) dx = # | # | #/ # / # | # -1 m+1 n-1 n - 1 | m+2 n-2 # ________ cos (x) sin (x) + _______ | cos (x) sin (x) dx # | # m + 1 m + 1 | # / res = (Rational(-1, m + 1) * cos(x)**(m + 1) * sin(x)**(n - 1) + Rational(n - 1, m + 1) * trigintegrate(cos(x)**(m + 2)*sin(x)**(n - 2), x)) elif m_: # 2k 2 k i 2i # S = (1 - C ) = sum(i, (-) * B(k, i) * C ) if n > 0: # / / # | | # | m n | -m n # | cos (x)*sin (x) dx or | cos (x) * sin (x) dx # | | # / / # # |m| > |n| ; m, n >0 ; m, n belong to Z - {0} # n 2 # sin (x) term is expanded here in terms of cos (x), # and then integrated. # for i in range(0, n//2 + 1): res += (S.NegativeOne**i * binomial(n//2, i) * _cos_pow_integrate(m + 2*i, x)) elif n == 0: # / # | # | 1 # | _ _ _ # | m # | cos (x) # / # res = _cos_pow_integrate(m, x) else: # n < 0 , |m| > |n| # / # | # | m n # | cos (x) sin (x) dx = # | # | #/ # / # | # 1 m-1 n+1 m - 1 | m-2 n+2 # _______ cos (x) sin (x) + _______ | cos (x) sin (x) dx # | # n + 1 n + 1 | # / res = (Rational(1, n + 1) * cos(x)**(m - 1)*sin(x)**(n + 1) + Rational(m - 1, n + 1) * trigintegrate(cos(x)**(m - 2)*sin(x)**(n + 2), x)) else: if m == n: ##Substitute sin(2x)/2 for sin(x)cos(x) and then Integrate. res = integrate((sin(2*x)*S.Half)**m, x) elif (m == -n): if n < 0: # Same as the scheme described above. # the function argument to integrate in the end will # be 1, this cannot be integrated by trigintegrate. # Hence use sympy.integrals.integrate. 
res = (Rational(1, n + 1) * cos(x)**(m - 1) * sin(x)**(n + 1) + Rational(m - 1, n + 1) * integrate(cos(x)**(m - 2) * sin(x)**(n + 2), x)) else: res = (Rational(-1, m + 1) * cos(x)**(m + 1) * sin(x)**(n - 1) + Rational(n - 1, m + 1) * integrate(cos(x)**(m + 2)*sin(x)**(n - 2), x)) if conds == 'piecewise': return Piecewise((res.subs(x, a*x) / a, Ne(a, 0)), (zz, True)) return res.subs(x, a*x) / a
Integrate f = Mul(trig) over x. Examples ======== >>> from sympy import sin, cos, tan, sec >>> from sympy.integrals.trigonometry import trigintegrate >>> from sympy.abc import x >>> trigintegrate(sin(x)*cos(x), x) sin(x)**2/2 >>> trigintegrate(sin(x)**2, x) x/2 - sin(x)*cos(x)/2 >>> trigintegrate(tan(x)*sec(x), x) 1/cos(x) >>> trigintegrate(sin(x)*tan(x), x) -log(sin(x) - 1)/2 + log(sin(x) + 1)/2 - sin(x) References ========== .. [1] http://en.wikibooks.org/wiki/Calculus/Integration_techniques See Also ======== sympy.integrals.integrals.Integral.doit sympy.integrals.integrals.Integral
62
909
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def trigintegrate(f, x, conds='piecewise'): pat, a, n, m = _pat_sincos(x) f = f.rewrite('sincos') M = f.match(pat) if M is None: return n, m = M[n], M[m] if n.is_zero and m.is_zero: return x zz = x if n.is_zero else S.Zero a = M[a] if n.is_odd or m.is_odd: u = _u n_, m_ = n.is_odd, m.is_odd # take smallest n or m -- to choose simplest substitution if n_ and m_: # Make sure to choose the positive one # otherwise an incorrect integral can occur. if n < 0 and m > 0: m_ = True n_ = False elif m < 0 and n > 0: n_ = True m_ = False # Both are negative so choose the smallest n or m # in absolute value for simplest substitution. elif (n < 0 and m < 0): n_ = n > m m_ = not (n > m) # Both n and m are odd and positive else: n_ = (n < m) # NB: careful here, one of the m_ = not (n < m) # conditions *must* be true # n m u=C (n-1)/2 m # S(x) * C(x) dx --> -(1-u^2) * u du if n_: ff = -(1 - u**2)**((n - 1)/2) * u**m uu = cos(a*x) # n m u=S n (m-1)/2 # S(x) * C(x) dx --> u * (1-u^2) du elif m_: ff = u**n * (1 - u**2)**((m - 1)/2) uu = sin(a*x) fi = integrate(ff, u) # XXX cyclic deps fx = fi.subs(u, uu) if conds == 'piecewise': return Piecewise((fx / a, Ne(a, 0)), (zz, True)) return fx / a # n & m are both even # # 2k 2m 2l 2l # we transform S (x) * C (x) into terms with only S (x) or C (x) # # example: # 100 4 100 2 2 100 4 2 # S (x) * C (x) = S (x) * (1-S (x)) = S (x) * (1 + S (x) - 2*S (x)) # # 104 102 100 # = S (x) - 2*S (x) + S (x) # 2k # then S is integrated with recursive formula # take largest n or m -- to choose simplest substitution n_ = (Abs(n) > Abs(m)) m_ = (Abs(m) > Abs(n)) res = S.Zero if n_: # 2k 2 k i 2i # C = (1 - S ) = sum(i, (-) * B(k, i) * S ) if m > 0: for i in range(0, m//2 + 1): res += (S.NegativeOne**i * binomial(m//2, i) * _sin_pow_integrate(n + 2*i, x)) elif m == 0: res = _sin_pow_integrate(n, x) else: # m < 0 , |n| > |m| # / # | # | m n # | cos (x) sin (x) dx = # | # | #/ # / # | # -1 m+1 n-1 n - 1 | m+2 n-2 # ________ cos (x) sin (x) + _______ | cos (x) sin (x) dx # | # m + 1 m + 1 | # / res = (Rational(-1, m + 1) * cos(x)**(m + 1) * sin(x)**(n - 1) + Rational(n - 1, m + 1) * trigintegrate(cos(x)**(m + 2)*sin(x)**(n - 2), x)) elif m_: # 2k 2 k i 2i # S = (1 - C ) = sum(i, (-) * B(k, i) * C ) if n > 0: # / / # | | # | m n | -m n # | cos (x)*sin (x) dx or | cos (x) * sin (x) dx # | | # / / # # |m| > |n| ; m, n >0 ; m, n belong to Z - {0} # n 2 # sin (x) term is expanded here in terms of cos (x), # and then integrated. # for i in range(0, n//2 + 1): res += (S.NegativeOne**i * binomial(n//2, i) * _cos_pow_integrate(m + 2*i, x)) elif n == 0: # / # | # | 1 # | _ _ _ # | m # | cos (x) # / # res = _cos_pow_integrate(m, x) else: # n < 0 , |m| > |n| # / # | # | m n # | cos (x) sin (x) dx = # | # | #/ # / # | # 1 m-1 n+1 m - 1 | m-2 n+2 # _______ cos (x) sin (x) + _______ | cos (x) sin (x) dx # | # n + 1 n + 1 | # / res = (Rational(1, n + 1) * cos(x)**(m - 1)*sin(x)**(n + 1) + Rational(m - 1, n + 1) * trigintegrate(cos(x)**(m - 2)*sin(x)**(n + 2), x)) else: if m == n: ##Substitute sin(2x)/2 for sin(x)cos(x) and then Integrate. res = integrate((sin(2*x)*S.Half)**m, x) elif (m == -n): if n < 0: # Same as the scheme described above. # the function argument to integrate in the end will # be 1, this cannot be integrated by trigintegrate. # Hence use sympy.integrals.integrate. 
res = (Rational(1, n + 1) * cos(x)**(m - 1) * sin(x)**(n + 1) + Rational(m - 1, n + 1) * integrate(cos(x)**(m - 2) * sin(x)**(n + 2), x)) else: res = (Rational(-1, m + 1) * cos(x)**(m + 1) * sin(x)**(n - 1) + Rational(n - 1, m + 1) * integrate(cos(x)**(m + 2)*sin(x)**(n - 2), x)) if conds == 'piecewise': return Piecewise((res.subs(x, a*x) / a, Ne(a, 0)), (zz, True)) return res.subs(x, a*x) / a ``` ###Assistant : Integrate f = Mul(trig) over x. Examples ======== >>> from sympy import sin, cos, tan, sec >>> from sympy.integrals.trigonometry import trigintegrate >>> from sympy.abc import x >>> trigintegrate(sin(x)*cos(x), x) sin(x)**2/2 >>> trigintegrate(sin(x)**2, x) x/2 - sin(x)*cos(x)/2 >>> trigintegrate(tan(x)*sec(x), x) 1/cos(x) >>> trigintegrate(sin(x)*tan(x), x) -log(sin(x) - 1)/2 + log(sin(x) + 1)/2 - sin(x) References ========== .. [1] http://en.wikibooks.org/wiki/Calculus/Integration_techniques See Also ======== sympy.integrals.integrals.Integral.doit sympy.integrals.integrals.Integral
1,315
def get_pywin32_module_file_attribute(module_name): from PyInstaller.utils.win32 import winutils module = winutils.import_pywin32_module(module_name) return module.__file__
Get the absolute path of the PyWin32 DLL specific to the PyWin32 module with the passed name. On import, each PyWin32 module: * Imports a DLL specific to that module. * Overwrites the values of all module attributes with values specific to that DLL. This includes that module's `__file__` attribute, which then provides the absolute path of that DLL. This function safely imports that module in a PyWin32-aware subprocess and returns the value of that module's `__file__` attribute. Parameters ---------- module_name : str Fully-qualified name of that module. Returns ---------- str Absolute path of that DLL. See Also ---------- `PyInstaller.utils.win32.winutils.import_pywin32_module()` For further details.
103
11
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_pywin32_module_file_attribute(module_name): from PyInstaller.utils.win32 import winutils module = winutils.import_pywin32_module(module_name) return module.__file__ ``` ###Assistant : Get the absolute path of the PyWin32 DLL specific to the PyWin32 module with the passed name. On import, each PyWin32 module: * Imports a DLL specific to that module. * Overwrites the values of all module attributes with values specific to that DLL. This includes that module's `__file__` attribute, which then provides the absolute path of that DLL. This function safely imports that module in a PyWin32-aware subprocess and returns the value of that module's `__file__` attribute. Parameters ---------- module_name : str Fully-qualified name of that module. Returns ---------- str Absolute path of that DLL. See Also ---------- `PyInstaller.utils.win32.winutils.import_pywin32_module()` For further details.
1,316
def render_warning(self, message): context = {"error": message} return render_to_response("sentry/pipeline-provider-error.html", context, self.request)
For situations when we want to display an error without triggering an issue
13
11
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def render_warning(self, message): context = {"error": message} return render_to_response("sentry/pipeline-provider-error.html", context, self.request) ``` ###Assistant : For situations when we want to display an error without triggering an issue
1,317
def test_context_filter_not_labels(self) -> None: event_id = self._send_labelled_messages_in_room() channel = self.make_request( "GET", "/rooms/%s/context/%s?filter=%s" % (self.room_id, event_id, json.dumps(self.FILTER_NOT_LABELS)), access_token=self.tok, ) self.assertEqual(channel.code, 200, channel.result) events_before = channel.json_body["events_before"] self.assertEqual( len(events_before), 1, [event["content"] for event in events_before] ) self.assertEqual( events_before[0]["content"]["body"], "without label", events_before[0] ) events_after = channel.json_body["events_after"] self.assertEqual( len(events_after), 2, [event["content"] for event in events_after] ) self.assertEqual( events_after[0]["content"]["body"], "with wrong label", events_after[0] ) self.assertEqual( events_after[1]["content"]["body"], "with two wrong labels", events_after[1] )
Test that we can filter by the absence of a label on a /context request.
15
66
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_context_filter_not_labels(self) -> None: event_id = self._send_labelled_messages_in_room() channel = self.make_request( "GET", "/rooms/%s/context/%s?filter=%s" % (self.room_id, event_id, json.dumps(self.FILTER_NOT_LABELS)), access_token=self.tok, ) self.assertEqual(channel.code, 200, channel.result) events_before = channel.json_body["events_before"] self.assertEqual( len(events_before), 1, [event["content"] for event in events_before] ) self.assertEqual( events_before[0]["content"]["body"], "without label", events_before[0] ) events_after = channel.json_body["events_after"] self.assertEqual( len(events_after), 2, [event["content"] for event in events_after] ) self.assertEqual( events_after[0]["content"]["body"], "with wrong label", events_after[0] ) self.assertEqual( events_after[1]["content"]["body"], "with two wrong labels", events_after[1] ) ``` ###Assistant : Test that we can filter by the absence of a label on a /context request.
1,318
def _consistent_PT(u, v, graph_params, state_params): G1, G2 = graph_params.G1, graph_params.G2 mapping, reverse_mapping = state_params.mapping, state_params.reverse_mapping for neighbor in G1[u]: if neighbor in mapping: if G1.number_of_edges(u, neighbor) != G2.number_of_edges( v, mapping[neighbor] ): return False for neighbor in G2[v]: if neighbor in reverse_mapping: if G1.number_of_edges(u, reverse_mapping[neighbor]) != G2.number_of_edges( v, neighbor ): return False return True
Checks the consistency of extending the mapping using the current node pair. Parameters ---------- u, v: Graph node The two candidate nodes being examined. graph_params: namedtuple Contains all the Graph-related parameters: G1,G2: NetworkX Graph or MultiGraph instances. The two graphs to check for isomorphism or monomorphism G1_labels,G2_labels: dict The label of every node in G1 and G2 respectively state_params: namedtuple Contains all the State-related parameters: mapping: dict The mapping as extended so far. Maps nodes of G1 to nodes of G2 reverse_mapping: dict The reverse mapping as extended so far. Maps nodes from G2 to nodes of G1. It's basically "mapping" reversed T1, T2: set Ti contains uncovered neighbors of covered nodes from Gi, i.e. nodes that are not in the mapping, but are neighbors of nodes that are. T1_out, T2_out: set Ti_out contains all the nodes from Gi, that are neither in the mapping nor in Ti Returns ------- True if the pair passes all the consistency checks successfully. False otherwise.
162
53
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _consistent_PT(u, v, graph_params, state_params): G1, G2 = graph_params.G1, graph_params.G2 mapping, reverse_mapping = state_params.mapping, state_params.reverse_mapping for neighbor in G1[u]: if neighbor in mapping: if G1.number_of_edges(u, neighbor) != G2.number_of_edges( v, mapping[neighbor] ): return False for neighbor in G2[v]: if neighbor in reverse_mapping: if G1.number_of_edges(u, reverse_mapping[neighbor]) != G2.number_of_edges( v, neighbor ): return False return True ``` ###Assistant : Checks the consistency of extending the mapping using the current node pair. Parameters ---------- u, v: Graph node The two candidate nodes being examined. graph_params: namedtuple Contains all the Graph-related parameters: G1,G2: NetworkX Graph or MultiGraph instances. The two graphs to check for isomorphism or monomorphism G1_labels,G2_labels: dict The label of every node in G1 and G2 respectively state_params: namedtuple Contains all the State-related parameters: mapping: dict The mapping as extended so far. Maps nodes of G1 to nodes of G2 reverse_mapping: dict The reverse mapping as extended so far. Maps nodes from G2 to nodes of G1. It's basically "mapping" reversed T1, T2: set Ti contains uncovered neighbors of covered nodes from Gi, i.e. nodes that are not in the mapping, but are neighbors of nodes that are. T1_out, T2_out: set Ti_out contains all the nodes from Gi, that are neither in the mapping nor in Ti Returns ------- True if the pair passes all the consistency checks successfully. False otherwise.
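The edge-multiplicity check is easiest to see on a tiny example; a sketch with two path graphs (node names are arbitrary):

```Python
import networkx as nx

G1 = nx.Graph([(1, 2), (2, 3)])
G2 = nx.Graph([("a", "b"), ("b", "c")])
mapping = {1: "a"}  # partial mapping built so far

# Candidate pair (2, "b") is consistent iff every already-mapped neighbor
# of 2 in G1 has the same edge multiplicity to "b" in G2.
consistent = all(
    G1.number_of_edges(2, n) == G2.number_of_edges("b", mapping[n])
    for n in G1[2]
    if n in mapping
)
print(consistent)  # True: edge (1, 2) maps cleanly onto ("a", "b")
```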
1,319
def get_variant_values_for(items): attribute_map = {} for attr in frappe.db.sql( % ", ".join(["%s"] * len(items)), tuple(items), as_dict=1, ): attribute_map.setdefault(attr["parent"], {}) attribute_map[attr["parent"]].update({attr["attribute"]: attr["attribute_value"]}) return attribute_map
Returns variant values for items. Embedded query: select parent, attribute, attribute_value from `tabItem Variant Attribute` where parent in (%s)
16
23
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_variant_values_for(items): attribute_map = {} for attr in frappe.db.sql( % ", ".join(["%s"] * len(items)), tuple(items), as_dict=1, ): attribute_map.setdefault(attr["parent"], {}) attribute_map[attr["parent"]].update({attr["attribute"]: attr["attribute_value"]}) return attribute_map ``` ###Assistant : Returns variant values for items. Embedded query: select parent, attribute, attribute_value from `tabItem Variant Attribute` where parent in (%s)
1,320
def test_edit_post_locked_by_self(self): # Lock the snippet self.lock_snippet(self.user) # Try to edit the snippet response = self.client.post( self.get_url("edit"), {"text": "Edited while locked"}, follow=True, ) self.refresh_snippet() # Should not show error message self.assertNotContains( response, f"The {self.model_name} could not be saved as it is locked", ) # Check that the snippet is still locked self.assertTrue(self.snippet.locked) # Check that the snippet is edited self.assertEqual(self.snippet.text, "Edited while locked")
A user can edit a snippet that is locked by themselves.
11
63
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_edit_post_locked_by_self(self): # Lock the snippet self.lock_snippet(self.user) # Try to edit the snippet response = self.client.post( self.get_url("edit"), {"text": "Edited while locked"}, follow=True, ) self.refresh_snippet() # Should not show error message self.assertNotContains( response, f"The {self.model_name} could not be saved as it is locked", ) # Check that the snippet is still locked self.assertTrue(self.snippet.locked) # Check that the snippet is edited self.assertEqual(self.snippet.text, "Edited while locked") ``` ###Assistant : A user can edit a snippet that is locked by themselves.
1,321
def update_metrics(self, targets, predictions): for of_name, of_obj in self.output_features.items(): of_obj.update_metrics(targets[of_name], predictions[of_name]) eval_loss, additional_losses = self.eval_loss(targets, predictions) self.eval_loss_metric.update(eval_loss) self.eval_additional_losses_metrics.update(additional_losses)
Updates the model's metrics given targets and predictions.
8
18
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def update_metrics(self, targets, predictions): for of_name, of_obj in self.output_features.items(): of_obj.update_metrics(targets[of_name], predictions[of_name]) eval_loss, additional_losses = self.eval_loss(targets, predictions) self.eval_loss_metric.update(eval_loss) self.eval_additional_losses_metrics.update(additional_losses) ``` ###Assistant : Updates the model's metrics given targets and predictions.
1,322
def tick_right(self): label = True if 'label1On' in self._major_tick_kw: label = (self._major_tick_kw['label1On'] or self._major_tick_kw['label2On']) self.set_ticks_position('right') # if labels were turned off before this was called # leave them off self.set_tick_params(which='both', labelright=label)
Move ticks and ticklabels (if present) to the right of the Axes.
12
31
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def tick_right(self): label = True if 'label1On' in self._major_tick_kw: label = (self._major_tick_kw['label1On'] or self._major_tick_kw['label2On']) self.set_ticks_position('right') # if labels were turned off before this was called # leave them off self.set_tick_params(which='both', labelright=label) ``` ###Assistant : Move ticks and ticklabels (if present) to the right of the Axes.
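The method is normally reached through an Axes' y-axis; a minimal usage sketch:

```Python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
ax.yaxis.tick_right()  # ticks and tick labels move to the right spine
plt.show()
```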
1,323
def test_show_message_twice(view, info1, info2, count): view.show_message(info1) view.show_message(info2) assert len(view._messages) == count
Show the exact same message twice -> only one should be shown.
12
11
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_show_message_twice(view, info1, info2, count): view.show_message(info1) view.show_message(info2) assert len(view._messages) == count ``` ###Assistant : Show the exact same message twice -> only one should be shown.
1,324
def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): bboxes1 = get_box_tensor(bboxes1) bboxes2 = get_box_tensor(bboxes2) assert bboxes1.size(-1) in [0, 4, 5] assert bboxes2.size(-1) in [0, 4, 5] if bboxes2.size(-1) == 5: bboxes2 = bboxes2[..., :4] if bboxes1.size(-1) == 5: bboxes1 = bboxes1[..., :4] if self.dtype == 'fp16': # change tensor type to save cpu and cuda memory and keep speed bboxes1 = cast_tensor_type(bboxes1, self.scale, self.dtype) bboxes2 = cast_tensor_type(bboxes2, self.scale, self.dtype) overlaps = bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) if not overlaps.is_cuda and overlaps.dtype == torch.float16: # resume cpu float32 overlaps = overlaps.float() return overlaps return bbox_overlaps(bboxes1, bboxes2, mode, is_aligned)
Calculate IoU between 2D bboxes. Args: bboxes1 (Tensor or :obj:`BaseBoxes`): bboxes have shape (m, 4) in <x1, y1, x2, y2> format, or shape (m, 5) in <x1, y1, x2, y2, score> format. bboxes2 (Tensor or :obj:`BaseBoxes`): bboxes have shape (m, 4) in <x1, y1, x2, y2> format, shape (m, 5) in <x1, y1, x2, y2, score> format, or be empty. If ``is_aligned `` is ``True``, then m and n must be equal. mode (str): "iou" (intersection over union), "iof" (intersection over foreground), or "giou" (generalized intersection over union). is_aligned (bool, optional): If True, then m and n must be equal. Default False. Returns: Tensor: shape (m, n) if ``is_aligned `` is False else shape (m,)
115
94
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): bboxes1 = get_box_tensor(bboxes1) bboxes2 = get_box_tensor(bboxes2) assert bboxes1.size(-1) in [0, 4, 5] assert bboxes2.size(-1) in [0, 4, 5] if bboxes2.size(-1) == 5: bboxes2 = bboxes2[..., :4] if bboxes1.size(-1) == 5: bboxes1 = bboxes1[..., :4] if self.dtype == 'fp16': # change tensor type to save cpu and cuda memory and keep speed bboxes1 = cast_tensor_type(bboxes1, self.scale, self.dtype) bboxes2 = cast_tensor_type(bboxes2, self.scale, self.dtype) overlaps = bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) if not overlaps.is_cuda and overlaps.dtype == torch.float16: # resume cpu float32 overlaps = overlaps.float() return overlaps return bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) ``` ###Assistant : Calculate IoU between 2D bboxes. Args: bboxes1 (Tensor or :obj:`BaseBoxes`): bboxes have shape (m, 4) in <x1, y1, x2, y2> format, or shape (m, 5) in <x1, y1, x2, y2, score> format. bboxes2 (Tensor or :obj:`BaseBoxes`): bboxes have shape (m, 4) in <x1, y1, x2, y2> format, shape (m, 5) in <x1, y1, x2, y2, score> format, or be empty. If ``is_aligned `` is ``True``, then m and n must be equal. mode (str): "iou" (intersection over union), "iof" (intersection over foreground), or "giou" (generalized intersection over union). is_aligned (bool, optional): If True, then m and n must be equal. Default False. Returns: Tensor: shape (m, n) if ``is_aligned `` is False else shape (m,)
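For intuition, IoU for one pair of `<x1, y1, x2, y2>` boxes reduces to a few lines; a from-scratch sketch (not mmdetection's `bbox_overlaps`):

```Python
import torch

def pair_iou(box1, box2):
    # Intersection corners: max of the top-lefts, min of the bottom-rights.
    x1 = torch.max(box1[0], box2[0])
    y1 = torch.max(box1[1], box2[1])
    x2 = torch.min(box1[2], box2[2])
    y2 = torch.min(box1[3], box2[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

a = torch.tensor([0.0, 0.0, 2.0, 2.0])
b = torch.tensor([1.0, 1.0, 3.0, 3.0])
print(pair_iou(a, b))  # tensor(0.1429), i.e. 1/7
```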
1,325
def binning(self) -> List[List[str]]: return self._binning_linear_threshold(multiplier=100)
Create bins to split linearly from the lowest to the highest sample value Returns ------- list List of bins of filenames
21
6
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def binning(self) -> List[List[str]]: return self._binning_linear_threshold(multiplier=100) ``` ###Assistant : Create bins to split linearly from the lowest to the highest sample value Returns ------- list List of bins of filenames
1,326
def connect(self, publish_port, connect_callback=None, disconnect_callback=None): raise NotImplementedError
Create a network connection to the PublishServer or broker.
10
7
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def connect(self, publish_port, connect_callback=None, disconnect_callback=None): raise NotImplementedError ``` ###Assistant : Create a network connection to the PublishServer or broker.
1,327
def contains(self, g): if not isinstance(g, FreeGroupElement): return False elif self != g.group: return False else: return True
Tests if Free Group element ``g`` belongs to self, ``G``. In mathematical terms any product of the generators (and their inverses) of a Free Group is contained in it. Examples ======== >>> from sympy.combinatorics import free_group >>> f, x, y, z = free_group("x y z") >>> f.contains(x**3*y**2) True
45
18
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def contains(self, g): if not isinstance(g, FreeGroupElement): return False elif self != g.group: return False else: return True ``` ###Assistant : Tests if Free Group element ``g`` belongs to self, ``G``. In mathematical terms any product of the generators (and their inverses) of a Free Group is contained in it. Examples ======== >>> from sympy.combinatorics import free_group >>> f, x, y, z = free_group("x y z") >>> f.contains(x**3*y**2) True
1,328
def get_pledged_security_qty(loan): current_pledges = {} unpledges = frappe._dict( frappe.db.sql( , (loan), ) ) pledges = frappe._dict( frappe.db.sql( , (loan), ) ) for security, qty in pledges.items(): current_pledges.setdefault(security, qty) current_pledges[security] -= unpledges.get(security, 0.0) return current_pledges
SELECT u.loan_security, sum(u.qty) as qty FROM `tabLoan Security Unpledge` up, `tabUnpledge` u WHERE up.loan = %s AND u.parent = up.name AND up.status = 'Approved' GROUP BY u.loan_security SELECT p.loan_security, sum(p.qty) as qty FROM `tabLoan Security Pledge` lp, `tabPledge`p WHERE lp.loan = %s AND p.parent = lp.name AND lp.status = 'Pledged' GROUP BY p.loan_security
53
34
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_pledged_security_qty(loan): current_pledges = {} unpledges = frappe._dict( frappe.db.sql( , (loan), ) ) pledges = frappe._dict( frappe.db.sql( , (loan), ) ) for security, qty in pledges.items(): current_pledges.setdefault(security, qty) current_pledges[security] -= unpledges.get(security, 0.0) return current_pledges ``` ###Assistant : SELECT u.loan_security, sum(u.qty) as qty FROM `tabLoan Security Unpledge` up, `tabUnpledge` u WHERE up.loan = %s AND u.parent = up.name AND up.status = 'Approved' GROUP BY u.loan_security SELECT p.loan_security, sum(p.qty) as qty FROM `tabLoan Security Pledge` lp, `tabPledge`p WHERE lp.loan = %s AND p.parent = lp.name AND lp.status = 'Pledged' GROUP BY p.loan_security
1,329
def get_select(self): select = [] klass_info = None annotations = {} select_idx = 0 for alias, (sql, params) in self.query.extra_select.items(): annotations[alias] = select_idx select.append((RawSQL(sql, params), alias)) select_idx += 1 assert not (self.query.select and self.query.default_cols) if self.query.default_cols: cols = self.get_default_columns() else: # self.query.select is a special case. These columns never go to # any model. cols = self.query.select if cols: select_list = [] for col in cols: select_list.append(select_idx) select.append((col, None)) select_idx += 1 klass_info = { "model": self.query.model, "select_fields": select_list, } for alias, annotation in self.query.annotation_select.items(): annotations[alias] = select_idx select.append((annotation, alias)) select_idx += 1 if self.query.select_related: related_klass_infos = self.get_related_selections(select) klass_info["related_klass_infos"] = related_klass_infos
Return three values: - a list of 3-tuples of (expression, (sql, params), alias) - a klass_info structure, - a dictionary of annotations The (sql, params) is what the expression will produce, and alias is the "AS alias" for the column (possibly None). The klass_info structure contains the following information: - The base model of the query. - Which columns for that model are present in the query (by position of the select clause). - related_klass_infos: [f, klass_info] to descend into The annotations is a dictionary of {'attname': column position} values.
90
101
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_select(self): select = [] klass_info = None annotations = {} select_idx = 0 for alias, (sql, params) in self.query.extra_select.items(): annotations[alias] = select_idx select.append((RawSQL(sql, params), alias)) select_idx += 1 assert not (self.query.select and self.query.default_cols) if self.query.default_cols: cols = self.get_default_columns() else: # self.query.select is a special case. These columns never go to # any model. cols = self.query.select if cols: select_list = [] for col in cols: select_list.append(select_idx) select.append((col, None)) select_idx += 1 klass_info = { "model": self.query.model, "select_fields": select_list, } for alias, annotation in self.query.annotation_select.items(): annotations[alias] = select_idx select.append((annotation, alias)) select_idx += 1 if self.query.select_related: related_klass_infos = self.get_related_selections(select) klass_info["related_klass_infos"] = related_klass_infos ``` ###Assistant : Return three values: - a list of 3-tuples of (expression, (sql, params), alias) - a klass_info structure, - a dictionary of annotations The (sql, params) is what the expression will produce, and alias is the "AS alias" for the column (possibly None). The klass_info structure contains the following information: - The base model of the query. - Which columns for that model are present in the query (by position of the select clause). - related_klass_infos: [f, klass_info] to descend into The annotations is a dictionary of {'attname': column position} values.
1,330
def set_module_collection_mode(self, name, mode): if name is None: name = self.__name__ if mode is None: self._module_collection_mode.pop(name) else: self._module_collection_mode[name] = mode
" Set the package/module collection mode for the specified module name. If `name` is `None`, the hooked module/package name is used. Valid values for `mode` are: `'pyc'`, `'py'`, and `None`.
30
20
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def set_module_collection_mode(self, name, mode): if name is None: name = self.__name__ if mode is None: self._module_collection_mode.pop(name) else: self._module_collection_mode[name] = mode ``` ###Assistant : Set the package/module collection mode for the specified module name. If `name` is `None`, the hooked module/package name is used. Valid values for `mode` are: `'pyc'`, `'py'`, and `None`.
1,331
def write_file (filename, contents): f = open(filename, "w") try: for line in contents: f.write(line + "\n") finally: f.close()
Create a file with the specified name and write 'contents' (a sequence of strings without line terminators) to it.
19
18
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def write_file (filename, contents): f = open(filename, "w") try: for line in contents: f.write(line + "\n") finally: f.close() ``` ###Assistant : Create a file with the specified name and write 'contents' (a sequence of strings without line terminators) to it.
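The try/finally dance predates context managers; the idiomatic modern equivalent uses a `with` block (the demo filename is arbitrary):

```Python
def write_file(filename, contents):
    # `with` closes the file even if a write raises, replacing try/finally.
    with open(filename, "w") as f:
        for line in contents:
            f.write(line + "\n")

write_file("demo.txt", ["first line", "second line"])
```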
1,332
async def test_flow_run_policy_is_backwards_compatible(self): empty_new_policy = schemas.core.FlowRunPolicy() # should not raise an error self.OldFlowRunPolicy(**empty_new_policy.dict())
In version 2.1.1 and prior, the FlowRunPolicy schema required two properties, `max_retries` and `retry_delay_seconds`. These properties are deprecated. This test ensures old clients can load new FlowRunPolicySchemas. It can be removed when the corresponding properties are removed.
37
13
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python async def test_flow_run_policy_is_backwards_compatible(self): empty_new_policy = schemas.core.FlowRunPolicy() # should not raise an error self.OldFlowRunPolicy(**empty_new_policy.dict()) ``` ###Assistant : In version 2.1.1 and prior, the FlowRunPolicy schema required two properties, `max_retries` and `retry_delay_seconds`. These properties are deprecated. This test ensures old clients can load new FlowRunPolicySchemas. It can be removed when the corresponding properties are removed.
1,333
def test_help_text_examples_are_contextualized(): rendered_inline = render(spacing_invalid_value("padding", "inline")) assert "widget.styles.padding" in rendered_inline rendered_css = render(spacing_invalid_value("padding", "css")) assert "padding:" in rendered_css
Ensure that if the user is using CSS, they see CSS-specific examples and if they're using inline styles they see inline-specific examples.
22
18
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_help_text_examples_are_contextualized(): rendered_inline = render(spacing_invalid_value("padding", "inline")) assert "widget.styles.padding" in rendered_inline rendered_css = render(spacing_invalid_value("padding", "css")) assert "padding:" in rendered_css ``` ###Assistant : Ensure that if the user is using CSS, they see CSS-specific examples and if they're using inline styles they see inline-specific examples.
1,334
def test_reverse_proxy(tctx, keep_host_header): server = Placeholder(Server) tctx.options.mode = "reverse:http://localhost:8000" tctx.options.connection_strategy = "lazy" tctx.options.keep_host_header = keep_host_header assert ( Playbook(modes.ReverseProxy(tctx), hooks=False) >> DataReceived( tctx.client, b"GET /foo HTTP/1.1\r\n" b"Host: example.com\r\n\r\n" ) << NextLayerHook(Placeholder(NextLayer)) >> reply_next_layer(lambda ctx: http.HttpLayer(ctx, HTTPMode.transparent)) << OpenConnection(server) >> reply(None) << SendData( server, b"GET /foo HTTP/1.1\r\n" b"Host: " + (b"example.com" if keep_host_header else b"localhost:8000") + b"\r\n\r\n", ) >> DataReceived(server, b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n") << SendData(tctx.client, b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n") ) assert server().address == ("localhost", 8000) @pytest.mark.parametrize("patch", [True, False]) @pytest.mark.parametrize("connection_strategy", ["eager", "lazy"])
Test mitmproxy in reverse proxy mode. - make sure that we connect to the right host - make sure that we respect keep_host_header - make sure that we include non-standard ports in the host header (#4280)
36
80
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_reverse_proxy(tctx, keep_host_header): server = Placeholder(Server) tctx.options.mode = "reverse:http://localhost:8000" tctx.options.connection_strategy = "lazy" tctx.options.keep_host_header = keep_host_header assert ( Playbook(modes.ReverseProxy(tctx), hooks=False) >> DataReceived( tctx.client, b"GET /foo HTTP/1.1\r\n" b"Host: example.com\r\n\r\n" ) << NextLayerHook(Placeholder(NextLayer)) >> reply_next_layer(lambda ctx: http.HttpLayer(ctx, HTTPMode.transparent)) << OpenConnection(server) >> reply(None) << SendData( server, b"GET /foo HTTP/1.1\r\n" b"Host: " + (b"example.com" if keep_host_header else b"localhost:8000") + b"\r\n\r\n", ) >> DataReceived(server, b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n") << SendData(tctx.client, b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n") ) assert server().address == ("localhost", 8000) @pytest.mark.parametrize("patch", [True, False]) @pytest.mark.parametrize("connection_strategy", ["eager", "lazy"]) ``` ###Assistant : Test mitmproxy in reverse proxy mode. - make sure that we connect to the right host - make sure that we respect keep_host_header - make sure that we include non-standard ports in the host header (#4280)
1,335
def _check_filter_horizontal(self, obj): if not isinstance(obj.filter_horizontal, (list, tuple)): return must_be( "a list or tuple", option="filter_horizontal", obj=obj, id="admin.E018" ) else: return list( chain.from_iterable( self._check_filter_item( obj, field_name, "filter_horizontal[%d]" % index ) for index, field_name in enumerate(obj.filter_horizontal) ) )
Check that filter_horizontal is a sequence of field names.
9
36
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _check_filter_horizontal(self, obj): if not isinstance(obj.filter_horizontal, (list, tuple)): return must_be( "a list or tuple", option="filter_horizontal", obj=obj, id="admin.E018" ) else: return list( chain.from_iterable( self._check_filter_item( obj, field_name, "filter_horizontal[%d]" % index ) for index, field_name in enumerate(obj.filter_horizontal) ) ) ``` ###Assistant : Check that filter_horizontal is a sequence of field names.
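For context, a sketch of the configuration this check validates; the model and field names are hypothetical:

```python
# Hypothetical ModelAdmin showing what admin.E018 guards against:
# filter_horizontal must be a list or tuple of field names.
from django.contrib import admin

class BookAdmin(admin.ModelAdmin):
    filter_horizontal = ("authors",)   # valid: a tuple of field names
    # filter_horizontal = "authors"    # invalid: would trigger admin.E018
```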
1,336
def poly_intersection(poly_det, poly_gt): assert isinstance(poly_det, plg.Polygon) assert isinstance(poly_gt, plg.Polygon) poly_inter = poly_det & poly_gt if len(poly_inter) == 0: return 0, poly_inter return poly_inter.area(), poly_inter
Calculate the intersection area between two polygons. Args: poly_det (Polygon): A polygon predicted by a detector. poly_gt (Polygon): A gt polygon. Returns: intersection_area (float): The intersection area between two polygons.
29
24
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def poly_intersection(poly_det, poly_gt): assert isinstance(poly_det, plg.Polygon) assert isinstance(poly_gt, plg.Polygon) poly_inter = poly_det & poly_gt if len(poly_inter) == 0: return 0, poly_inter return poly_inter.area(), poly_inter ``` ###Assistant : Calculate the intersection area between two polygons. Args: poly_det (Polygon): A polygon predicted by a detector. poly_gt (Polygon): A gt polygon. Returns: intersection_area (float): The intersection area between two polygons.
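A small numeric sketch, assuming `plg` is the `Polygon3` package imported as in the snippet (`import Polygon as plg`) and that `poly_intersection` above is in scope: two offset squares share a 1x1 region.

```python
import Polygon as plg  # assumption: the Polygon3 package, as in the snippet

det = plg.Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])  # 2x2 square
gt = plg.Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])   # shifted 2x2 square
area, inter = poly_intersection(det, gt)
print(area)  # 1.0 — the shared 1x1 region
```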
1,337
def size(self) -> int: return sum(len(x.data) for x in [*self.answers, *self.authorities, *self.additionals])
Returns the cumulative data size of all resource record sections.
10
12
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def size(self) -> int: return sum(len(x.data) for x in [*self.answers, *self.authorities, *self.additionals]) ``` ###Assistant : Returns the cumulative data size of all resource record sections.
1,338
async def get_job_submission_info(self): jobs = {} fetched_jobs = await self._job_info_client.get_all_jobs() for ( job_submission_id, job_info, ) in fetched_jobs.items(): if job_info is not None: entry = { "job_submission_id": job_submission_id, "status": job_info.status, "message": job_info.message, "error_type": job_info.error_type, "start_time": job_info.start_time, "end_time": job_info.end_time, "metadata": job_info.metadata, "runtime_env": job_info.runtime_env, "entrypoint": job_info.entrypoint, } jobs[job_submission_id] = entry return jobs
Info for Ray job submission. Here a job can have 0 or many drivers.
14
49
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python async def get_job_submission_info(self): jobs = {} fetched_jobs = await self._job_info_client.get_all_jobs() for ( job_submission_id, job_info, ) in fetched_jobs.items(): if job_info is not None: entry = { "job_submission_id": job_submission_id, "status": job_info.status, "message": job_info.message, "error_type": job_info.error_type, "start_time": job_info.start_time, "end_time": job_info.end_time, "metadata": job_info.metadata, "runtime_env": job_info.runtime_env, "entrypoint": job_info.entrypoint, } jobs[job_submission_id] = entry return jobs ``` ###Assistant : Info for Ray job submission. Here a job can have 0 or many drivers.
1,339
def _gen_sieve_array(M, factor_base): sieve_array = [0]*(2*M + 1) for factor in factor_base: if factor.soln1 is None: # the prime does not divide a continue for idx in range((M + factor.soln1) % factor.prime, 2*M, factor.prime): sieve_array[idx] += factor.log_p if factor.prime == 2: continue # if the prime is 2, sieve only with soln1 for idx in range((M + factor.soln2) % factor.prime, 2*M, factor.prime): sieve_array[idx] += factor.log_p return sieve_array
Sieve Stage of the Quadratic Sieve. For every prime in the factor_base that does not divide the coefficient `a` we add log_p over the sieve_array such that ``-M <= soln1 + i*p <= M`` and ``-M <= soln2 + i*p <= M`` where `i` is an integer. When p = 2 then log_p is only added using ``-M <= soln1 + i*p <= M``. Parameters ========== M : sieve interval factor_base : factor_base primes
74
65
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _gen_sieve_array(M, factor_base): sieve_array = [0]*(2*M + 1) for factor in factor_base: if factor.soln1 is None: # the prime does not divide a continue for idx in range((M + factor.soln1) % factor.prime, 2*M, factor.prime): sieve_array[idx] += factor.log_p if factor.prime == 2: continue # if the prime is 2, sieve only with soln1 for idx in range((M + factor.soln2) % factor.prime, 2*M, factor.prime): sieve_array[idx] += factor.log_p return sieve_array ``` ###Assistant : Sieve Stage of the Quadratic Sieve. For every prime in the factor_base that does not divide the coefficient `a` we add log_p over the sieve_array such that ``-M <= soln1 + i*p <= M`` and ``-M <= soln2 + i*p <= M`` where `i` is an integer. When p = 2 then log_p is only added using ``-M <= soln1 + i*p <= M``. Parameters ========== M : sieve interval factor_base : factor_base primes
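A stripped-down sketch of the sieving idea, independent of SymPy's internal factor-base objects; the primes and roots below are made up purely for illustration:

```python
import math

M = 10                                       # sieve half-interval
fake_factor_base = [(3, 1, 2), (5, 2, 3)]    # (prime, soln1, soln2) — made up
sieve = [0.0] * (2 * M + 1)
for p, s1, s2 in fake_factor_base:
    solns = (s1,) if p == 2 else (s1, s2)    # p = 2 sieves with soln1 only
    for s in solns:
        for idx in range((M + s) % p, 2 * M + 1, p):
            sieve[idx] += math.log(p)        # accumulate log p at hit positions
```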
1,340
def rescue_docarray(): try: import docarray as docarray __docarray_version__ = docarray.__version__ except AttributeError: # Being here means docarray is not installed correctly, attempt to reinstall it # as recommended by pip https://pip.pypa.io/en/latest/user_guide/#using-pip-from-your-program import subprocess subprocess.check_call( [sys.executable, '-m', 'pip', 'uninstall', '--yes', 'docarray'] ) subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'docarray'])
Upgrading from 2.x to 3.x is broken (https://github.com/jina-ai/jina/issues/4194) This function checks if docarray is broken and if so attempts to rescue it
22
46
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def rescue_docarray(): try: import docarray as docarray __docarray_version__ = docarray.__version__ except AttributeError: # Being here means docarray is not installed correctly, attempt to reinstall it # as recommended by pip https://pip.pypa.io/en/latest/user_guide/#using-pip-from-your-program import subprocess subprocess.check_call( [sys.executable, '-m', 'pip', 'uninstall', '--yes', 'docarray'] ) subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'docarray']) ``` ###Assistant : Upgrading from 2.x to 3.x is broken (https://github.com/jina-ai/jina/issues/4194) This function checks if docarray is broken and if so attempts to rescue it
1,341
def transform(self, X): msg = ( "%(name)s is not fitted. Call fit to set the parameters before" " calling transform" ) check_is_fitted(self, msg=msg) X = self._validate_data(X, accept_sparse="csr", reset=False) check_non_negative(X, "X in AdditiveChi2Sampler.transform") sparse = sp.issparse(X) # zeroth component # 1/cosh = sech # cosh(0) = 1.0 transf = self._transform_sparse if sparse else self._transform_dense return transf(X)
Apply approximate feature map to X. Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) Training data, where `n_samples` is the number of samples and `n_features` is the number of features. Returns ------- X_new : {ndarray, sparse matrix}, \ shape = (n_samples, n_features * (2*sample_steps - 1)) Whether the return value is an array or sparse matrix depends on the type of the input X.
66
55
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def transform(self, X): msg = ( "%(name)s is not fitted. Call fit to set the parameters before" " calling transform" ) check_is_fitted(self, msg=msg) X = self._validate_data(X, accept_sparse="csr", reset=False) check_non_negative(X, "X in AdditiveChi2Sampler.transform") sparse = sp.issparse(X) # zeroth component # 1/cosh = sech # cosh(0) = 1.0 transf = self._transform_sparse if sparse else self._transform_dense return transf(X) ``` ###Assistant : Apply approximate feature map to X. Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) Training data, where `n_samples` is the number of samples and `n_features` is the number of features. Returns ------- X_new : {ndarray, sparse matrix}, \ shape = (n_samples, n_features * (2*sample_steps - 1)) Whether the return value is an array or sparse matrix depends on the type of the input X.
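The method backs scikit-learn's public `AdditiveChi2Sampler`; a short usage sketch with non-negative inputs:

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler

X = np.abs(np.random.RandomState(0).randn(5, 4))   # input must be non-negative
sampler = AdditiveChi2Sampler(sample_steps=2).fit(X)
X_new = sampler.transform(X)
print(X_new.shape)  # (5, 12) == (n_samples, n_features * (2*sample_steps - 1))
```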
1,342
def get_repository_root(cls, location): # type: (str) -> Optional[str] if cls.is_repository_directory(location): return location return None
Return the "root" (top-level) directory controlled by the vcs, or `None` if the directory is not in any. It is meant to be overridden to implement smarter detection mechanisms for specific vcs. This can do more than is_repository_directory() alone. For example, the Git override checks that Git is actually available.
50
14
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_repository_root(cls, location): # type: (str) -> Optional[str] if cls.is_repository_directory(location): return location return None ``` ###Assistant : Return the "root" (top-level) directory controlled by the vcs, or `None` if the directory is not in any. It is meant to be overridden to implement smarter detection mechanisms for specific vcs. This can do more than is_repository_directory() alone. For example, the Git override checks that Git is actually available.
1,343
def test_it_should_not_read_quotes_stream_if_it_does_not_exist_in_client(oauth_config, configured_catalog): source = SourceHubspot() all_records = list(source.read(logger, config=oauth_config, catalog=configured_catalog, state=None)) records = [record for record in all_records if record.type == Type.RECORD] assert not records
If 'quotes' stream is not in the client, it should skip it.
12
26
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_it_should_not_read_quotes_stream_if_it_does_not_exist_in_client(oauth_config, configured_catalog): source = SourceHubspot() all_records = list(source.read(logger, config=oauth_config, catalog=configured_catalog, state=None)) records = [record for record in all_records if record.type == Type.RECORD] assert not records ``` ###Assistant : If 'quotes' stream is not in the client, it should skip it.
1,344
def get_conda_env_dir(env_name): conda_prefix = os.environ.get("CONDA_PREFIX") if conda_prefix is None: # The caller is neither in a conda env nor in the (base) env. This is rare # because by default, new terminals start in (base), but we can still # support this case. conda_exe = os.environ.get("CONDA_EXE") if conda_exe is None: raise ValueError( "Cannot find environment variables set by conda. " "Please verify conda is installed." ) # Example: CONDA_EXE=$HOME/anaconda3/bin/python # Strip out /bin/python by going up two parent directories. conda_prefix = str(Path(conda_exe).parent.parent) # There are two cases: # 1. We are in a conda (base) env: CONDA_DEFAULT_ENV=base and # CONDA_PREFIX=$HOME/anaconda3 # 2. We are in a user-created conda env: CONDA_DEFAULT_ENV=$env_name and # CONDA_PREFIX=$HOME/anaconda3/envs/$current_env_name if os.environ.get("CONDA_DEFAULT_ENV") == "base": # Caller's current environment is (base). # Not recommended by conda, but we can still support it. if env_name == "base": # Desired environment is (base), located at e.g. $HOME/anaconda3 env_dir = conda_prefix else: # Desired environment is user-created, e.g. # $HOME/anaconda3/envs/$env_name env_dir = os.path.join(conda_prefix, "envs", env_name) else: # Now `conda_prefix` should be something like # $HOME/anaconda3/envs/$current_env_name # We want to replace the last component with the desired env name. conda_envs_dir = os.path.split(conda_prefix)[0] env_dir = os.path.join(conda_envs_dir, env_name) if not os.path.isdir(env_dir): raise ValueError( "conda env " + env_name + " not found in conda envs directory. Run `conda env list` to " + "verify the name is correct." ) return env_dir
Find and validate the conda directory for a given conda environment. For example, given the environment name `tf1`, this function checks the existence of the corresponding conda directory, e.g. `/Users/scaly/anaconda3/envs/tf1`, and returns it.
33
226
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_conda_env_dir(env_name): conda_prefix = os.environ.get("CONDA_PREFIX") if conda_prefix is None: # The caller is neither in a conda env nor in the (base) env. This is rare # because by default, new terminals start in (base), but we can still # support this case. conda_exe = os.environ.get("CONDA_EXE") if conda_exe is None: raise ValueError( "Cannot find environment variables set by conda. " "Please verify conda is installed." ) # Example: CONDA_EXE=$HOME/anaconda3/bin/python # Strip out /bin/python by going up two parent directories. conda_prefix = str(Path(conda_exe).parent.parent) # There are two cases: # 1. We are in a conda (base) env: CONDA_DEFAULT_ENV=base and # CONDA_PREFIX=$HOME/anaconda3 # 2. We are in a user-created conda env: CONDA_DEFAULT_ENV=$env_name and # CONDA_PREFIX=$HOME/anaconda3/envs/$current_env_name if os.environ.get("CONDA_DEFAULT_ENV") == "base": # Caller's current environment is (base). # Not recommended by conda, but we can still support it. if env_name == "base": # Desired environment is (base), located at e.g. $HOME/anaconda3 env_dir = conda_prefix else: # Desired environment is user-created, e.g. # $HOME/anaconda3/envs/$env_name env_dir = os.path.join(conda_prefix, "envs", env_name) else: # Now `conda_prefix` should be something like # $HOME/anaconda3/envs/$current_env_name # We want to replace the last component with the desired env name. conda_envs_dir = os.path.split(conda_prefix)[0] env_dir = os.path.join(conda_envs_dir, env_name) if not os.path.isdir(env_dir): raise ValueError( "conda env " + env_name + " not found in conda envs directory. Run `conda env list` to " + "verify the name is correct." ) return env_dir ``` ###Assistant : Find and validate the conda directory for a given conda environment. For example, given the environment name `tf1`, this function checks the existence of the corresponding conda directory, e.g. `/Users/scaly/anaconda3/envs/tf1`, and returns it.
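A minimal usage sketch, assuming a conda installation and that the function above is in scope; the env name is hypothetical:

```python
# Resolve the on-disk directory of a named conda env, e.g.
# ~/anaconda3/envs/tf1; raises ValueError if the env does not exist.
env_dir = get_conda_env_dir("tf1")
```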
1,345
async def test_switch_change_outlet_state(hass, utcnow): helper = await setup_test_component(hass, create_switch_service) await hass.services.async_call( "switch", "turn_on", {"entity_id": "switch.testdevice"}, blocking=True ) helper.async_assert_service_values( ServicesTypes.OUTLET, { CharacteristicsTypes.ON: 1, }, ) await hass.services.async_call( "switch", "turn_off", {"entity_id": "switch.testdevice"}, blocking=True ) helper.async_assert_service_values( ServicesTypes.OUTLET, { CharacteristicsTypes.ON: 0, }, )
Test that we can turn a HomeKit outlet on and off again.
12
39
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python async def test_switch_change_outlet_state(hass, utcnow): helper = await setup_test_component(hass, create_switch_service) await hass.services.async_call( "switch", "turn_on", {"entity_id": "switch.testdevice"}, blocking=True ) helper.async_assert_service_values( ServicesTypes.OUTLET, { CharacteristicsTypes.ON: 1, }, ) await hass.services.async_call( "switch", "turn_off", {"entity_id": "switch.testdevice"}, blocking=True ) helper.async_assert_service_values( ServicesTypes.OUTLET, { CharacteristicsTypes.ON: 0, }, ) ``` ###Assistant : Test that we can turn a HomeKit outlet on and off again.
1,346
def _update_label_position(self, renderer): if not self._autolabelpos: return # get bounding boxes for this axis and any siblings # that have been set by `fig.align_xlabels()` bboxes, bboxes2 = self._get_tick_boxes_siblings(renderer=renderer) x, y = self.label.get_position() if self.label_position == 'bottom': try: spine = self.axes.spines['bottom'] spinebbox = spine.get_window_extent() except KeyError: # use axes if spine doesn't exist spinebbox = self.axes.bbox bbox = mtransforms.Bbox.union(bboxes + [spinebbox]) bottom = bbox.y0 self.label.set_position( (x, bottom - self.labelpad * self.figure.dpi / 72) ) else: try: spine = self.axes.spines['top'] spinebbox = spine.get_window_extent() except KeyError: # use axes if spine doesn't exist spinebbox = self.axes.bbox bbox = mtransforms.Bbox.union(bboxes2 + [spinebbox]) top = bbox.y1 self.label.set_position( (x, top + self.labelpad * self.figure.dpi / 72) )
Update the label position based on the bounding box enclosing all the ticklabels and axis spine
16
111
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _update_label_position(self, renderer): if not self._autolabelpos: return # get bounding boxes for this axis and any siblings # that have been set by `fig.align_xlabels()` bboxes, bboxes2 = self._get_tick_boxes_siblings(renderer=renderer) x, y = self.label.get_position() if self.label_position == 'bottom': try: spine = self.axes.spines['bottom'] spinebbox = spine.get_window_extent() except KeyError: # use axes if spine doesn't exist spinebbox = self.axes.bbox bbox = mtransforms.Bbox.union(bboxes + [spinebbox]) bottom = bbox.y0 self.label.set_position( (x, bottom - self.labelpad * self.figure.dpi / 72) ) else: try: spine = self.axes.spines['top'] spinebbox = spine.get_window_extent() except KeyError: # use axes if spine doesn't exist spinebbox = self.axes.bbox bbox = mtransforms.Bbox.union(bboxes2 + [spinebbox]) top = bbox.y1 self.label.set_position( (x, top + self.labelpad * self.figure.dpi / 72) ) ``` ###Assistant : Update the label position based on the bounding box enclosing all the ticklabels and axis spine
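For context, a sketch of the public Matplotlib API that exercises this private helper: aligning x-labels across sibling Axes is what puts entries into the label groups it consults.

```python
import matplotlib.pyplot as plt

fig, axs = plt.subplots(2, 1)
axs[0].plot([1, 2, 3]); axs[0].set_xlabel("time (s)", labelpad=12)
axs[1].plot([3, 2, 1]); axs[1].set_xlabel("time (s)")
fig.align_xlabels(axs)        # registers the axes as label siblings
fig.savefig("aligned.png")    # label positions resolved at draw time
```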
1,347
def poly_union(poly_det, poly_gt): assert isinstance(poly_det, plg.Polygon) assert isinstance(poly_gt, plg.Polygon) area_det = poly_det.area() area_gt = poly_gt.area() area_inters, _ = poly_intersection(poly_det, poly_gt) return area_det + area_gt - area_inters
Calculate the union area between two polygon. Args: poly_det (Polygon): A polygon predicted by detector. poly_gt (Polygon): A gt polygon. Returns: union_area (float): The union area between two polygons.
29
26
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def poly_union(poly_det, poly_gt): assert isinstance(poly_det, plg.Polygon) assert isinstance(poly_gt, plg.Polygon) area_det = poly_det.area() area_gt = poly_gt.area() area_inters, _ = poly_intersection(poly_det, poly_gt) return area_det + area_gt - area_inters ``` ###Assistant : Calculate the union area between two polygon. Args: poly_det (Polygon): A polygon predicted by detector. poly_gt (Polygon): A gt polygon. Returns: union_area (float): The union area between two polygons.
1,348
def test_whitelist_idna_result(self) -> None: config: JsonDict = { "federation_certificate_verification_whitelist": [ "example.com", "*.xn--eckwd4c7c.xn--zckzah", ] } t = TestConfig() t.tls.read_config(config, config_dir_path="", data_dir_path="") cf = FederationPolicyForHTTPS(cast(HomeServerConfig, t)) # Not in the whitelist opts = cf.get_options(b"notexample.com") assert isinstance(opts, SSLClientConnectionCreator) self.assertTrue(opts._verifier._verify_certs) # Caught by the wildcard opts = cf.get_options(idna.encode("テスト.ドメイン.テスト")) assert isinstance(opts, SSLClientConnectionCreator) self.assertFalse(opts._verifier._verify_certs)
The federation certificate whitelist will match on IDNA encoded names.
10
48
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_whitelist_idna_result(self) -> None: config: JsonDict = { "federation_certificate_verification_whitelist": [ "example.com", "*.xn--eckwd4c7c.xn--zckzah", ] } t = TestConfig() t.tls.read_config(config, config_dir_path="", data_dir_path="") cf = FederationPolicyForHTTPS(cast(HomeServerConfig, t)) # Not in the whitelist opts = cf.get_options(b"notexample.com") assert isinstance(opts, SSLClientConnectionCreator) self.assertTrue(opts._verifier._verify_certs) # Caught by the wildcard opts = cf.get_options(idna.encode("テスト.ドメイン.テスト")) assert isinstance(opts, SSLClientConnectionCreator) self.assertFalse(opts._verifier._verify_certs) ``` ###Assistant : The federation certificate whitelist will match on IDNA encoded names.
1,349
def simple_test_rpn(self, x, img_metas): rpn_outs = self(x) proposal_list = self.get_results(*rpn_outs, img_metas=img_metas) return proposal_list
Test without augmentation, only for ``RPNHead`` and its variants, e.g., ``GARPNHead``, etc. Args: x (tuple[Tensor]): Features from the upstream network, each is a 4D-tensor. img_metas (list[dict]): Meta info of each image. Returns: list[Tensor]: Proposals of each image, each item has shape (n, 5), where 5 represent (tl_x, tl_y, br_x, br_y, score).
51
13
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def simple_test_rpn(self, x, img_metas): rpn_outs = self(x) proposal_list = self.get_results(*rpn_outs, img_metas=img_metas) return proposal_list ``` ###Assistant : Test without augmentation, only for ``RPNHead`` and its variants, e.g., ``GARPNHead``, etc. Args: x (tuple[Tensor]): Features from the upstream network, each is a 4D-tensor. img_metas (list[dict]): Meta info of each image. Returns: list[Tensor]: Proposals of each image, each item has shape (n, 5), where 5 represent (tl_x, tl_y, br_x, br_y, score).
1,350
def _get_tick_boxes_siblings(self, renderer): # Get the Grouper keeping track of x or y label groups for this figure. axis_names = [ name for name, axis in self.axes._get_axis_map().items() if name in self.figure._align_label_groups and axis is self] if len(axis_names) != 1: return [], [] axis_name, = axis_names grouper = self.figure._align_label_groups[axis_name] bboxes = [] bboxes2 = [] # If we want to align labels from other Axes: for ax in grouper.get_siblings(self.axes): axis = getattr(ax, f"{axis_name}axis") ticks_to_draw = axis._update_ticks() tlb, tlb2 = axis._get_ticklabel_bboxes(ticks_to_draw, renderer) bboxes.extend(tlb) bboxes2.extend(tlb2) return bboxes, bboxes2
Get the bounding boxes for this `.axis` and its siblings as set by `.Figure.align_xlabels` or `.Figure.align_ylabels`. By default it just gets bboxes for self.
24
85
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _get_tick_boxes_siblings(self, renderer): # Get the Grouper keeping track of x or y label groups for this figure. axis_names = [ name for name, axis in self.axes._get_axis_map().items() if name in self.figure._align_label_groups and axis is self] if len(axis_names) != 1: return [], [] axis_name, = axis_names grouper = self.figure._align_label_groups[axis_name] bboxes = [] bboxes2 = [] # If we want to align labels from other Axes: for ax in grouper.get_siblings(self.axes): axis = getattr(ax, f"{axis_name}axis") ticks_to_draw = axis._update_ticks() tlb, tlb2 = axis._get_ticklabel_bboxes(ticks_to_draw, renderer) bboxes.extend(tlb) bboxes2.extend(tlb2) return bboxes, bboxes2 ``` ###Assistant : Get the bounding boxes for this `.axis` and its siblings as set by `.Figure.align_xlabels` or `.Figure.align_ylabels`. By default it just gets bboxes for self.
1,351
def default(method): method._is_default = True # pylint: disable=protected-access return method
Decorates a method to detect overrides in subclasses.
8
10
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def default(method): method._is_default = True # pylint: disable=protected-access return method ``` ###Assistant : Decorates a method to detect overrides in subclasses.
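A minimal sketch of how such a marker decorator is typically consumed: the base class tags a method as default, and callers inspect `_is_default` to detect subclass overrides. Assumes the `default` decorator above is in scope.

```python
class Base:
    @default
    def render(self):
        return "base"

class Child(Base):
    def render(self):   # the override is a new function without the marker
        return "child"

print(getattr(Base.render, "_is_default", False))   # True  -> default impl
print(getattr(Child.render, "_is_default", False))  # False -> overridden
```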
1,352
def test_help_tooltip(self): st.camera_input("the label", help="help_label") c = self.get_delta_from_queue().new_element.camera_input self.assertEqual(c.help, "help_label")
Test that the help tooltip is passed through to the element.
12
10
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_help_tooltip(self): st.camera_input("the label", help="help_label") c = self.get_delta_from_queue().new_element.camera_input self.assertEqual(c.help, "help_label") ``` ###Assistant : Test that the help tooltip is passed through to the element.
1,353
async def test_multiple_event_images(hass, auth): subscriber = await async_setup_camera(hass, DEVICE_TRAITS, auth=auth) assert len(hass.states.async_all()) == 1 assert hass.states.get("camera.my_camera") event_timestamp = utcnow() await subscriber.async_receive_event( make_motion_event(event_session_id="event-session-1", timestamp=event_timestamp) ) await hass.async_block_till_done() auth.responses = [ # Fake response from API that returns url image aiohttp.web.json_response(GENERATE_IMAGE_URL_RESPONSE), # Fake response for the image content fetch aiohttp.web.Response(body=IMAGE_BYTES_FROM_EVENT), # Image is refetched after being cleared by expiration alarm aiohttp.web.json_response(GENERATE_IMAGE_URL_RESPONSE), aiohttp.web.Response(body=b"updated image bytes"), ] image = await async_get_image(hass) assert image.content == IMAGE_BYTES_FROM_EVENT next_event_timestamp = event_timestamp + datetime.timedelta(seconds=25) await subscriber.async_receive_event( make_motion_event( event_id="updated-event-id", event_session_id="event-session-2", timestamp=next_event_timestamp, ) ) await hass.async_block_till_done() image = await async_get_image(hass) assert image.content == b"updated image bytes"
Test fallback for an event image that has been cleaned up on expiration.
14
96
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python async def test_multiple_event_images(hass, auth): subscriber = await async_setup_camera(hass, DEVICE_TRAITS, auth=auth) assert len(hass.states.async_all()) == 1 assert hass.states.get("camera.my_camera") event_timestamp = utcnow() await subscriber.async_receive_event( make_motion_event(event_session_id="event-session-1", timestamp=event_timestamp) ) await hass.async_block_till_done() auth.responses = [ # Fake response from API that returns url image aiohttp.web.json_response(GENERATE_IMAGE_URL_RESPONSE), # Fake response for the image content fetch aiohttp.web.Response(body=IMAGE_BYTES_FROM_EVENT), # Image is refetched after being cleared by expiration alarm aiohttp.web.json_response(GENERATE_IMAGE_URL_RESPONSE), aiohttp.web.Response(body=b"updated image bytes"), ] image = await async_get_image(hass) assert image.content == IMAGE_BYTES_FROM_EVENT next_event_timestamp = event_timestamp + datetime.timedelta(seconds=25) await subscriber.async_receive_event( make_motion_event( event_id="updated-event-id", event_session_id="event-session-2", timestamp=next_event_timestamp, ) ) await hass.async_block_till_done() image = await async_get_image(hass) assert image.content == b"updated image bytes" ``` ###Assistant : Test fallback for an event image that has been cleaned up on expiration.
1,354
def test_delete_same_room_twice(self) -> None: body = {"new_room_user_id": self.admin_user} # first call to delete room # and do not wait for finish the task first_channel = self.make_request( "DELETE", self.url.encode("ascii"), content=body, access_token=self.admin_user_tok, await_result=False, ) # second call to delete room second_channel = self.make_request( "DELETE", self.url.encode("ascii"), content=body, access_token=self.admin_user_tok, ) self.assertEqual( HTTPStatus.BAD_REQUEST, second_channel.code, msg=second_channel.json_body ) self.assertEqual(Codes.UNKNOWN, second_channel.json_body["errcode"]) self.assertEqual( f"History purge already in progress for {self.room_id}", second_channel.json_body["error"], ) # get result of first call first_channel.await_result() self.assertEqual(200, first_channel.code, msg=first_channel.json_body) self.assertIn("delete_id", first_channel.json_body) # check status after finish the task self._test_result( first_channel.json_body["delete_id"], self.other_user, expect_new_room=True, )
Test that a second call to delete the same room returns an error.
14
87
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_delete_same_room_twice(self) -> None: body = {"new_room_user_id": self.admin_user} # first call to delete room # and do not wait for finish the task first_channel = self.make_request( "DELETE", self.url.encode("ascii"), content=body, access_token=self.admin_user_tok, await_result=False, ) # second call to delete room second_channel = self.make_request( "DELETE", self.url.encode("ascii"), content=body, access_token=self.admin_user_tok, ) self.assertEqual( HTTPStatus.BAD_REQUEST, second_channel.code, msg=second_channel.json_body ) self.assertEqual(Codes.UNKNOWN, second_channel.json_body["errcode"]) self.assertEqual( f"History purge already in progress for {self.room_id}", second_channel.json_body["error"], ) # get result of first call first_channel.await_result() self.assertEqual(200, first_channel.code, msg=first_channel.json_body) self.assertIn("delete_id", first_channel.json_body) # check status after finish the task self._test_result( first_channel.json_body["delete_id"], self.other_user, expect_new_room=True, ) ``` ###Assistant : Test that a second call to delete the same room returns an error.
1,355
async def _async_refresh_device_detail_by_ids(self, device_ids_list): for device_id in device_ids_list: try: await self._async_refresh_device_detail_by_id(device_id) except asyncio.TimeoutError: _LOGGER.warning( "Timed out calling august api during refresh of device: %s", device_id, ) except (ClientResponseError, CannotConnect) as err: _LOGGER.warning( "Error from august api during refresh of device: %s", device_id, exc_info=err, )
Refresh each device in sequence. This used to be a gather but it was less reliable with august's recent api changes. The august api has been timing out for some devices so we want the ones that it isn't timing out for to keep working.
45
44
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python async def _async_refresh_device_detail_by_ids(self, device_ids_list): for device_id in device_ids_list: try: await self._async_refresh_device_detail_by_id(device_id) except asyncio.TimeoutError: _LOGGER.warning( "Timed out calling august api during refresh of device: %s", device_id, ) except (ClientResponseError, CannotConnect) as err: _LOGGER.warning( "Error from august api during refresh of device: %s", device_id, exc_info=err, ) ``` ###Assistant : Refresh each device in sequence. This used to be a gather but it was less reliable with august's recent api changes. The august api has been timing out for some devices so we want the ones that it isn't timing out for to keep working.
1,356
def masks_to_boxes(masks): if masks.size == 0: return np.zeros((0, 4)) h, w = masks.shape[-2:] y = np.arange(0, h, dtype=np.float32) x = np.arange(0, w, dtype=np.float32) # see https://github.com/pytorch/pytorch/issues/50276 y, x = np.meshgrid(y, x, indexing="ij") x_mask = masks * np.expand_dims(x, axis=0) x_max = x_mask.reshape(x_mask.shape[0], -1).max(-1) x = np.ma.array(x_mask, mask=~(np.array(masks, dtype=bool))) x_min = x.filled(fill_value=1e8) x_min = x_min.reshape(x_min.shape[0], -1).min(-1) y_mask = masks * np.expand_dims(y, axis=0) y_max = y_mask.reshape(x_mask.shape[0], -1).max(-1) y = np.ma.array(y_mask, mask=~(np.array(masks, dtype=bool))) y_min = y.filled(fill_value=1e8) y_min = y_min.reshape(y_min.shape[0], -1).min(-1) return np.stack([x_min, y_min, x_max, y_max], 1) # Copied from transformers.models.detr.feature_extraction_detr.rgb_to_id
Compute the bounding boxes around the provided panoptic segmentation masks. The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions. Returns a [N, 4] tensor, with the boxes in corner (xyxy) format.
44
86
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def masks_to_boxes(masks): if masks.size == 0: return np.zeros((0, 4)) h, w = masks.shape[-2:] y = np.arange(0, h, dtype=np.float32) x = np.arange(0, w, dtype=np.float32) # see https://github.com/pytorch/pytorch/issues/50276 y, x = np.meshgrid(y, x, indexing="ij") x_mask = masks * np.expand_dims(x, axis=0) x_max = x_mask.reshape(x_mask.shape[0], -1).max(-1) x = np.ma.array(x_mask, mask=~(np.array(masks, dtype=bool))) x_min = x.filled(fill_value=1e8) x_min = x_min.reshape(x_min.shape[0], -1).min(-1) y_mask = masks * np.expand_dims(y, axis=0) y_max = y_mask.reshape(x_mask.shape[0], -1).max(-1) y = np.ma.array(y_mask, mask=~(np.array(masks, dtype=bool))) y_min = y.filled(fill_value=1e8) y_min = y_min.reshape(y_min.shape[0], -1).min(-1) return np.stack([x_min, y_min, x_max, y_max], 1) # Copied from transformers.models.detr.feature_extraction_detr.rgb_to_id ``` ###Assistant : Compute the bounding boxes around the provided panoptic segmentation masks. The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions. Returns a [N, 4] tensor, with the boxes in corner (xyxy) format.
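A quick numeric check of the conversion, assuming `masks_to_boxes` above is in scope: a single 4x4 mask with a 2x2 blob should yield the box [1, 1, 2, 2] in (x_min, y_min, x_max, y_max).

```python
import numpy as np

mask = np.zeros((1, 4, 4))
mask[0, 1:3, 1:3] = 1          # 2x2 blob with corners at (1, 1) and (2, 2)
print(masks_to_boxes(mask))    # [[1. 1. 2. 2.]]
```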
1,357
def _roots_with_zeros(p, num_leading_zeros): # Avoid lapack errors when p is all zero p = _where(len(p) == num_leading_zeros, 1.0, p) # Roll any leading zeros to the end & compute the roots roots = _roots_no_zeros(roll(p, -num_leading_zeros)) # Sort zero roots to the end. roots = lax.sort_key_val(roots == 0, roots)[1] # Set roots associated with num_leading_zeros to NaN return _where(arange(roots.size) < roots.size - num_leading_zeros, roots, complex(np.nan, np.nan)) @_wraps(np.roots, lax_description=, extra_params=)
\ Unlike the numpy version of this function, the JAX version returns the roots in a complex array regardless of the values of the roots. Additionally, the jax version of this function adds the ``strip_zeros`` argument, which must be set to False for the function to be compatible with JIT and other JAX transformations. With ``strip_zeros=False``, if your coefficients have leading zeros, the roots will be padded with NaN values: >>> coeffs = jnp.array([0, 1, 2]) # The default behavior matches numpy and strips leading zeros: >>> jnp.roots(coeffs) DeviceArray([-2.+0.j], dtype=complex64) # With strip_zeros=False, extra roots are set to NaN: >>> jnp.roots(coeffs, strip_zeros=False) DeviceArray([-2. +0.j, nan+nanj], dtype=complex64) strip_zeros : bool, default=True If set to True, then leading zeros in the coefficients will be stripped, similar to :func:`numpy.roots`. If set to False, leading zeros will not be stripped, and undefined roots will be represented by NaN values in the function output. ``strip_zeros`` must be set to ``False`` for the function to be compatible with :func:`jax.jit` and other JAX transformations.
167
68
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _roots_with_zeros(p, num_leading_zeros): # Avoid lapack errors when p is all zero p = _where(len(p) == num_leading_zeros, 1.0, p) # Roll any leading zeros to the end & compute the roots roots = _roots_no_zeros(roll(p, -num_leading_zeros)) # Sort zero roots to the end. roots = lax.sort_key_val(roots == 0, roots)[1] # Set roots associated with num_leading_zeros to NaN return _where(arange(roots.size) < roots.size - num_leading_zeros, roots, complex(np.nan, np.nan)) @_wraps(np.roots, lax_description=, extra_params=) ``` ###Assistant : \ Unlike the numpy version of this function, the JAX version returns the roots in a complex array regardless of the values of the roots. Additionally, the jax version of this function adds the ``strip_zeros`` argument, which must be set to False for the function to be compatible with JIT and other JAX transformations. With ``strip_zeros=False``, if your coefficients have leading zeros, the roots will be padded with NaN values: >>> coeffs = jnp.array([0, 1, 2]) # The default behavior matches numpy and strips leading zeros: >>> jnp.roots(coeffs) DeviceArray([-2.+0.j], dtype=complex64) # With strip_zeros=False, extra roots are set to NaN: >>> jnp.roots(coeffs, strip_zeros=False) DeviceArray([-2. +0.j, nan+nanj], dtype=complex64) strip_zeros : bool, default=True If set to True, then leading zeros in the coefficients will be stripped, similar to :func:`numpy.roots`. If set to False, leading zeros will not be stripped, and undefined roots will be represented by NaN values in the function output. ``strip_zeros`` must be set to ``False`` for the function to be compatible with :func:`jax.jit` and other JAX transformations.
1,358
def port_monitoring(self) -> int: if GATEWAY_NAME in self._deployment_nodes: return self[GATEWAY_NAME].args.port_monitoring else: return self._common_kwargs.get( 'port_monitoring', __default_port_monitoring__ )
Return the port used for monitoring .. # noqa: DAR201
10
16
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def port_monitoring(self) -> int: if GATEWAY_NAME in self._deployment_nodes: return self[GATEWAY_NAME].args.port_monitoring else: return self._common_kwargs.get( 'port_monitoring', __default_port_monitoring__ ) ``` ###Assistant : Return the port used for monitoring .. # noqa: DAR201
1,359
def autodiscover_modules(*args, **kwargs): from django.apps import apps register_to = kwargs.get("register_to") for app_config in apps.get_app_configs(): for module_to_search in args: # Attempt to import the app's module. try: if register_to: before_import_registry = copy.copy(register_to._registry) import_module("%s.%s" % (app_config.name, module_to_search)) except Exception: # Reset the registry to the state before the last import # as this import will have to reoccur on the next request and # this could raise NotRegistered and AlreadyRegistered # exceptions (see #8245). if register_to: register_to._registry = before_import_registry # Decide whether to bubble up this error. If the app just # doesn't have the module in question, we can ignore the error # attempting to import it, otherwise we want it to bubble up. if module_has_submodule(app_config.module, module_to_search): raise
Auto-discover INSTALLED_APPS modules and fail silently when not present. This forces an import on them to register any admin bits they may want. You may provide a register_to keyword parameter as a way to access a registry. This register_to object must have a _registry instance variable to access it.
49
117
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def autodiscover_modules(*args, **kwargs): from django.apps import apps register_to = kwargs.get("register_to") for app_config in apps.get_app_configs(): for module_to_search in args: # Attempt to import the app's module. try: if register_to: before_import_registry = copy.copy(register_to._registry) import_module("%s.%s" % (app_config.name, module_to_search)) except Exception: # Reset the registry to the state before the last import # as this import will have to reoccur on the next request and # this could raise NotRegistered and AlreadyRegistered # exceptions (see #8245). if register_to: register_to._registry = before_import_registry # Decide whether to bubble up this error. If the app just # doesn't have the module in question, we can ignore the error # attempting to import it, otherwise we want it to bubble up. if module_has_submodule(app_config.module, module_to_search): raise ``` ###Assistant : Auto-discover INSTALLED_APPS modules and fail silently when not present. This forces an import on them to register any admin bits they may want. You may provide a register_to keyword parameter as a way to access a registry. This register_to object must have a _registry instance variable to access it.
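The canonical call site is Django's own admin autodiscovery; a short sketch of that pattern:

```python
# How django.contrib.admin discovers per-app admin modules (a sketch):
# import "admin" from every installed app, restoring admin.site's registry
# if an import fails partway through.
from django.contrib import admin
from django.utils.module_loading import autodiscover_modules

autodiscover_modules("admin", register_to=admin.site)
```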
1,360
def ensure_future(coro_or_future, *, loop=None): return _ensure_future(coro_or_future, loop=loop)
Wrap a coroutine or an awaitable in a future. If the argument is a Future, it is returned directly.
19
7
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def ensure_future(coro_or_future, *, loop=None): return _ensure_future(coro_or_future, loop=loop) ``` ###Assistant : Wrap a coroutine or an awaitable in a future. If the argument is a Future, it is returned directly.
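A short usage sketch: wrap a coroutine into a Task from within a running event loop.

```python
import asyncio

async def work():
    return 42

async def main():
    fut = asyncio.ensure_future(work())  # schedules work() as a Task
    print(await fut)                     # 42

asyncio.run(main())
```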
1,361
def getpalette(self, rawmode="RGB"): self.load() try: mode = self.im.getpalettemode() except ValueError: return None # no palette if rawmode is None: rawmode = mode return list(self.im.getpalette(mode, rawmode))
Returns the image palette as a list. :param rawmode: The mode in which to return the palette. ``None`` will return the palette in its current mode. :returns: A list of color values [r, g, b, ...], or None if the image has no palette.
44
25
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def getpalette(self, rawmode="RGB"): self.load() try: mode = self.im.getpalettemode() except ValueError: return None # no palette if rawmode is None: rawmode = mode return list(self.im.getpalette(mode, rawmode)) ``` ###Assistant : Returns the image palette as a list. :param rawmode: The mode in which to return the palette. ``None`` will return the palette in its current mode. :returns: A list of color values [r, g, b, ...], or None if the image has no palette.
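A usage sketch; the file name is hypothetical, and only palette-mode ("P") images return a palette:

```python
from PIL import Image

im = Image.open("sprite.png").convert("P")   # hypothetical file
palette = im.getpalette("RGB")               # flat [r, g, b, r, g, b, ...]
first_color = tuple(palette[:3]) if palette else None
```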
1,362
def last_executor(self): if len(self.proto_wo_data.routes) > 0: return self.proto_wo_data.routes[-1].executor
Returns the name of the last Executor that has processed this Request :return: the name of the last Executor that processed this Request
23
8
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def last_executor(self): if len(self.proto_wo_data.routes) > 0: return self.proto_wo_data.routes[-1].executor ``` ###Assistant : Returns the name of the last Executor that has processed this Request :return: the name of the last Executor that processed this Request
1,363
def custom_generator_multi_io_temporal(self, sample_weights=None): batch_size = 3 num_samples = 3 iteration = 0 while True: batch_index = iteration * batch_size % num_samples iteration += 1 start = batch_index end = start + batch_size x = [self.x[start:end], self.x[start:end]] y = [self.y1[start:end], self.y2[start:end]] if sample_weights: sw = tf.nest.map_structure( lambda w: w[start:end], sample_weights ) else: sw = None yield x, y, sw
Generator for getting data for temporal multi io model. Args: sample_weights: List of sample_weights. Yields: Tuple of inputs, label, sample weights data.
22
58
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def custom_generator_multi_io_temporal(self, sample_weights=None): batch_size = 3 num_samples = 3 iteration = 0 while True: batch_index = iteration * batch_size % num_samples iteration += 1 start = batch_index end = start + batch_size x = [self.x[start:end], self.x[start:end]] y = [self.y1[start:end], self.y2[start:end]] if sample_weights: sw = tf.nest.map_structure( lambda w: w[start:end], sample_weights ) else: sw = None yield x, y, sw ``` ###Assistant : Generator for getting data for temporal multi io model. Args: sample_weights: List of sample_weights. Yields: Tuple of inputs, label, sample weights data.
1,364
def _just_docs(self): try: for child in self.ast.body: if not isinstance(child, ast.Assign): # allow string constant expressions (these are docstrings) if isinstance(child, ast.Expr) and isinstance(child.value, ast.Constant) and isinstance(child.value.value, str): continue # allowed from __future__ imports if isinstance(child, ast.ImportFrom) and child.module == '__future__': for future_import in child.names: if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS: break else: continue return False return True except AttributeError: return False
Module can contain just docs and from __future__ boilerplate
9
61
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _just_docs(self): try: for child in self.ast.body: if not isinstance(child, ast.Assign): # allow string constant expressions (these are docstrings) if isinstance(child, ast.Expr) and isinstance(child.value, ast.Constant) and isinstance(child.value.value, str): continue # allowed from __future__ imports if isinstance(child, ast.ImportFrom) and child.module == '__future__': for future_import in child.names: if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS: break else: continue return False return True except AttributeError: return False ``` ###Assistant : Module can contain just docs and from __future__ boilerplate
1,365
def response_validator(self) -> RequestValidator: self.check_reload() assert self._response_validator is not None return self._response_validator
Reload the OpenAPI file if it has been modified after the last time it was read, and then return the openapi_core validator object. Similar to preceding functions. Used for proper access to OpenAPI objects.
34
12
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def response_validator(self) -> RequestValidator: self.check_reload() assert self._response_validator is not None return self._response_validator ``` ###Assistant : Reload the OpenAPI file if it has been modified after the last time it was read, and then return the openapi_core validator object. Similar to preceding functions. Used for proper access to OpenAPI objects.
1,366
def roots(p, *, strip_zeros=True): # ported from https://github.com/numpy/numpy/blob/v1.17.0/numpy/lib/polynomial.py#L168-L251 p = atleast_1d(p) if p.ndim != 1: raise ValueError("Input must be a rank-1 array.") # strip_zeros=False is unsafe because leading zeros aren't removed if not strip_zeros: if p.size > 1: return _roots_no_zeros(p) else: return array([]) if all(p == 0): return array([]) # factor out trivial roots start, end = _nonzero_range(p) # number of trailing zeros = number of roots at 0 trailing_zeros = p.size - end # strip leading and trailing zeros p = p[start:end] if p.size < 2: return zeros(trailing_zeros, p.dtype) else: roots = _roots_no_zeros(p) # combine roots and zero roots roots = hstack((roots, zeros(trailing_zeros, p.dtype))) return roots _POLYFIT_DOC = @_wraps(np.polyfit, lax_description=_POLYFIT_DOC) @partial(jit, static_argnames=('deg', 'rcond', 'full', 'cov'))
\ Unlike NumPy's implementation of polyfit, :py:func:`jax.numpy.polyfit` will not warn on rank reduction, which indicates an ill conditioned matrix Also, it works best on rcond <= 10e-3 values.
28
116
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def roots(p, *, strip_zeros=True): # ported from https://github.com/numpy/numpy/blob/v1.17.0/numpy/lib/polynomial.py#L168-L251 p = atleast_1d(p) if p.ndim != 1: raise ValueError("Input must be a rank-1 array.") # strip_zeros=False is unsafe because leading zeros aren't removed if not strip_zeros: if p.size > 1: return _roots_no_zeros(p) else: return array([]) if all(p == 0): return array([]) # factor out trivial roots start, end = _nonzero_range(p) # number of trailing zeros = number of roots at 0 trailing_zeros = p.size - end # strip leading and trailing zeros p = p[start:end] if p.size < 2: return zeros(trailing_zeros, p.dtype) else: roots = _roots_no_zeros(p) # combine roots and zero roots roots = hstack((roots, zeros(trailing_zeros, p.dtype))) return roots _POLYFIT_DOC = @_wraps(np.polyfit, lax_description=_POLYFIT_DOC) @partial(jit, static_argnames=('deg', 'rcond', 'full', 'cov')) ``` ###Assistant : \ Unlike NumPy's implementation of polyfit, :py:func:`jax.numpy.polyfit` will not warn on rank reduction, which indicates an ill conditioned matrix Also, it works best on rcond <= 10e-3 values.
1,367
def _storage_path(self, local_path): rel_local_path = os.path.relpath(local_path, self.logdir) return os.path.join(self.remote_checkpoint_dir, rel_local_path)
Converts a `local_path` to be based off of `self.remote_checkpoint_dir`.
9
10
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _storage_path(self, local_path): rel_local_path = os.path.relpath(local_path, self.logdir) return os.path.join(self.remote_checkpoint_dir, rel_local_path) ``` ###Assistant : Converts a `local_path` to be based off of `self.remote_checkpoint_dir`.
1,368
def get_time_since_last_update(self) -> float: return time.time() - self.last_update_time
How much time has passed since the last call to update.
11
8
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_time_since_last_update(self) -> float: return time.time() - self.last_update_time ``` ###Assistant : How much time has passed since the last call to update.
1,369
def train_fixbn(self, mode=True, freeze_bn=True, freeze_bn_affine=False): r super(DeepLabv3_plus, self).train(mode) if freeze_bn: print("Freezing Mean/Var of BatchNorm2D.") if freeze_bn_affine: print("Freezing Weight/Bias of BatchNorm2D.") if freeze_bn: for m in self.xception_features.modules(): if isinstance(m, nn.BatchNorm2d): m.eval() if freeze_bn_affine: m.weight.requires_grad = False m.bias.requires_grad = False # for m in self.aspp1.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.aspp2.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.aspp3.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.aspp4.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.global_avg_pool.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.concat_projection_bn1.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.feature_projection_bn1.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False
Sets the module in training mode. This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`, etc. Returns: Module: self
38
192
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def train_fixbn(self, mode=True, freeze_bn=True, freeze_bn_affine=False): r super(DeepLabv3_plus, self).train(mode) if freeze_bn: print("Freezing Mean/Var of BatchNorm2D.") if freeze_bn_affine: print("Freezing Weight/Bias of BatchNorm2D.") if freeze_bn: for m in self.xception_features.modules(): if isinstance(m, nn.BatchNorm2d): m.eval() if freeze_bn_affine: m.weight.requires_grad = False m.bias.requires_grad = False # for m in self.aspp1.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.aspp2.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.aspp3.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.aspp4.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.global_avg_pool.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.concat_projection_bn1.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False # for m in self.feature_projection_bn1.modules(): # if isinstance(m, nn.BatchNorm2d): # m.eval() # if freeze_bn_affine: # m.weight.requires_grad = False # m.bias.requires_grad = False ``` ###Assistant : Sets the module in training mode. This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`, etc. Returns: Module: self
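The commented-out blocks in the snippet all repeat one pattern; a generic sketch of that BatchNorm-freezing idiom, applicable to any torch module:

```python
import torch.nn as nn

def freeze_bn(model, freeze_affine=False):
    # Put every BatchNorm2d in eval mode so running mean/var stop updating;
    # optionally also freeze the affine weight/bias parameters.
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()
            if freeze_affine:
                m.weight.requires_grad = False
                m.bias.requires_grad = False
```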
1,370
def assert_params_all_zeros(module) -> bool: weight_data = module.weight.data is_weight_zero = weight_data.allclose( weight_data.new_zeros(weight_data.size())) if hasattr(module, 'bias') and module.bias is not None: bias_data = module.bias.data is_bias_zero = bias_data.allclose( bias_data.new_zeros(bias_data.size())) else: is_bias_zero = True return is_weight_zero and is_bias_zero
Check if the parameters of the module are all zeros. Args: module (nn.Module): The module to be checked. Returns: bool: Whether the parameters of the module are all zeros.
29
34
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def assert_params_all_zeros(module) -> bool: weight_data = module.weight.data is_weight_zero = weight_data.allclose( weight_data.new_zeros(weight_data.size())) if hasattr(module, 'bias') and module.bias is not None: bias_data = module.bias.data is_bias_zero = bias_data.allclose( bias_data.new_zeros(bias_data.size())) else: is_bias_zero = True return is_weight_zero and is_bias_zero ``` ###Assistant : Check if the parameters of the module are all zeros. Args: module (nn.Module): The module to be checked. Returns: bool: Whether the parameters of the module are all zeros.
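A quick usage sketch, assuming the `assert_params_all_zeros` definition above is in scope together with `torch`:

```Python
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)
print(assert_params_all_zeros(conv))  # False: default init is non-zero

nn.init.zeros_(conv.weight)
nn.init.zeros_(conv.bias)
print(assert_params_all_zeros(conv))  # True: both weight and bias are zero
```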
1,371
def cleanup_state(self): # type: () -> None for f in self.cleanup_functions: if not callable(f): continue try: if not f(self.socket, self.configuration): log_automotive.info( "Cleanup function %s failed", repr(f)) except (OSError, ValueError, Scapy_Exception) as e: log_automotive.critical("Exception during cleanup: %s", e) self.cleanup_functions = list()
Executes all collected cleanup functions from a traversed path :return: None
11
40
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def cleanup_state(self): # type: () -> None for f in self.cleanup_functions: if not callable(f): continue try: if not f(self.socket, self.configuration): log_automotive.info( "Cleanup function %s failed", repr(f)) except (OSError, ValueError, Scapy_Exception) as e: log_automotive.critical("Exception during cleanup: %s", e) self.cleanup_functions = list() ``` ###Assistant : Executes all collected cleanup functions from a traversed path :return: None
1,372
def test_model_with_two_tabbed_panels_only(self): Publisher.settings_panels = [FieldPanel("name")] Publisher.promote_panels = [FieldPanel("headquartered_in")] warning_1 = checks.Warning( "Publisher.promote_panels will have no effect on modeladmin editing", hint=, obj=Publisher, id="wagtailadmin.W002", ) warning_2 = checks.Warning( "Publisher.settings_panels will have no effect on modeladmin editing", hint=, obj=Publisher, id="wagtailadmin.W002", ) checks_results = self.get_checks_result() self.assertIn(warning_1, checks_results) self.assertIn(warning_2, checks_results) # clean up for future checks delattr(Publisher, "settings_panels") delattr(Publisher, "promote_panels")
Ensure that Publisher uses `panels` instead of `promote_panels`\ or set up an `edit_handler` if you want a tabbed editing interface. There are no default tabs on non-Page models so there will be no\ Promote tab for the promote_panels to render in.Ensure that Publisher uses `panels` instead of `settings_panels`\ or set up an `edit_handler` if you want a tabbed editing interface. There are no default tabs on non-Page models so there will be no\ Settings tab for the settings_panels to render in.
81
55
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_model_with_two_tabbed_panels_only(self): Publisher.settings_panels = [FieldPanel("name")] Publisher.promote_panels = [FieldPanel("headquartered_in")] warning_1 = checks.Warning( "Publisher.promote_panels will have no effect on modeladmin editing", hint=, obj=Publisher, id="wagtailadmin.W002", ) warning_2 = checks.Warning( "Publisher.settings_panels will have no effect on modeladmin editing", hint=, obj=Publisher, id="wagtailadmin.W002", ) checks_results = self.get_checks_result() self.assertIn(warning_1, checks_results) self.assertIn(warning_2, checks_results) # clean up for future checks delattr(Publisher, "settings_panels") delattr(Publisher, "promote_panels") ``` ###Assistant : Ensure that Publisher uses `panels` instead of `promote_panels`\ or set up an `edit_handler` if you want a tabbed editing interface. There are no default tabs on non-Page models so there will be no\ Promote tab for the promote_panels to render in.Ensure that Publisher uses `panels` instead of `settings_panels`\ or set up an `edit_handler` if you want a tabbed editing interface. There are no default tabs on non-Page models so there will be no\ Settings tab for the settings_panels to render in.
1,373
def get_builtin_layer(class_name): if not hasattr(LOCAL, "ALL_OBJECTS"): populate_deserializable_objects() return LOCAL.ALL_OBJECTS.get(class_name)
Returns class if `class_name` is registered, else returns None.
9
9
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_builtin_layer(class_name): if not hasattr(LOCAL, "ALL_OBJECTS"): populate_deserializable_objects() return LOCAL.ALL_OBJECTS.get(class_name) ``` ###Assistant : Returns class if `class_name` is registered, else returns None.
1,374
def abelian_invariants(self): if self.is_trivial: return [] gns = self.generators inv = [] G = self H = G.derived_subgroup() Hgens = H.generators for p in primefactors(G.order()): ranks = [] while True: pows = [] for g in gns: elm = g**p if not H.contains(elm): pows.append(elm) K = PermutationGroup(Hgens + pows) if pows else H r = G.order()//K.order() G = K gns = pows if r == 1: break ranks.append(multiplicity(p, r)) if ranks: pows = [1]*ranks[0] for i in ranks: for j in range(0, i): pows[j] = pows[j]*p inv.extend(pows) inv.sort() return inv
Returns the abelian invariants for the given group. Let ``G`` be a nontrivial finite abelian group. Then G is isomorphic to the direct product of finitely many nontrivial cyclic groups of prime-power order. Explanation =========== The prime-powers that occur as the orders of the factors are uniquely determined by G. More precisely, the primes that occur in the orders of the factors in any such decomposition of ``G`` are exactly the primes that divide ``|G|`` and for any such prime ``p``, if the orders of the factors that are p-groups in one such decomposition of ``G`` are ``p^{t_1} >= p^{t_2} >= ... p^{t_r}``, then the orders of the factors that are p-groups in any such decomposition of ``G`` are ``p^{t_1} >= p^{t_2} >= ... p^{t_r}``. The uniquely determined integers ``p^{t_1} >= p^{t_2} >= ... p^{t_r}``, taken for all primes that divide ``|G|`` are called the invariants of the nontrivial group ``G`` as suggested in ([14], p. 542). Notes ===== We adopt the convention that the invariants of a trivial group are []. Examples ======== >>> from sympy.combinatorics import Permutation, PermutationGroup >>> a = Permutation([0, 2, 1]) >>> b = Permutation([1, 0, 2]) >>> G = PermutationGroup([a, b]) >>> G.abelian_invariants() [2] >>> from sympy.combinatorics import CyclicGroup >>> G = CyclicGroup(7) >>> G.abelian_invariants() [7]
212
90
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def abelian_invariants(self): if self.is_trivial: return [] gns = self.generators inv = [] G = self H = G.derived_subgroup() Hgens = H.generators for p in primefactors(G.order()): ranks = [] while True: pows = [] for g in gns: elm = g**p if not H.contains(elm): pows.append(elm) K = PermutationGroup(Hgens + pows) if pows else H r = G.order()//K.order() G = K gns = pows if r == 1: break ranks.append(multiplicity(p, r)) if ranks: pows = [1]*ranks[0] for i in ranks: for j in range(0, i): pows[j] = pows[j]*p inv.extend(pows) inv.sort() return inv ``` ###Assistant : Returns the abelian invariants for the given group. Let ``G`` be a nontrivial finite abelian group. Then G is isomorphic to the direct product of finitely many nontrivial cyclic groups of prime-power order. Explanation =========== The prime-powers that occur as the orders of the factors are uniquely determined by G. More precisely, the primes that occur in the orders of the factors in any such decomposition of ``G`` are exactly the primes that divide ``|G|`` and for any such prime ``p``, if the orders of the factors that are p-groups in one such decomposition of ``G`` are ``p^{t_1} >= p^{t_2} >= ... p^{t_r}``, then the orders of the factors that are p-groups in any such decomposition of ``G`` are ``p^{t_1} >= p^{t_2} >= ... p^{t_r}``. The uniquely determined integers ``p^{t_1} >= p^{t_2} >= ... p^{t_r}``, taken for all primes that divide ``|G|`` are called the invariants of the nontrivial group ``G`` as suggested in ([14], p. 542). Notes ===== We adopt the convention that the invariants of a trivial group are []. Examples ======== >>> from sympy.combinatorics import Permutation, PermutationGroup >>> a = Permutation([0, 2, 1]) >>> b = Permutation([1, 0, 2]) >>> G = PermutationGroup([a, b]) >>> G.abelian_invariants() [2] >>> from sympy.combinatorics import CyclicGroup >>> G = CyclicGroup(7) >>> G.abelian_invariants() [7]
1,375
def clone_graph_nodes(inputs, outputs): nodes_to_clone = find_nodes_by_inputs_and_outputs(inputs, outputs) cloned_inputs = [] cloned_outputs = [] # We not only need to create copies of Nodes (mimic the calls), also need to # clone keras_tensors to avoid the override of _keras_history attached on the # keras_tensor. The following dict is used to track any keras tensor we cloned # The key is the string ID of the original keras tensor, and value is the # cloned keras_tensor instance. kt_id_mapping = {} for kt_input in tf.nest.flatten(inputs): if kt_input.node.is_input: # For any existing keras_tensor from tf.keras.Input, we leave them as is. cloned_inputs.append(kt_input) kt_id_mapping[id(kt_input)] = kt_input else: # We need to create a new tf.keras.Input for any intermediate keras_tensor cpy = _clone_keras_tensor(kt_input) cloned_input = input_layer_module.Input(tensor=cpy) cloned_inputs.append(cloned_input) kt_id_mapping[id(kt_input)] = cloned_input cloned_inputs = tf.nest.pack_sequence_as(inputs, cloned_inputs) for kt_output in tf.nest.flatten(outputs): cpy = _clone_keras_tensor(kt_output) # We reuse the _keras_history here, which contains the old information. It # is used in the Node constructor to check if the tensor "is_keras_tensor()" # The history will be override by the Node constructor anyway for the # corresponding layer output anyway. cpy._keras_history = ( kt_output._keras_history ) # pylint: disable=protected-access cloned_outputs.append(cpy) kt_id_mapping[id(kt_output)] = cpy cloned_outputs = tf.nest.pack_sequence_as(outputs, cloned_outputs) for node in nodes_to_clone: # Clone any keras_tensors to avoid override of _keras_history # Or reuse an existing keras_tensor if it has already been cloned. output_copy = clone_keras_tensors(node.output_tensors, kt_id_mapping) call_args_copy = clone_keras_tensors(node.call_args, kt_id_mapping) call_kwargs_copy = clone_keras_tensors(node.call_kwargs, kt_id_mapping) # Creating new nodes based on the existing node information. # Node wires itself to inbound and outbound layers. # The Node constructor actually updates this layer's self._inbound_nodes, # sets _keras_history on the outputs, and adds itself to the # `_outbound_nodes` of the layers that produced the inputs to this # layer call. node_module.Node( node.layer, call_args=call_args_copy, call_kwargs=call_kwargs_copy, outputs=output_copy, ) return cloned_inputs, cloned_outputs
Clone the `Node`s between the input and output tensors. This function is used to create a new functional model from any intermediate Keras tensors. The cloned nodes mimic the behavior of reconstructing the functional graph network by re-executing all the __call__ methods. The cloned nodes will be appended to the layers. Note that a new tf.keras.Input will be created for any item in `inputs` that is not already a model input. Args: inputs: A nested structure of keras_tensors. outputs: A nested structure of keras_tensors. Returns: A pair of inputs and outputs, with cloned keras_tensors. They can be used to create a new functional model.
100
292
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def clone_graph_nodes(inputs, outputs): nodes_to_clone = find_nodes_by_inputs_and_outputs(inputs, outputs) cloned_inputs = [] cloned_outputs = [] # We not only need to create copies of Nodes (mimic the calls), also need to # clone keras_tensors to avoid the override of _keras_history attached on the # keras_tensor. The following dict is used to track any keras tensor we cloned # The key is the string ID of the original keras tensor, and value is the # cloned keras_tensor instance. kt_id_mapping = {} for kt_input in tf.nest.flatten(inputs): if kt_input.node.is_input: # For any existing keras_tensor from tf.keras.Input, we leave them as is. cloned_inputs.append(kt_input) kt_id_mapping[id(kt_input)] = kt_input else: # We need to create a new tf.keras.Input for any intermediate keras_tensor cpy = _clone_keras_tensor(kt_input) cloned_input = input_layer_module.Input(tensor=cpy) cloned_inputs.append(cloned_input) kt_id_mapping[id(kt_input)] = cloned_input cloned_inputs = tf.nest.pack_sequence_as(inputs, cloned_inputs) for kt_output in tf.nest.flatten(outputs): cpy = _clone_keras_tensor(kt_output) # We reuse the _keras_history here, which contains the old information. It # is used in the Node constructor to check if the tensor "is_keras_tensor()" # The history will be override by the Node constructor anyway for the # corresponding layer output anyway. cpy._keras_history = ( kt_output._keras_history ) # pylint: disable=protected-access cloned_outputs.append(cpy) kt_id_mapping[id(kt_output)] = cpy cloned_outputs = tf.nest.pack_sequence_as(outputs, cloned_outputs) for node in nodes_to_clone: # Clone any keras_tensors to avoid override of _keras_history # Or reuse an existing keras_tensor if it has already been cloned. output_copy = clone_keras_tensors(node.output_tensors, kt_id_mapping) call_args_copy = clone_keras_tensors(node.call_args, kt_id_mapping) call_kwargs_copy = clone_keras_tensors(node.call_kwargs, kt_id_mapping) # Creating new nodes based on the existing node information. # Node wires itself to inbound and outbound layers. # The Node constructor actually updates this layer's self._inbound_nodes, # sets _keras_history on the outputs, and adds itself to the # `_outbound_nodes` of the layers that produced the inputs to this # layer call. node_module.Node( node.layer, call_args=call_args_copy, call_kwargs=call_kwargs_copy, outputs=output_copy, ) return cloned_inputs, cloned_outputs ``` ###Assistant : Clone the `Node` between the inputs and output tensors. This function is used to create a new functional model from any intermediate keras tensors. The clone of the nodes mimic the behavior of reconstructing the functional graph network by re-executing all the __call__ methods. The cloned nodes will be appended to the layers. Note that a new tf.keras.Inputs will be created for any items in the `inputs` Args: inputs: A nested structure of keras_tensors. outputs: A nested structure of keras_tensors. Returns: A pair of inputs and outputs, with cloned keras_tensors. They can be used to create a new functional model.
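The function above is internal Keras plumbing; what it enables, in recent TF releases (roughly 2.7+, if memory serves), is building a new functional model directly from intermediate KerasTensors. A sketch of that public-facing behavior, with illustrative layer sizes:

```Python
import tensorflow as tf

inputs = tf.keras.Input(shape=(8,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(1)(hidden)
full = tf.keras.Model(inputs, outputs)

# Slicing the graph at an intermediate tensor: Keras clones the nodes
# between `hidden` and `outputs` and creates a fresh Input for `hidden`.
head = tf.keras.Model(hidden, outputs)
print(head(tf.zeros((2, 16))).shape)  # (2, 1)
```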
1,376
def parallel_axis(self, point, frame=None): # circular import issue from sympy.physics.mechanics.functions import inertia_of_point_mass if frame is None: frame = self.frame return self.central_inertia.express(frame) + inertia_of_point_mass( self.mass, self.masscenter.pos_from(point), frame)
Returns the inertia dyadic of the body with respect to another point. Parameters ========== point : sympy.physics.vector.Point The point to express the inertia dyadic about. frame : sympy.physics.vector.ReferenceFrame The reference frame used to construct the dyadic. Returns ======= inertia : sympy.physics.vector.Dyadic The inertia dyadic of the rigid body expressed about the provided point.
53
26
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def parallel_axis(self, point, frame=None): # circular import issue from sympy.physics.mechanics.functions import inertia_of_point_mass if frame is None: frame = self.frame return self.central_inertia.express(frame) + inertia_of_point_mass( self.mass, self.masscenter.pos_from(point), frame) ``` ###Assistant : Returns the inertia dyadic of the body with respect to another point. Parameters ========== point : sympy.physics.vector.Point The point to express the inertia dyadic about. frame : sympy.physics.vector.ReferenceFrame The reference frame used to construct the dyadic. Returns ======= inertia : sympy.physics.vector.Dyadic The inertia dyadic of the rigid body expressed about the provided point.
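A usage sketch of the parallel-axis shift above with SymPy's mechanics API (the uniform-cube inertia is just an illustrative choice):

```Python
from sympy import symbols
from sympy.physics.mechanics import RigidBody, ReferenceFrame, Point, inertia

m, a = symbols('m a')
N = ReferenceFrame('N')
C = Point('C')  # mass center
I_central = inertia(N, m*a**2/6, m*a**2/6, m*a**2/6)  # uniform cube, side a
body = RigidBody('B', C, N, m, (I_central, C))

P = C.locatenew('P', a*N.x)   # point offset by a along N.x
print(body.parallel_axis(P))  # central dyadic plus the point-mass term m*((r.r)*U - outer(r, r))
```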
1,377
def test_class_weight_does_not_contains_more_classses(): tree = DecisionTreeClassifier(class_weight={0: 1, 1: 10, 2: 20}) # Does not raise tree.fit([[0, 0, 1], [1, 0, 1], [1, 2, 0]], [0, 0, 1])
Check that class_weight can contain more labels than in y. Non-regression test for #22413
14
26
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_class_weight_does_not_contains_more_classses(): tree = DecisionTreeClassifier(class_weight={0: 1, 1: 10, 2: 20}) # Does not raise tree.fit([[0, 0, 1], [1, 0, 1], [1, 2, 0]], [0, 0, 1]) ``` ###Assistant : Check that class_weight can contain more labels than in y. Non-regression test for #22413
1,378
def parse(self, state): # type: (ParserState) -> str if state.mode == ParserMode.PARSE: path = AnyParser().parse(state) if not os.path.isfile(path): raise ParserError(f'Not a file: {path}') else: path = '' with state.delimit(PATH_DELIMITER, required=False) as boundary: # type: ParserBoundary while boundary.ready: directory = path or '.' try: with os.scandir(directory) as scan: # type: t.Iterator[os.DirEntry] choices = [f'{item.name}{PATH_DELIMITER}' if item.is_dir() else item.name for item in scan] except OSError: choices = [] if not path: choices.append(PATH_DELIMITER) # allow absolute paths choices.append('../') # suggest relative paths part = RelativePathNameParser(choices).parse(state) path += f'{part}{boundary.match or ""}' return path
Parse the input from the given state and return the result.
11
89
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def parse(self, state): # type: (ParserState) -> str if state.mode == ParserMode.PARSE: path = AnyParser().parse(state) if not os.path.isfile(path): raise ParserError(f'Not a file: {path}') else: path = '' with state.delimit(PATH_DELIMITER, required=False) as boundary: # type: ParserBoundary while boundary.ready: directory = path or '.' try: with os.scandir(directory) as scan: # type: t.Iterator[os.DirEntry] choices = [f'{item.name}{PATH_DELIMITER}' if item.is_dir() else item.name for item in scan] except OSError: choices = [] if not path: choices.append(PATH_DELIMITER) # allow absolute paths choices.append('../') # suggest relative paths part = RelativePathNameParser(choices).parse(state) path += f'{part}{boundary.match or ""}' return path ``` ###Assistant : Parse the input from the given state and return the result.
1,379
def layout(self) -> Layout: # self.log("I", self._inline_styles) # self.log("C", self._css_styles) # self.log("S", self.styles) assert self.styles.layout return self.styles.layout # @layout.setter # def layout(self, new_value: Layout) -> None: # # self.styles.layout = new_value
Convenience property for accessing ``self.styles.layout``. Returns: The Layout associated with this view. Convenience property setter for setting ``view.styles.layout``. # Args: # new_value: # Returns: # None #
27
31
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def layout(self) -> Layout: # self.log("I", self._inline_styles) # self.log("C", self._css_styles) # self.log("S", self.styles) assert self.styles.layout return self.styles.layout # @layout.setter # def layout(self, new_value: Layout) -> None: # # self.styles.layout = new_value ``` ###Assistant : Convenience property for accessing ``self.styles.layout``. Returns: The Layout associated with this view Convenience property setter for setting ``view.styles.layout``. # Args: # new_value: # Returns: # None #
1,380
def timeout_message(self, message): future = self.send_future_map.pop(message, None) # In a race condition the message might have been sent by the time # we're timing it out. Make sure the future is not None if future is not None: del self.send_timeout_map[message] if future.attempts < future.tries: future.attempts += 1 log.info( "SaltReqTimeoutError, retrying. (%s/%s)", future.attempts, future.tries, ) self.send( message, timeout=future.timeout, tries=future.tries, future=future, ) else: future.set_exception(SaltReqTimeoutError("Message timed out"))
Handle a message timeout by removing it from the sending queue and informing the caller :raises: SaltReqTimeoutError
17
64
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def timeout_message(self, message): future = self.send_future_map.pop(message, None) # In a race condition the message might have been sent by the time # we're timing it out. Make sure the future is not None if future is not None: del self.send_timeout_map[message] if future.attempts < future.tries: future.attempts += 1 log.info( "SaltReqTimeoutError, retrying. (%s/%s)", future.attempts, future.tries, ) self.send( message, timeout=future.timeout, tries=future.tries, future=future, ) else: future.set_exception(SaltReqTimeoutError("Message timed out")) ``` ###Assistant : Handle a message timeout by removing it from the sending queue and informing the caller :raises: SaltReqTimeoutError
1,381
def test_not_all_records(self, requests_mock, authenticator, config, responses): expected_output = [ {"id": 1, "updated_at": "2018-01-02T00:00:00Z"}, {"id": 2, "updated_at": "2018-02-02T00:00:00Z"}, {"id": 2, "updated_at": "2018-02-02T00:00:00Z"}, # duplicate {"id": 3, "updated_at": "2018-03-02T00:00:00Z"}, {"id": 3, "updated_at": "2018-03-02T00:00:00Z"}, # duplicate {"id": 4, "updated_at": "2019-01-03T00:00:00Z"}, {"id": 4, "updated_at": "2019-01-03T00:00:00Z"}, # duplicate {"id": 5, "updated_at": "2019-02-03T00:00:00Z"}, {"id": 5, "updated_at": "2019-02-03T00:00:00Z"}, # duplicate {"id": 6, "updated_at": "2019-03-03T00:00:00Z"}, ] # INT value of page number where the switch state should be triggered. # in this test case values from: 1 - 4, assuming we want to switch state on this page. ticket_paginate_limit = 2 # This parameter mocks the "per_page" parameter in the API Call result_return_limit = 1 # Create test_stream instance. test_stream = Tickets(authenticator=authenticator, config=config) test_stream.ticket_paginate_limit = ticket_paginate_limit test_stream.result_return_limit = result_return_limit # Mocking Request for response in responses: requests_mock.register_uri( "GET", response["url"], json=response["json"], headers=response.get("headers", {}), ) records = list(test_stream.read_records(sync_mode=SyncMode.full_refresh)) # We're expecting 6 records to return from the tickets_stream assert records == expected_output
TEST 1 - not all records are retrieved During test1 the tickets_stream changes the state of parameters on page 2 by updating the params: `params["order_by"] = "updated_at"` `params["updated_since"] = last_record`, then continues to fetch records from the source using a new cycle, and so on. NOTE: After the switch of the state on ticket_paginate_limit = 2, as in this example, we will experience record duplication because of the last_record state: restarting at the point where we stopped duplicates part of the output. The solution for this is to add at least 1 second to the last_record state. The DBT normalization should handle this for the end user, so the duplication issue is not a blocker in such cases. The main principle here is: Airbyte is at-least-once delivery, but skipping records is data loss.
130
152
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_not_all_records(self, requests_mock, authenticator, config, responses): expected_output = [ {"id": 1, "updated_at": "2018-01-02T00:00:00Z"}, {"id": 2, "updated_at": "2018-02-02T00:00:00Z"}, {"id": 2, "updated_at": "2018-02-02T00:00:00Z"}, # duplicate {"id": 3, "updated_at": "2018-03-02T00:00:00Z"}, {"id": 3, "updated_at": "2018-03-02T00:00:00Z"}, # duplicate {"id": 4, "updated_at": "2019-01-03T00:00:00Z"}, {"id": 4, "updated_at": "2019-01-03T00:00:00Z"}, # duplicate {"id": 5, "updated_at": "2019-02-03T00:00:00Z"}, {"id": 5, "updated_at": "2019-02-03T00:00:00Z"}, # duplicate {"id": 6, "updated_at": "2019-03-03T00:00:00Z"}, ] # INT value of page number where the switch state should be triggered. # in this test case values from: 1 - 4, assuming we want to switch state on this page. ticket_paginate_limit = 2 # This parameter mocks the "per_page" parameter in the API Call result_return_limit = 1 # Create test_stream instance. test_stream = Tickets(authenticator=authenticator, config=config) test_stream.ticket_paginate_limit = ticket_paginate_limit test_stream.result_return_limit = result_return_limit # Mocking Request for response in responses: requests_mock.register_uri( "GET", response["url"], json=response["json"], headers=response.get("headers", {}), ) records = list(test_stream.read_records(sync_mode=SyncMode.full_refresh)) # We're expecting 6 records to return from the tickets_stream assert records == expected_output ``` ###Assistant : TEST 1 - not all records are retrieved During test1 the tickets_stream changes the state of parameters on page: 2, by updating the params: `params["order_by"] = "updated_at"` `params["updated_since"] = last_record` continues to fetch records from the source, using new cycle, and so on. NOTE: After switch of the state on ticket_paginate_limit = 2, is this example, we will experience the records duplication, because of the last_record state, starting at the point where we stoped causes the duplication of the output. The solution for this is to add at least 1 second to the last_record state. The DBT normalization should handle this for the end user, so the duplication issue is not a blocker in such cases. Main pricipal here is: airbyte is at-least-once delivery, but skipping records is data loss.
1,382
def run_from_argv(self, argv): self.test_runner = get_command_line_option(argv, "--testrunner") super().run_from_argv(argv)
Pre-parse the command line to extract the value of the --testrunner option. This allows a test runner to define additional command line arguments.
23
8
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def run_from_argv(self, argv): self.test_runner = get_command_line_option(argv, "--testrunner") super().run_from_argv(argv) ``` ###Assistant : Pre-parse the command line to extract the value of the --testrunner option. This allows a test runner to define additional command line arguments.
1,383
def test_new_processing_issue(self, mock_func): notification = NewProcessingIssuesActivityNotification( Activity( project=self.project, user=self.user, type=ActivityType.NEW_PROCESSING_ISSUES, data={ "issues": get_issues_data(), "reprocessing_active": True, }, ) ) with self.tasks(): notification.send() attachment, text = get_attachment() assert ( text == f"Processing issues on <http://testserver/settings/{self.organization.slug}/projects/{self.project.slug}/processing-issues/|{self.project.slug}>" ) assert ( attachment["text"] == f"Some events failed to process in your project {self.project.slug}" ) assert ( attachment["footer"] == f"{self.project.slug} | <http://testserver/settings/account/notifications/workflow/?referrer=new_processing_issues_activity-slack-user|Notification Settings>" )
Test that a Slack message is sent with the expected payload when an issue is held back in reprocessing
19
57
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_new_processing_issue(self, mock_func): notification = NewProcessingIssuesActivityNotification( Activity( project=self.project, user=self.user, type=ActivityType.NEW_PROCESSING_ISSUES, data={ "issues": get_issues_data(), "reprocessing_active": True, }, ) ) with self.tasks(): notification.send() attachment, text = get_attachment() assert ( text == f"Processing issues on <http://testserver/settings/{self.organization.slug}/projects/{self.project.slug}/processing-issues/|{self.project.slug}>" ) assert ( attachment["text"] == f"Some events failed to process in your project {self.project.slug}" ) assert ( attachment["footer"] == f"{self.project.slug} | <http://testserver/settings/account/notifications/workflow/?referrer=new_processing_issues_activity-slack-user|Notification Settings>" ) ``` ###Assistant : Test that a Slack message is sent with the expected payload when an issue is held back in reprocessing
1,384
def load_array(data_arrays, batch_size, is_train=True): dataset = data.TensorDataset(*data_arrays) return data.DataLoader(dataset, batch_size, shuffle=is_train)
Construct a PyTorch data iterator. Defined in :numref:`sec_linear_concise`
8
11
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def load_array(data_arrays, batch_size, is_train=True): dataset = data.TensorDataset(*data_arrays) return data.DataLoader(dataset, batch_size, shuffle=is_train) ``` ###Assistant : Construct a PyTorch data iterator. Defined in :numref:`sec_linear_concise`
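A usage sketch, assuming the `load_array` above with `data` bound to `torch.utils.data` (synthetic tensors for illustration):

```Python
import torch
from torch.utils import data  # `data` as referenced inside load_array

features = torch.randn(1000, 2)
labels = features.sum(dim=1, keepdim=True)
data_iter = load_array((features, labels), batch_size=32)
X, y = next(iter(data_iter))
print(X.shape, y.shape)  # torch.Size([32, 2]) torch.Size([32, 1])
```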
1,385
def is_permanent_redirect(self): return "location" in self.headers and self.status_code in ( codes.moved_permanently, codes.permanent_redirect, )
True if this Response is one of the permanent versions of redirect.
11
13
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def is_permanent_redirect(self): return "location" in self.headers and self.status_code in ( codes.moved_permanently, codes.permanent_redirect, ) ``` ###Assistant : True if this Response is one of the permanent versions of redirect.
1,386
def _estimate_step_duration(self, current, now): if current: # there are a few special scenarios here: # 1) somebody is calling the progress bar without ever supplying # step 1 # 2) somebody is calling the progress bar and supplies step one # multiple times, e.g. as part of a finalizing call # in these cases, we just fall back to the simple calculation if self._time_after_first_step is not None and current > 1: time_per_unit = (now - self._time_after_first_step) / ( current - 1 ) else: time_per_unit = (now - self._start) / current if current == 1: self._time_after_first_step = now return time_per_unit else: return 0
Estimate the duration of a single step. Given the step number `current` and the corresponding time `now` this function returns an estimate for how long a single step takes. If this is called before one step has been completed (i.e. `current == 0`) then zero is given as an estimate. The duration estimate ignores the duration of the (assumed to be non-representative) first step for estimates when more steps are available (i.e. `current>1`). Args: current: Index of current step. now: The current time. Returns: Estimate of the duration of a single step.
92
102
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _estimate_step_duration(self, current, now): if current: # there are a few special scenarios here: # 1) somebody is calling the progress bar without ever supplying # step 1 # 2) somebody is calling the progress bar and supplies step one # multiple times, e.g. as part of a finalizing call # in these cases, we just fall back to the simple calculation if self._time_after_first_step is not None and current > 1: time_per_unit = (now - self._time_after_first_step) / ( current - 1 ) else: time_per_unit = (now - self._start) / current if current == 1: self._time_after_first_step = now return time_per_unit else: return 0 ``` ###Assistant : Estimate the duration of a single step. Given the step number `current` and the corresponding time `now` this function returns an estimate for how long a single step takes. If this is called before one step has been completed (i.e. `current == 0`) then zero is given as an estimate. The duration estimate ignores the duration of the (assumed to be non-representative) first step for estimates when more steps are available (i.e. `current>1`). Args: current: Index of current step. now: The current time. Returns: Estimate of the duration of a single step.
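Concretely, with made-up timings: if the bar started at t=0, step 1 finished at t=1.5 (slow warm-up), and step 10 finishes at t=10.5, the estimate deliberately skips the first step:

```Python
start, after_first, now, current = 0.0, 1.5, 10.5, 10
naive = (now - start) / current                 # 1.05 s/step, skewed by warm-up
adjusted = (now - after_first) / (current - 1)  # 1.0 s/step, first step excluded
print(naive, adjusted)
```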
1,387
def _propagate_index_objs(self, axis=None): self._filter_empties() if axis is None or axis == 0: cum_row_lengths = np.cumsum([0] + self._row_lengths) if axis is None or axis == 1: cum_col_widths = np.cumsum([0] + self._column_widths) if axis is None:
Synchronize labels by applying the index object for specific `axis` to the `self._partitions` lazily. Adds `set_axis` function to call-queue of each partition from `self._partitions` to apply new axis. Parameters ---------- axis : int, default: None The axis to apply to. If it's None applies to both axes.
47
34
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _propagate_index_objs(self, axis=None): self._filter_empties() if axis is None or axis == 0: cum_row_lengths = np.cumsum([0] + self._row_lengths) if axis is None or axis == 1: cum_col_widths = np.cumsum([0] + self._column_widths) if axis is None: ``` ###Assistant : Synchronize labels by applying the index object for specific `axis` to the `self._partitions` lazily. Adds `set_axis` function to call-queue of each partition from `self._partitions` to apply new axis. Parameters ---------- axis : int, default: None The axis to apply to. If it's None applies to both axes.
1,388
def test_query_devices_remote_no_sync(self) -> None: remote_user_id = "@test:other" local_user_id = "@test:test" remote_master_key = "85T7JXPFBAySB/jwby4S3lBPTqY3+Zg53nYuGmu1ggY" remote_self_signing_key = "QeIiFEjluPBtI7WQdG365QKZcFs9kqmHir6RBD0//nQ" self.hs.get_federation_client().query_client_keys = mock.Mock( return_value=defer.succeed( { "device_keys": {remote_user_id: {}}, "master_keys": { remote_user_id: { "user_id": remote_user_id, "usage": ["master"], "keys": {"ed25519:" + remote_master_key: remote_master_key}, }, }, "self_signing_keys": { remote_user_id: { "user_id": remote_user_id, "usage": ["self_signing"], "keys": { "ed25519:" + remote_self_signing_key: remote_self_signing_key }, } }, } ) ) e2e_handler = self.hs.get_e2e_keys_handler() query_result = self.get_success( e2e_handler.query_devices( { "device_keys": {remote_user_id: []}, }, timeout=10, from_user_id=local_user_id, from_device_id="some_device_id", ) ) self.assertEqual(query_result["failures"], {}) self.assertEqual( query_result["master_keys"], { remote_user_id: { "user_id": remote_user_id, "usage": ["master"], "keys": {"ed25519:" + remote_master_key: remote_master_key}, }, }, ) self.assertEqual( query_result["self_signing_keys"], { remote_user_id: { "user_id": remote_user_id, "usage": ["self_signing"], "keys": { "ed25519:" + remote_self_signing_key: remote_self_signing_key }, } }, )
Tests that querying keys for a remote user that we don't share a room with returns the cross signing keys correctly.
21
114
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def test_query_devices_remote_no_sync(self) -> None: remote_user_id = "@test:other" local_user_id = "@test:test" remote_master_key = "85T7JXPFBAySB/jwby4S3lBPTqY3+Zg53nYuGmu1ggY" remote_self_signing_key = "QeIiFEjluPBtI7WQdG365QKZcFs9kqmHir6RBD0//nQ" self.hs.get_federation_client().query_client_keys = mock.Mock( return_value=defer.succeed( { "device_keys": {remote_user_id: {}}, "master_keys": { remote_user_id: { "user_id": remote_user_id, "usage": ["master"], "keys": {"ed25519:" + remote_master_key: remote_master_key}, }, }, "self_signing_keys": { remote_user_id: { "user_id": remote_user_id, "usage": ["self_signing"], "keys": { "ed25519:" + remote_self_signing_key: remote_self_signing_key }, } }, } ) ) e2e_handler = self.hs.get_e2e_keys_handler() query_result = self.get_success( e2e_handler.query_devices( { "device_keys": {remote_user_id: []}, }, timeout=10, from_user_id=local_user_id, from_device_id="some_device_id", ) ) self.assertEqual(query_result["failures"], {}) self.assertEqual( query_result["master_keys"], { remote_user_id: { "user_id": remote_user_id, "usage": ["master"], "keys": {"ed25519:" + remote_master_key: remote_master_key}, }, }, ) self.assertEqual( query_result["self_signing_keys"], { remote_user_id: { "user_id": remote_user_id, "usage": ["self_signing"], "keys": { "ed25519:" + remote_self_signing_key: remote_self_signing_key }, } }, ) ``` ###Assistant : Tests that querying keys for a remote user that we don't share a room with returns the cross signing keys correctly.
1,389
def _matches_get_other_nodes(dictionary, nodes, node_ind): ind_node = nodes[node_ind] return [ind for ind in dictionary if nodes[ind] == ind_node]
Find other wildcards that may have already been matched.
9
17
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _matches_get_other_nodes(dictionary, nodes, node_ind): ind_node = nodes[node_ind] return [ind for ind in dictionary if nodes[ind] == ind_node] ``` ###Assistant : Find other wildcards that may have already been matched.
1,390
def save_pretrained(self, save_directory): for attribute_name in self.attributes: attribute = getattr(self, attribute_name) # Include the processor class in the attribute config so this processor can then be reloaded with the # `AutoProcessor` API. if hasattr(attribute, "_set_processor_class"): attribute._set_processor_class(self.__class__.__name__) attribute.save_pretrained(save_directory)
Saves the attributes of this processor (feature extractor, tokenizer...) in the specified directory so that it can be reloaded using the [`~ProcessorMixin.from_pretrained`] method. <Tip> This class method is simply calling [`~feature_extraction_utils.FeatureExtractionMixin.save_pretrained`] and [`~tokenization_utils_base.PreTrainedTokenizer.save_pretrained`]. Please refer to the docstrings of the methods above for more information. </Tip> Args: save_directory (`str` or `os.PathLike`): Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist).
74
37
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def save_pretrained(self, save_directory): for attribute_name in self.attributes: attribute = getattr(self, attribute_name) # Include the processor class in the attribute config so this processor can then be reloaded with the # `AutoProcessor` API. if hasattr(attribute, "_set_processor_class"): attribute._set_processor_class(self.__class__.__name__) attribute.save_pretrained(save_directory) ``` ###Assistant : Saves the attributes of this processor (feature extractor, tokenizer...) in the specified directory so that it can be reloaded using the [`~ProcessorMixin.from_pretrained`] method. <Tip> This class method is simply calling [`~feature_extraction_utils.FeatureExtractionMixin.save_pretrained`] and [`~tokenization_utils_base.PreTrainedTokenizer.save_pretrained`]. Please refer to the docstrings of the methods above for more information. </Tip> Args: save_directory (`str` or `os.PathLike`): Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist).
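A round-trip sketch with one concrete processor class (the checkpoint name is a real public one, but any `ProcessorMixin` subclass behaves the same; the first call needs network access to download):

```Python
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
processor.save_pretrained("./my-processor")   # writes feature-extractor + tokenizer files
reloaded = Wav2Vec2Processor.from_pretrained("./my-processor")
```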
1,391
def _user_has_module_perms(user, app_label): for backend in auth.get_backends(): if not hasattr(backend, "has_module_perms"): continue try: if backend.has_module_perms(user, app_label): return True except PermissionDenied: return False return False
Backend can raise `PermissionDenied` to short-circuit permission checking.
8
24
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _user_has_module_perms(user, app_label): for backend in auth.get_backends(): if not hasattr(backend, "has_module_perms"): continue try: if backend.has_module_perms(user, app_label): return True except PermissionDenied: return False return False ``` ###Assistant : Backend can raise `PermissionDenied` to short-circuit permission checking.
1,392
def get_default_address(out, name): shipping_addresses = frappe.db.sql( , (name), as_dict=1, ) if shipping_addresses: for out.shipping_address in shipping_addresses: if out.shipping_address.is_shipping_address: return out.shipping_address out.shipping_address = shipping_addresses[0] return out.shipping_address @frappe.whitelist()
SELECT parent, (SELECT is_shipping_address FROM tabAddress a WHERE a.name=dl.parent) AS is_shipping_address FROM `tabDynamic Link` dl WHERE dl.link_doctype="Customer" AND dl.link_name=%s AND dl.parenttype = "Address"
23
26
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_default_address(out, name): shipping_addresses = frappe.db.sql( , (name), as_dict=1, ) if shipping_addresses: for out.shipping_address in shipping_addresses: if out.shipping_address.is_shipping_address: return out.shipping_address out.shipping_address = shipping_addresses[0] return out.shipping_address @frappe.whitelist() ``` ###Assistant : SELECT parent, (SELECT is_shipping_address FROM tabAddress a WHERE a.name=dl.parent) AS is_shipping_address FROM `tabDynamic Link` dl WHERE dl.link_doctype="Customer" AND dl.link_name=%s AND dl.parenttype = "Address"
1,393
def force_list(elements=None, to_tuple=False): ctor = list if to_tuple is True: ctor = tuple return ctor() if elements is None else ctor(elements) \ if type(elements) in [list, set, tuple] else ctor([elements])
Makes sure `elements` is returned as a list, whether `elements` is a single item, already a list, or a tuple. Args: elements (Optional[any]): The inputs as single item, list, or tuple to be converted into a list/tuple. If None, returns empty list/tuple. to_tuple (bool): Whether to use tuple (instead of list). Returns: Union[list,tuple]: All given elements in a list/tuple depending on `to_tuple`'s value. If elements is None, returns an empty list/tuple.
71
30
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def force_list(elements=None, to_tuple=False): ctor = list if to_tuple is True: ctor = tuple return ctor() if elements is None else ctor(elements) \ if type(elements) in [list, set, tuple] else ctor([elements]) ``` ###Assistant : Makes sure `elements` is returned as a list, whether `elements` is a single item, already a list, or a tuple. Args: elements (Optional[any]): The inputs as single item, list, or tuple to be converted into a list/tuple. If None, returns empty list/tuple. to_tuple (bool): Whether to use tuple (instead of list). Returns: Union[list,tuple]: All given elements in a list/tuple depending on `to_tuple`'s value. If elements is None, returns an empty list/tuple.
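The conditional expression above is dense; spelling out its behavior on a few inputs:

```Python
print(force_list(3))                      # [3]    - single item wrapped
print(force_list((1, 2)))                 # [1, 2] - tuple converted to list
print(force_list([1, 2], to_tuple=True))  # (1, 2) - list converted to tuple
print(force_list(None))                   # []     - None becomes empty list
```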
1,394
def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device): if order == 3: K = steps // 3 + 1 if steps % 3 == 0: orders = [3, ] * (K - 2) + [2, 1] elif steps % 3 == 1: orders = [3, ] * (K - 1) + [1] else: orders = [3, ] * (K - 1) + [2] elif order == 2: if steps % 2 == 0: K = steps // 2 orders = [2, ] * K else: K = steps // 2 + 1 orders = [2, ] * (K - 1) + [1] elif order == 1: K = 1 orders = [1, ] * steps else: raise ValueError("'order' must be '1' or '2' or '3'.") if skip_type == 'logSNR': # To reproduce the results in DPM-Solver paper timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device) else: timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[ torch.cumsum(torch.tensor([0, ] + orders)).to(device)] return timesteps_outer, orders
Get the order of each step for sampling by the singlestep DPM-Solver. We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast". Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is: - If order == 1: We take `steps` of DPM-Solver-1 (i.e. DDIM). - If order == 2: - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling. - If steps % 2 == 0, we use K steps of DPM-Solver-2. - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1. - If order == 3: - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1. - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1. - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2. ============================================ Args: order: A `int`. The max order for the solver (2 or 3). steps: A `int`. The total number of function evaluations (NFE). skip_type: A `str`. The type for the spacing of the time steps. We support three types: - 'logSNR': uniform logSNR for the time steps. - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) t_T: A `float`. The starting time of the sampling (default is T). t_0: A `float`. The ending time of the sampling (default is epsilon). device: A torch device. Returns: orders: A list of the solver order of each step.
309
159
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device): if order == 3: K = steps // 3 + 1 if steps % 3 == 0: orders = [3, ] * (K - 2) + [2, 1] elif steps % 3 == 1: orders = [3, ] * (K - 1) + [1] else: orders = [3, ] * (K - 1) + [2] elif order == 2: if steps % 2 == 0: K = steps // 2 orders = [2, ] * K else: K = steps // 2 + 1 orders = [2, ] * (K - 1) + [1] elif order == 1: K = 1 orders = [1, ] * steps else: raise ValueError("'order' must be '1' or '2' or '3'.") if skip_type == 'logSNR': # To reproduce the results in DPM-Solver paper timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device) else: timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[ torch.cumsum(torch.tensor([0, ] + orders)).to(device)] return timesteps_outer, orders ``` ###Assistant : Get the order of each step for sampling by the singlestep DPM-Solver. We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast". Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is: - If order == 1: We take `steps` of DPM-Solver-1 (i.e. DDIM). - If order == 2: - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling. - If steps % 2 == 0, we use K steps of DPM-Solver-2. - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1. - If order == 3: - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1. - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1. - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2. ============================================ Args: order: A `int`. The max order for the solver (2 or 3). steps: A `int`. The total number of function evaluations (NFE). skip_type: A `str`. The type for the spacing of the time steps. We support three types: - 'logSNR': uniform logSNR for the time steps. - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) t_T: A `float`. The starting time of the sampling (default is T). t_0: A `float`. The ending time of the sampling (default is epsilon). device: A torch device. Returns: orders: A list of the solver order of each step.
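A worked instance of the schedule logic above, for steps=10 and order=3 (values traced by hand from the branches):

```Python
steps = 10
K = steps // 3 + 1                 # K = 4 outer steps
# steps % 3 == 1, so three DPM-Solver-3 steps plus one DPM-Solver-1 step:
orders = [3] * (K - 1) + [1]       # [3, 3, 3, 1]
assert sum(orders) == steps        # all 10 function evaluations accounted for
```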
1,395
def _aligned_zeros(shape, dtype=float, order="C", align=None): dtype = np.dtype(dtype) if dtype == np.dtype(object): # Can't do this, fall back to standard allocation (which # should always be sufficiently aligned) if align is not None: raise ValueError("object array alignment not supported") return np.zeros(shape, dtype=dtype, order=order) if align is None: align = dtype.alignment if not hasattr(shape, '__len__'): shape = (shape,) size = functools.reduce(operator.mul, shape) * dtype.itemsize buf = np.empty(size + 2*align + 1, np.uint8) ptr = buf.__array_interface__['data'][0] offset = ptr % align if offset != 0: offset = align - offset if (ptr % (2*align)) == 0: offset += align # Note: slices producing 0-size arrays do not necessarily change # data pointer --- so we use and allocate size+1 buf = buf[offset:offset+size+1][:-1] buf.fill(0) data = np.ndarray(shape, dtype, buf, order=order) return data
Allocate a new ndarray with aligned memory. The ndarray is guaranteed *not* aligned to twice the requested alignment. Eg, if align=4, guarantees it is not aligned to 8. If align=None uses dtype.alignment.
32
129
Python
###User : Below is a Python method which does a task. Create a documentation for the below code : ```Python def _aligned_zeros(shape, dtype=float, order="C", align=None): dtype = np.dtype(dtype) if dtype == np.dtype(object): # Can't do this, fall back to standard allocation (which # should always be sufficiently aligned) if align is not None: raise ValueError("object array alignment not supported") return np.zeros(shape, dtype=dtype, order=order) if align is None: align = dtype.alignment if not hasattr(shape, '__len__'): shape = (shape,) size = functools.reduce(operator.mul, shape) * dtype.itemsize buf = np.empty(size + 2*align + 1, np.uint8) ptr = buf.__array_interface__['data'][0] offset = ptr % align if offset != 0: offset = align - offset if (ptr % (2*align)) == 0: offset += align # Note: slices producing 0-size arrays do not necessarily change # data pointer --- so we use and allocate size+1 buf = buf[offset:offset+size+1][:-1] buf.fill(0) data = np.ndarray(shape, dtype, buf, order=order) return data ``` ###Assistant : Allocate a new ndarray with aligned memory. The ndarray is guaranteed *not* aligned to twice the requested alignment. Eg, if align=4, guarantees it is not aligned to 8. If align=None uses dtype.alignment.
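A quick check of the guarantee, assuming the `_aligned_zeros` helper above (and its `numpy`/`functools`/`operator` imports) is in scope:

```Python
import numpy as np

buf = _aligned_zeros((4, 4), dtype=np.float64, align=32)
ptr = buf.__array_interface__['data'][0]
print(ptr % 32 == 0)   # True: aligned to the requested 32 bytes
print(ptr % 64 != 0)   # True: deliberately *not* aligned to 2*align
print(buf.sum())       # 0.0: memory is zero-filled
```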
1,396
def smart_resize(x, size, interpolation='bilinear'): if len(size) != 2: raise ValueError('Expected `size` to be a tuple of 2 integers, ' f'but got: {size}.') img = tf.convert_to_tensor(x) if img.shape.rank is not None: if img.shape.rank < 3 or img.shape.rank > 4: raise ValueError( 'Expected an image array with shape `(height, width, channels)`, ' 'or `(batch_size, height, width, channels)`, but ' f'got input with incorrect rank, of shape {img.shape}.') shape = tf.shape(img) height, width = shape[-3], shape[-2] target_height, target_width = size if img.shape.rank is not None: static_num_channels = img.shape[-1] else: static_num_channels = None crop_height = tf.cast( tf.cast(width * target_height, 'float32') / target_width, 'int32') crop_width = tf.cast( tf.cast(height * target_width, 'float32') / target_height, 'int32') # Set back to input height / width if crop_height / crop_width is not smaller. crop_height = tf.minimum(height, crop_height) crop_width = tf.minimum(width, crop_width) crop_box_hstart = tf.cast( tf.cast(height - crop_height, 'float32') / 2, 'int32') crop_box_wstart = tf.cast(tf.cast(width - crop_width, 'float32') / 2, 'int32') if img.shape.rank == 4: crop_box_start = tf.stack([0, crop_box_hstart, crop_box_wstart, 0]) crop_box_size = tf.stack([-1, crop_height, crop_width, -1]) else: crop_box_start = tf.stack([crop_box_hstart, crop_box_wstart, 0]) crop_box_size = tf.stack([crop_height, crop_width, -1]) img = tf.slice(img, crop_box_start, crop_box_size) img = tf.image.resize(images=img, size=size, method=interpolation) # Apparent bug in resize_images_v2 may cause shape to be lost if img.shape.rank is not None: if img.shape.rank == 4: img.set_shape((None, None, None, static_num_channels)) if img.shape.rank == 3: img.set_shape((None, None, static_num_channels)) if isinstance(x, np.ndarray): return img.numpy() return img @keras_export('keras.utils.array_to_img', 'keras.preprocessing.image.array_to_img')
Resize images to a target size without aspect ratio distortion. TensorFlow image datasets typically yield images that have each a different size. However, these images need to be batched before they can be processed by Keras layers. To be batched, images need to share the same height and width. You could simply do: ```python size = (200, 200) ds = ds.map(lambda img: tf.image.resize(img, size)) ``` However, if you do this, you distort the aspect ratio of your images, since in general they do not all have the same aspect ratio as `size`. This is fine in many cases, but not always (e.g. for GANs this can be a problem). Note that passing the argument `preserve_aspect_ratio=True` to `resize` will preserve the aspect ratio, but at the cost of no longer respecting the provided target size. Because `tf.image.resize` doesn't crop images, your output images will still have different sizes. This calls for: ```python size = (200, 200) ds = ds.map(lambda img: smart_resize(img, size)) ``` Your output images will actually be `(200, 200)`, and will not be distorted. Instead, the parts of the image that do not fit within the target size get cropped out. The resizing process is: 1. Take the largest centered crop of the image that has the same aspect ratio as the target size. For instance, if `size=(200, 200)` and the input image has size `(340, 500)`, we take a crop of `(340, 340)` centered along the width. 2. Resize the cropped image to the target size. In the example above, we resize the `(340, 340)` crop to `(200, 200)`. Args: x: Input image or batch of images (as a tensor or NumPy array). Must be in format `(height, width, channels)` or `(batch_size, height, width, channels)`. size: Tuple of `(height, width)` integer. Target size. interpolation: String, interpolation to use for resizing. Defaults to `'bilinear'`. Supports `bilinear`, `nearest`, `bicubic`, `area`, `lanczos3`, `lanczos5`, `gaussian`, `mitchellcubic`. Returns: Array with shape `(size[0], size[1], channels)`. If the input image was a NumPy array, the output is a NumPy array, and if it was a TF tensor, the output is a TF tensor.
348
228
Python
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def smart_resize(x, size, interpolation='bilinear'):
    if len(size) != 2:
        raise ValueError('Expected `size` to be a tuple of 2 integers, '
                         f'but got: {size}.')
    img = tf.convert_to_tensor(x)
    if img.shape.rank is not None:
        if img.shape.rank < 3 or img.shape.rank > 4:
            raise ValueError(
                'Expected an image array with shape `(height, width, channels)`, '
                'or `(batch_size, height, width, channels)`, but '
                f'got input with incorrect rank, of shape {img.shape}.')

    shape = tf.shape(img)
    height, width = shape[-3], shape[-2]
    target_height, target_width = size
    if img.shape.rank is not None:
        static_num_channels = img.shape[-1]
    else:
        static_num_channels = None

    crop_height = tf.cast(
        tf.cast(width * target_height, 'float32') / target_width, 'int32')
    crop_width = tf.cast(
        tf.cast(height * target_width, 'float32') / target_height, 'int32')

    # Set back to input height / width if crop_height / crop_width is not smaller.
    crop_height = tf.minimum(height, crop_height)
    crop_width = tf.minimum(width, crop_width)

    crop_box_hstart = tf.cast(
        tf.cast(height - crop_height, 'float32') / 2, 'int32')
    crop_box_wstart = tf.cast(
        tf.cast(width - crop_width, 'float32') / 2, 'int32')

    if img.shape.rank == 4:
        crop_box_start = tf.stack([0, crop_box_hstart, crop_box_wstart, 0])
        crop_box_size = tf.stack([-1, crop_height, crop_width, -1])
    else:
        crop_box_start = tf.stack([crop_box_hstart, crop_box_wstart, 0])
        crop_box_size = tf.stack([crop_height, crop_width, -1])

    img = tf.slice(img, crop_box_start, crop_box_size)
    img = tf.image.resize(images=img, size=size, method=interpolation)
    # Apparent bug in resize_images_v2 may cause shape to be lost
    if img.shape.rank is not None:
        if img.shape.rank == 4:
            img.set_shape((None, None, None, static_num_channels))
        if img.shape.rank == 3:
            img.set_shape((None, None, static_num_channels))
    if isinstance(x, np.ndarray):
        return img.numpy()
    return img
```
###Assistant : Resize images to a target size without aspect ratio distortion.

TensorFlow image datasets typically yield images that each have a different size. However, these images need to be batched before they can be processed by Keras layers. To be batched, images need to share the same height and width.

You could simply do:

```python
size = (200, 200)
ds = ds.map(lambda img: tf.image.resize(img, size))
```

However, if you do this, you distort the aspect ratio of your images, since in general they do not all have the same aspect ratio as `size`. This is fine in many cases, but not always (e.g. for GANs this can be a problem).

Note that passing the argument `preserve_aspect_ratio=True` to `resize` will preserve the aspect ratio, but at the cost of no longer respecting the provided target size. Because `tf.image.resize` doesn't crop images, your output images will still have different sizes.

This calls for:

```python
size = (200, 200)
ds = ds.map(lambda img: smart_resize(img, size))
```

Your output images will actually be `(200, 200)`, and will not be distorted. Instead, the parts of the image that do not fit within the target size get cropped out.

The resizing process is:

1. Take the largest centered crop of the image that has the same aspect ratio as the target size. For instance, if `size=(200, 200)` and the input image has size `(340, 500)`, we take a crop of `(340, 340)` centered along the width.
2. Resize the cropped image to the target size. In the example above, we resize the `(340, 340)` crop to `(200, 200)`.

Args:
    x: Input image or batch of images (as a tensor or NumPy array). Must be in format `(height, width, channels)` or `(batch_size, height, width, channels)`.
    size: Tuple of `(height, width)` integers. Target size.
    interpolation: String, interpolation to use for resizing. Defaults to `'bilinear'`. Supports `bilinear`, `nearest`, `bicubic`, `area`, `lanczos3`, `lanczos5`, `gaussian`, `mitchellcubic`.

Returns:
    Array with shape `(size[0], size[1], channels)`. If the input image was a NumPy array, the output is a NumPy array, and if it was a TF tensor, the output is a TF tensor.
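To make the two-step crop-then-resize behaviour concrete, here is a minimal sketch that re-derives the crop arithmetic above in plain NumPy and finishes with `tf.image.resize`. It assumes TensorFlow 2.x in eager mode; the image contents are made-up random pixels, and the shape matches the docstring's `(340, 500)` example.

```python
import numpy as np
import tensorflow as tf

# Hypothetical 340x500 RGB image (the shape used in the docstring's example).
img = np.random.uniform(0, 255, size=(340, 500, 3)).astype("float32")

size = (200, 200)
target_height, target_width = size
height, width = img.shape[0], img.shape[1]

# Step 1: largest centered crop with the target aspect ratio (square here),
# mirroring the crop_height / crop_width arithmetic in smart_resize above.
crop_height = min(height, int(width * target_height / target_width))  # min(340, 500) -> 340
crop_width = min(width, int(height * target_width / target_height))   # min(500, 340) -> 340
h0 = (height - crop_height) // 2                                      # 0
w0 = (width - crop_width) // 2                                        # 80: crop centered along the width
cropped = img[h0:h0 + crop_height, w0:w0 + crop_width, :]             # shape (340, 340, 3)

# Step 2: resize the crop to the target size.
out = tf.image.resize(cropped, size, method="bilinear").numpy()
print(out.shape)  # (200, 200, 3), undistorted
```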
1,397
def gather(tensor):
    if AcceleratorState().distributed_type == DistributedType.TPU:
        return _tpu_gather(tensor, name="accelerate.utils.gather")
    elif AcceleratorState().distributed_type in [DistributedType.DEEPSPEED, DistributedType.MULTI_GPU]:
        return _gpu_gather(tensor)
    elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:
        return _cpu_gather(tensor)
    else:
        return tensor
Recursively gather the tensors in a nested list/tuple/dictionary of tensors from all devices.

Args:
    tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`): The data to gather.

Returns:
    The same data structure as :obj:`tensor` with all tensors sent to the proper device.
37
25
Python
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def gather(tensor):
    if AcceleratorState().distributed_type == DistributedType.TPU:
        return _tpu_gather(tensor, name="accelerate.utils.gather")
    elif AcceleratorState().distributed_type in [DistributedType.DEEPSPEED, DistributedType.MULTI_GPU]:
        return _gpu_gather(tensor)
    elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:
        return _cpu_gather(tensor)
    else:
        return tensor
```
###Assistant : Recursively gather the tensors in a nested list/tuple/dictionary of tensors from all devices.

Args:
    tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`): The data to gather.

Returns:
    The same data structure as :obj:`tensor` with all tensors sent to the proper device.
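As a usage sketch: when a script is launched under `accelerate launch` with several processes, each worker contributes its local tensor and every worker receives the concatenation; on a single process the call is a no-op. The `accelerate.utils.gather` import path is an assumption about where the function above is exposed.

```python
import torch
from accelerate import Accelerator
from accelerate.utils import gather  # assumed import path for the function above

accelerator = Accelerator()

# Each process holds one value; tensors must share a shape across processes.
local = torch.tensor([accelerator.process_index], device=accelerator.device)

all_values = gather(local)
print(all_values)  # e.g. tensor([0, 1, 2, 3]) on a 4-process run
```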
1,398
def speed_metrics(split, start_time, num_samples=None, num_steps=None):
    runtime = time.time() - start_time
    result = {f"{split}_runtime": round(runtime, 4)}
    if num_samples is not None:
        samples_per_second = num_samples / runtime
        result[f"{split}_samples_per_second"] = round(samples_per_second, 3)
    if num_steps is not None:
        steps_per_second = num_steps / runtime
        result[f"{split}_steps_per_second"] = round(steps_per_second, 3)
    return result
Measure and return speed performance metrics.

This function requires a time snapshot `start_time` before the operation to be measured starts, and it should be run immediately after the operation to be measured has completed.

Args:
- split: name to prefix metric (like train, eval, test...)
- start_time: operation start time
- num_samples: number of samples processed
- num_steps: number of steps processed
57
45
Python
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def speed_metrics(split, start_time, num_samples=None, num_steps=None):
    runtime = time.time() - start_time
    result = {f"{split}_runtime": round(runtime, 4)}
    if num_samples is not None:
        samples_per_second = num_samples / runtime
        result[f"{split}_samples_per_second"] = round(samples_per_second, 3)
    if num_steps is not None:
        steps_per_second = num_steps / runtime
        result[f"{split}_steps_per_second"] = round(steps_per_second, 3)
    return result
```
###Assistant : Measure and return speed performance metrics.

This function requires a time snapshot `start_time` before the operation to be measured starts, and it should be run immediately after the operation to be measured has completed.

Args:
- split: name to prefix metric (like train, eval, test...)
- start_time: operation start time
- num_samples: number of samples processed
- num_steps: number of steps processed
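A quick illustration of the returned dictionary, assuming `speed_metrics` as defined above is in scope (in transformers it lives in `trainer_utils`); the workload counts are made up and `time.sleep` stands in for the measured operation.

```python
import time

start_time = time.time()
time.sleep(0.5)  # stand-in for the real work being measured
metrics = speed_metrics("eval", start_time, num_samples=1000, num_steps=125)
print(metrics)
# e.g. {'eval_runtime': 0.5005, 'eval_samples_per_second': 1998.0,
#       'eval_steps_per_second': 249.75}
```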
1,399
def parse_date_fields(year_col, month_col, day_col) -> npt.NDArray[np.object_]:
    warnings.warn(
        'Use pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) '
        "instead to get a Pandas Series.\n"
        'Use ser = pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) and\n'
        "np.array([s.to_pydatetime() for s in ser]) instead to get a Numpy array.",  # noqa: E501
        FutureWarning,
        stacklevel=find_stack_level(),
    )
    year_col = _maybe_cast(year_col)
    month_col = _maybe_cast(month_col)
    day_col = _maybe_cast(day_col)
    return parsing.try_parse_year_month_day(year_col, month_col, day_col)
Parse columns with years, months and days into a single date column.

.. deprecated:: 1.2
    Use pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) instead to get a Pandas Series.
    Use ser = pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) and np.array([s.to_pydatetime() for s in ser]) instead to get a Numpy array.
49
27
Python
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def parse_date_fields(year_col, month_col, day_col) -> npt.NDArray[np.object_]:
    warnings.warn(
        'Use pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) '
        "instead to get a Pandas Series.\n"
        'Use ser = pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) and\n'
        "np.array([s.to_pydatetime() for s in ser]) instead to get a Numpy array.",  # noqa: E501
        FutureWarning,
        stacklevel=find_stack_level(),
    )
    year_col = _maybe_cast(year_col)
    month_col = _maybe_cast(month_col)
    day_col = _maybe_cast(day_col)
    return parsing.try_parse_year_month_day(year_col, month_col, day_col)
```
###Assistant : Parse columns with years, months and days into a single date column.

.. deprecated:: 1.2
    Use pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) instead to get a Pandas Series.
    Use ser = pd.to_datetime({"year": year_col, "month": month_col, "day": day_col}) and np.array([s.to_pydatetime() for s in ser]) instead to get a Numpy array.
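Since the function is deprecated, here is a short sketch of the replacement its warning recommends, using made-up column values:

```python
import numpy as np
import pandas as pd

year_col, month_col, day_col = [2020, 2021], [1, 6], [15, 30]

# Recommended replacement: a single datetime Series...
ser = pd.to_datetime({"year": year_col, "month": month_col, "day": day_col})

# ...or, to match the old object-dtype NumPy array output:
arr = np.array([s.to_pydatetime() for s in ser])
print(ser)
print(arr)  # [datetime.datetime(2020, 1, 15, 0, 0) datetime.datetime(2021, 6, 30, 0, 0)]
```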