Title: --install-hook not working on windows
Body: `ploomber nb --install-hook` is not working on windows, [I found here](https://stackoverflow.com/questions/20609816/git-pre-commit-hook-is-not-running-on-windows) that the problem may be the shebang but I followed the recommendations and the CI is still broken.
Does anyone with a windows machine want to give it a try?
Tasks:
- [ ] Fix git hook
- [ ] remove xfail flag in the unit test
- [ ] delete warning in the documentation saying it doesn't work on windows
Code: https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/nb.py
Docs: https://github.com/ploomber/ploomber/blob/master/doc/user-guide/editors.rst#using-git-hooks | 0easy
|
Title: I want to use the 'response_class' property.
Body: Hello.
Looking at `CRUDGenerator`'s `__init__()` arguments, I noticed that there is no property for specifying the response_class.
I want to use a fast JSON conversion library like [ORJson](https://github.com/ijl/orjson). What should I do? | 0easy
|
Title: Fix `get_stacktrace`
Body: ### Summary
The result of `traceback.format_exception` already includes newlines. We don't need to add them.
```diff
diff --git a/mlflow/utils/exception_utils.py b/mlflow/utils/exception_utils.py
index a73f266c3..8f73853db 100644
--- a/mlflow/utils/exception_utils.py
+++ b/mlflow/utils/exception_utils.py
@@ -9,6 +9,6 @@ def get_stacktrace(error):
tb = traceback.format_exception(error.__class__, error, error.__traceback__)
else:
tb = traceback.format_exception(error)
- return (msg + "\n\n".join(tb)).strip()
+ return (msg + "".join(tb)).strip()
except Exception:
return msg
```
The message generated by this function currently looks like this:
```
AttributeError("'list' object has no attribute 'columns'")Traceback (most recent call last):
File "/Users/harutaka.kawamura/Desktop/repositories/mlflow/mlflow/utils/_capture_modules.py", line 166, in load_model_and_predict
model.predict(input_example, params=params)
File "/Users/harutaka.kawamura/Desktop/repositories/mlflow/mlflow/openai/__init__.py", line 785, in predict
return self._predict_chat(data, params or {})
File "/Users/harutaka.kawamura/Desktop/repositories/mlflow/mlflow/openai/__init__.py", line 703, in _predict_chat
messages_list = self.format_completions(self.get_params_list(data))
File "/Users/harutaka.kawamura/Desktop/repositories/mlflow/mlflow/openai/__init__.py", line 665, in get_params_list
if variable in data.columns:
```
but it should look like:
```
AttributeError("'list' object has no attribute 'columns'")
Traceback (most recent call last):
File "/Users/harutaka.kawamura/Desktop/repositories/mlflow/mlflow/utils/_capture_modules.py", line 166, in load_model_and_predict
model.predict(input_example, params=params)
File "/Users/harutaka.kawamura/Desktop/repositories/mlflow/mlflow/openai/__init__.py", line 785, in predict
return self._predict_chat(data, params or {})
File "/Users/harutaka.kawamura/Desktop/repositories/mlflow/mlflow/openai/__init__.py", line 703, in _predict_chat
messages_list = self.format_completions(self.get_params_list(data))
File "/Users/harutaka.kawamura/Desktop/repositories/mlflow/mlflow/openai/__init__.py", line 665, in get_params_list
if variable in data.columns:
```
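The claim is easy to verify with the stdlib alone (a minimal hypothetical reproduction, independent of mlflow):

```python
import traceback

try:
    [].columns  # AttributeError: 'list' object has no attribute 'columns'
except AttributeError as err:
    tb = traceback.format_exception(err.__class__, err, err.__traceback__)

# Every formatted element already ends with "\n", so a plain "".join gives
# one frame per line, while "\n\n".join would insert spurious blank lines.
assert all(line.endswith("\n") for line in tb)
assert "\n\n" not in "".join(tb)
```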
### Notes
- Make sure to open a PR from a **non-master** branch.
- Sign off the commit using the `-s` flag when making a commit:
```sh
git commit -s -m "..."
# ^^ make sure to use this
```
- Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
| 0easy
|
Title: Add docstrings for functions in utils.py
Body: Add docstrings for the two helper functions: `random_varname` and `all_subclasses`. | 0easy
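A hedged sketch of what such docstrings could look like. The function bodies below are illustrative guesses at typical implementations, not the project's actual code; only the two names come from the issue:

```python
import random
import string

def random_varname(length: int = 8) -> str:
    """Return a random lowercase ASCII identifier of the given length."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

def all_subclasses(cls: type) -> set:
    """Return every direct and indirect subclass of ``cls`` as a set."""
    direct = cls.__subclasses__()
    return set(direct).union(*(all_subclasses(c) for c in direct))

# Usage sketch
class Base: ...
class Child(Base): ...
class Grandchild(Child): ...

assert all_subclasses(Base) == {Child, Grandchild}
assert len(random_varname(5)) == 5
```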
|
Title: bug: KMS get-key-rotation-status output not consistent with official AWS cli
Body: ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The KMS `get-key-rotation-status` output returns a JSON like
```json
{
"KeyRotationEnabled": true
}
```
### Expected Behavior
According to the [official documentation](https://docs.aws.amazon.com/cli/latest/reference/kms/get-key-rotation-status.html#examples), the output structure is as follows:
```json
{
"KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",
"KeyRotationEnabled": true,
"NextRotationDate": "2024-02-14T18:14:33.587000+00:00",
"RotationPeriodInDays": 365
}
```
The impact is a perpetual drift in my Terraform configuration: because the rotation period is not present in the output, Terraform keeps reconciling that setting in a loop.
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
With localstack running on another shell, run the script below:
```bash
export AWS_ACCESS_KEY_ID="test"
export AWS_SECRET_ACCESS_KEY="test"
export AWS_REGION="us-east-1"
export AWS_ENDPOINT_URL="http://localhost:4566"
keyId=$(aws --endpoint-url="$AWS_ENDPOINT_URL" kms create-key --description "test" --query KeyMetadata.KeyId --output text)
aws --endpoint-url="$AWS_ENDPOINT_URL" kms enable-key-rotation --key-id "$keyId" --rotation-period-in-days 120 --output json --no-cli-pager
aws --endpoint-url="$AWS_ENDPOINT_URL" kms get-key-rotation-status --key-id "$keyId" --output json --no-cli-pager
```
### Environment
```markdown
- OS: macOS
- LocalStack:
version: 3.6.1.dev
build date: 2024-08-16
build git hash: 1fafd6da1
```
### Anything else?
_No response_ | 0easy
|
Title: About init_weights
Body: Hello. In rpn.py under the head directory, the init_weights function is imported but never used. Why doesn't the network after the backbone need its parameters initialized before training? | 0easy
|
Title: Deprecate `name` argument of `TestSuite.from_model`
Body: It's a bit odd to be able to configure the name of the created suite when using `TestSuite.from_model` but no other attributes like `doc`. It doesn't make sense to support configuring the others, because it's easy to do that after the suite is created, either by setting attributes normally or by using the convenient `TestSuite.config` method like `TestSuite.from_model(model).config(name='X', doc='Y')`. Let's deprecate the name argument to make the API more uniform. | 0easy
|
Title: Remove usage of buffer(0) as trick to make geometries valid
Body: In our overlay code we have some places where we ensure geometries are valid. In the first place the input geometries, but for difference and intersection also the resulting geometries.
Historically we have been doing that with the `buffer(0)` trick, but in https://github.com/geopandas/geopandas/pull/2939 we already replaced one such example with `make_valid()` instead, which is faster and more robust, and just meant for making geometries valid, instead of being a trick.
But we have some remaining cases:
Making the input valid:
https://github.com/geopandas/geopandas/blob/321df2b588a41e8e87f740e828b65c2a87cd9542/geopandas/tools/overlay.py#L293-L307
The intersection result:
https://github.com/geopandas/geopandas/blob/321df2b588a41e8e87f740e828b65c2a87cd9542/geopandas/tools/overlay.py#L37-L39
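The replacement pattern can be sketched with shapely directly (assuming shapely >= 1.8, where `make_valid` is available; this is an illustration, not the geopandas code):

```python
from shapely.geometry import Polygon
from shapely.validation import make_valid

# A classic "bowtie" polygon: its edges self-intersect, so it is invalid.
bowtie = Polygon([(0, 0), (2, 2), (2, 0), (0, 2)])
assert not bowtie.is_valid

fixed_trick = bowtie.buffer(0)    # the historical trick: round-trip through buffer
fixed_clean = make_valid(bowtie)  # explicit: just makes the geometry valid

assert fixed_trick.is_valid
assert fixed_clean.is_valid
```

Note that `buffer(0)` can silently drop parts of a self-intersecting polygon, while `make_valid` keeps all lobes (typically as a MultiPolygon), which is part of why it is more robust.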
| 0easy
|
Title: Make changelog browsable
Body: Currently Splinter changelog is available in the Sphinx documentation https://github.com/cobrateam/splinter/tree/master/docs/news
However, the changelog is not available in the sidebar http://splinter.readthedocs.io/en/latest/ nor is it easily discoverable by users.
I recommend adding a Changelog page in the documentation that indexes all individual changelog entries, making them easily available in the user-browsable Sphinx documentation.
If you have nothing against this I can make a PR. | 0easy
|
Title: How to add ICP filing information to the page footer
Body: How can I add ICP filing (备案) information to the bottom of the page? | 0easy
|
Title: `(cd subdir && ls)` gives error
Body: <!--- Provide a general summary of the issue in the Title above -->
<!--- If you have a question along the lines of "How do I do this Bash command in xonsh"
please first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html
If you don't find an answer there, please do open an issue! -->
## xonfig
<details>
```
$ xonfig
+------------------+-----------------+
| xonsh | 0.14.3 |
| Python | 3.11.6 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.41 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.17.2 |
| on posix | True |
| on linux | False |
| on darwin | True |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file | [] |
+------------------+-----------------+
```
</details>
## Expected Behavior
`(cd subdir && ls)` should print out the contents of `subdir` assuming a directory by that name exists, without prefixing the names by `subdir/`. If I understand the semantics of `()` correctly, the shell's cwd should remain unchanged.
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
<!--- If part of your bug report is a traceback, please first enter debug mode before triggering the error
To enter debug mode, set the environment variable `XONSH_DEBUG=1` _before_ starting `xonsh`.
On Linux and OSX, an easy way to do this is to run `env XONSH_DEBUG=1 xonsh` -->
xonsh says `cd: no such file or directory: subdir` and skips the `ls`. But the shell's cwd is switched to `subdir` anyway.
### Traceback (if applicable)
N/A
## Steps to Reproduce
<!--- Please try to write out a minimal reproducible snippet to trigger the bug, it will help us fix it! -->
Start up xonsh and type
```xsh
mkdir subdir
touch subdir/file-a
(cd subdir && ls)
```
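For reference, the POSIX-shell semantics the report expects can be demonstrated outside xonsh (plain `sh`; the temp-dir setup below is just for illustration):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir subdir
touch subdir/file-a

# Parentheses start a subshell: the cd is confined to it, so ls runs inside
# subdir (names printed without a subdir/ prefix) and the parent cwd survives.
out=$( (cd subdir && ls) )
[ "$out" = "file-a" ]
[ "$(pwd)" = "$tmp" ]
```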
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Translations needed for comparison functionality messages
Body: We are looking for contributors to help translate the following scan-comparison-related messages into various languages. If you are a native speaker of any of the languages listed below, please consider contributing a translation for the following messages (no Google Translate please, only native-speaker translations):
```
compare_report_path_filename: "the file-path to store the compare_scan report"
no_scan_to_compare: "the scan_id to be compared not found"
compare_report_saved: "compare results saved in {0}"
build_compare_report: "building compare report"
finish_build_report: "Finished building compare report"
```
Languages needed:
~~Arabic (ar.yaml)~~
~~Bengali (bn.yaml)~~
German (de.yaml)
Greek (el.yaml)
Spanish (es.yaml)
Persian (fa.yaml)
French (fr.yaml)
~~Hindi (hi.yaml)~~
Armenian (hy.yaml)
Indonesian (id.yaml)
Italian (it.yaml)
Hebrew (iw.yaml)
Japanese (ja.yaml)
~~Korean (ko.yaml)~~
Dutch (nl.yaml)
Pashto (ps.yaml)
Portuguese (Brazil) (pt-br.yaml)
Russian (ru.yaml)
Turkish (tr.yaml)
~~Urdu (ur.yaml)~~
Vietnamese (vi.yaml)
~~Chinese (Simplified) (zh-cn.yaml)~~
These YAML files are located here: https://github.com/OWASP/Nettacker/tree/master/nettacker/locale
Feel free to submit a pull request with your translations. Thank you for your contributions!
| 0easy
|
Title: Update default line width to `2` for consistency with Plotly Express
Body: | 0easy
|
Title: Add normalisation options
Body: Hi!
First, thanks for this library which is really awesome!
I have a dataset where I'm analyzing Premier League football since 1992. I want to plot a ridge plot with each subplot being Yellow_card, red_card, goal, penalty, etc., and see the distribution over a 90-minute period to see when these actions happen the most. Of course, there are way more yellow cards than goals, and way more goals than red cards, which results in a plot like this:
<img src="https://i.ibb.co/SVCwQrG/Screenshot-2021-07-13-at-01-29-37.png" alt="Screenshot-2021-07-13-at-01-29-37" border="0">
Is there a way to see everything? I checked the parameters and the code, but I'm not good enough at Python to understand everything yet...
Any help? | 0easy
|
Title: can't make selection for DatePickerRange when using placeholder_text
Body: no selection option will appear when using `start_date_placeholder_text` or `end_date_placeholder_text` with `dcc.DatePickerRange`
See doc page for example: [https://dash.plot.ly/dash-core-components/datepickerrange](https://dash.plot.ly/dash-core-components/datepickerrange) | 0easy
|
Title: Run the benchmark with GPT 3.5 over different `--steps-config` options
Body: We have scripts/benchmark.py.
If we run it over more configs and store the results to RESULTS.md we will clearly be able to see what works and what does not.
Would also be great to let the script ask for “did it work?” after each run and record the output to a markdown table like benchmark/RESULTS.md (and maybe append it with some metadata to that file!) | 0easy
|
Title: Marketplace - Change the buttons so that they have the same color as the arrows
Body:
### Describe your issue.
For the arrow on the left: when a user first opens the page, the default state is that the left button should be disabled (grey colored) and the right button should be enabled. Basically, if there are no more cards on the left side for the user to navigate to, the button should be disabled.
<img width="1387" alt="Screenshot 2024-12-13 at 17 10 22" src="https://github.com/user-attachments/assets/eba087f5-8ab4-447e-8993-4d2e1534bf6a" />
Please check out this video to see how the buttons should interact. Can we do the same exact behaviors for our buttons?
The same hover states, disabled states, default state and click state as these buttons?
https://github.com/user-attachments/assets/bdfd141c-54ac-46aa-a30b-81fa774e68d0
| 0easy
|
Title: Simplify PrepromptsHolder to just be a function
Body: Currently preprompt holders is this:
```
class PrepromptsHolder:
    def __init__(self, preprompts_path: Path):
        self.preprompts_path = preprompts_path

    def get_preprompts(self) -> Dict[str, str]:
        preprompts_repo = DiskMemory(self.preprompts_path)
        return {file_name: preprompts_repo[file_name] for file_name in preprompts_repo}
```
We could change it to just a function:
```
def get_preprompts(preprompts_path: Path) -> Dict[str, str]:
    return dict(DiskMemory(preprompts_path).items())
```
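A self-contained sketch of the proposed function, with a plain directory read standing in for `DiskMemory` (the stand-in is an assumption; the real class lives in gpt-engineer):

```python
from pathlib import Path
from typing import Dict
import tempfile

def get_preprompts(preprompts_path: Path) -> Dict[str, str]:
    # One entry per file in the preprompts directory, keyed by file name,
    # mirroring what iterating a DiskMemory would yield.
    return {p.name: p.read_text() for p in preprompts_path.iterdir() if p.is_file()}

# Usage sketch with a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "roadmap").write_text("Think step by step.")
    preprompts = get_preprompts(Path(d))
    assert preprompts == {"roadmap": "Think step by step."}
```

Dropping the class removes a level of indirection: call sites pass the path directly instead of threading a holder object around.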
| 0easy
|
Title: Allow model selection for /index and /search
Body: Add a command to set the model to be used for all /index requests and all /search requests. There should be two separate commands to set the models for each, the model should persist between restarts, the default model should be davinci003 and the other models should be the text models available on the openai website | 0easy
|
Title: [BUG] Can't access underlying models in multi-quantile regression
Body: **Describe the bug**
I'm using a multi-quantile forecaster on multivariate target data. E.g, a `CatBoostModel(likelihood='quantile', quantile=[0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99], ...)`.
Darts fits a separate GBT model for each quantile level and each target component. However, these aren't accessible to me.
Suppose my target series has two components and my `CatBoostModel` is called `model`. Then `model.model.estimators_` returns only 2 models (corresponding to quantile 0.99 for each component).
This means that `model.get_multioutput_estimator` and `model.get_estimator` are incapable of returning estimators for any quantile other than 0.99.
Trying to access the models using `model.model` or `model._model_container` or `model._model_container[0.5]` all give a similar error:
```python
{
"name": "RuntimeError",
"message": "scikit-learn estimators should always specify their parameters in the signature of their __init__ (no varargs). <class 'darts.utils.multioutput.MultiOutputRegressor'> with constructor (self, *args, eval_set_name: Optional[str] = None, eval_weight_name: Optional[str] = None, **kwargs) doesn't follow this convention.",
"stack": "---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
File ~/projects/equilibrium/helios/.venv/lib/python3.11/site-packages/IPython/core/formatters.py:347, in BaseFormatter.__call__(self, obj)
345 method = get_real_method(obj, self.print_method)
346 if method is not None:
--> 347 return method()
348 return None
349 else:
File ~/projects/equilibrium/helios/.venv/lib/python3.11/site-packages/sklearn/base.py:693, in BaseEstimator._repr_html_inner(self)
688 def _repr_html_inner(self):
689 \"\"\"This function is returned by the @property `_repr_html_` to make
690 `hasattr(estimator, \"_repr_html_\") return `True` or `False` depending
691 on `get_config()[\"display\"]`.
692 \"\"\"
--> 693 return estimator_html_repr(self)
File ~/projects/equilibrium/helios/.venv/lib/python3.11/site-packages/sklearn/utils/_estimator_html_repr.py:363, in estimator_html_repr(estimator)
361 style_template = Template(_CSS_STYLE)
362 style_with_id = style_template.substitute(id=container_id)
--> 363 estimator_str = str(estimator)
365 # The fallback message is shown by default and loading the CSS sets
366 # div.sk-text-repr-fallback to display: none to hide the fallback message.
367 #
(...)
372 # The reverse logic applies to HTML repr div.sk-container.
373 # div.sk-container is hidden by default and the loading the CSS displays it.
374 fallback_msg = (
375 \"In a Jupyter environment, please rerun this cell to show the HTML\"
376 \" representation or trust the notebook. <br />On GitHub, the\"
377 \" HTML representation is unable to render, please try loading this page\"
378 \" with nbviewer.org.\"
379 )
File ~/projects/equilibrium/helios/.venv/lib/python3.11/site-packages/sklearn/base.py:315, in BaseEstimator.__repr__(self, N_CHAR_MAX)
307 # use ellipsis for sequences with a lot of elements
308 pp = _EstimatorPrettyPrinter(
309 compact=True,
310 indent=1,
311 indent_at_name=True,
312 n_max_elements_to_show=N_MAX_ELEMENTS_TO_SHOW,
313 )
--> 315 repr_ = pp.pformat(self)
317 # Use bruteforce ellipsis when there are a lot of non-blank characters
318 n_nonblank = len(\"\".join(repr_.split()))
File ~/.pyenv/versions/3.11.10/lib/python3.11/pprint.py:161, in PrettyPrinter.pformat(self, object)
159 def pformat(self, object):
160 sio = _StringIO()
--> 161 self._format(object, sio, 0, 0, {}, 0)
162 return sio.getvalue()
File ~/.pyenv/versions/3.11.10/lib/python3.11/pprint.py:178, in PrettyPrinter._format(self, object, stream, indent, allowance, context, level)
176 self._readable = False
177 return
--> 178 rep = self._repr(object, context, level)
179 max_width = self._width - indent - allowance
180 if len(rep) > max_width:
File ~/.pyenv/versions/3.11.10/lib/python3.11/pprint.py:458, in PrettyPrinter._repr(self, object, context, level)
457 def _repr(self, object, context, level):
--> 458 repr, readable, recursive = self.format(object, context.copy(),
459 self._depth, level)
460 if not readable:
461 self._readable = False
File ~/projects/equilibrium/helios/.venv/lib/python3.11/site-packages/sklearn/utils/_pprint.py:189, in _EstimatorPrettyPrinter.format(self, object, context, maxlevels, level)
188 def format(self, object, context, maxlevels, level):
--> 189 return _safe_repr(
190 object, context, maxlevels, level, changed_only=self._changed_only
191 )
File ~/projects/equilibrium/helios/.venv/lib/python3.11/site-packages/sklearn/utils/_pprint.py:440, in _safe_repr(object, context, maxlevels, level, changed_only)
438 recursive = False
439 if changed_only:
--> 440 params = _changed_params(object)
441 else:
442 params = object.get_params(deep=False)
File ~/projects/equilibrium/helios/.venv/lib/python3.11/site-packages/sklearn/utils/_pprint.py:93, in _changed_params(estimator)
89 def _changed_params(estimator):
90 \"\"\"Return dict (param_name: value) of parameters that were given to
91 estimator with non-default values.\"\"\"
---> 93 params = estimator.get_params(deep=False)
94 init_func = getattr(estimator.__init__, \"deprecated_original\", estimator.__init__)
95 init_params = inspect.signature(init_func).parameters
File ~/projects/equilibrium/helios/.venv/lib/python3.11/site-packages/sklearn/base.py:243, in BaseEstimator.get_params(self, deep)
228 \"\"\"
229 Get parameters for this estimator.
230
(...)
240 Parameter names mapped to their values.
241 \"\"\"
242 out = dict()
--> 243 for key in self._get_param_names():
244 value = getattr(self, key)
245 if deep and hasattr(value, \"get_params\") and not isinstance(value, type):
File ~/projects/equilibrium/helios/.venv/lib/python3.11/site-packages/sklearn/base.py:217, in BaseEstimator._get_param_names(cls)
215 for p in parameters:
216 if p.kind == p.VAR_POSITIONAL:
--> 217 raise RuntimeError(
218 \"scikit-learn estimators should always \"
219 \"specify their parameters in the signature\"
220 \" of their __init__ (no varargs).\"
221 \" %s with constructor %s doesn't \"
222 \" follow this convention.\" % (cls, init_signature)
223 )
224 # Extract and sort argument names excluding 'self'
225 return sorted([p.name for p in parameters])
RuntimeError: scikit-learn estimators should always specify their parameters in the signature of their __init__ (no varargs). <class 'darts.utils.multioutput.MultiOutputRegressor'> with constructor (self, *args, eval_set_name: Optional[str] = None, eval_weight_name: Optional[str] = None, **kwargs) doesn't follow this convention."
}
```
**To Reproduce**
```python
import numpy as np
import pandas as pd
from darts import TimeSeries
from darts.models import CatBoostModel
from darts.utils.timeseries_generation import linear_timeseries
# Generate a synthetic multivariate time series
np.random.seed(42)
series1 = linear_timeseries(length=100, start_value=0, end_value=10)
series2 = linear_timeseries(length=100, start_value=10, end_value=0)
multivariate_series = series1.stack(series2)
# Define future covariates (optional, here just using a simple linear trend)
future_covariates = linear_timeseries(length=100, start_value=0, end_value=5)
# Initialize the CatBoostModel with quantile regression
model = CatBoostModel(
lags=12,
lags_future_covariates=[0],
likelihood='quantile',
quantiles=[0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99],
random_state=42
)
# Fit the model
model.fit(multivariate_series, future_covariates=future_covariates)
# This is only 2 instead of 2 * 9
len(model.model.estimators_)
# Both of these give the above error
model.model
model._model_container
```
**Expected behavior**
Ability to access all the underlying estimators.
**System (please complete the following information):**
- Python version: 3.11.10
- darts version: 0.31.0
| 0easy
|
Title: Tox 4 trying to reuse env where it didn't in tox 3
Body: ## Issue
Given this config:
```ini
[testenv:black]
deps =
black==23.3.0
commands = {envpython} -m black . --check
[testenv:tip-black]
deps = black
commands = {[testenv:black]commands}
```
in tox 3.28.0, both executions (via `tox -e <env>`) worked and ran in different environments. We'd get the following line in the output from the `black` env:
```
black run-test: commands[0] | /home/james/c/tox_test/.tox/black/bin/python -m black . --check
```
vs the following in the `tip-black` env:
```
tip-black run-test: commands[0] | /home/james/c/tox_test/.tox/tip-black/bin/python -m black . --check
```
Notice the path differences.
However, in tox 4.6.4, I get this for `black`:
```
black: commands[0]> .tox/black/bin/python -m black . --check
```
and an error for `tip-black`:
```
tip-black: commands[0]> .tox/black/bin/python -m black . --check
tip-black: failed with /home/james/c/tox_test/.tox/black/bin/python (resolves to /home/james/c/tox_test/.tox/black/bin/python) is not allowed, use allowlist_externals to allow it
```
Why is it trying to reuse the `black` environment for `tip-black`? It didn't do that in tox 3.
## Environment
- OS: Ubuntu 23.04
## Full outputs
tox 3.28.0, `black` env:
```console
GLOB sdist-make: /home/james/c/tox_test/setup.py
black inst-nodeps: /home/james/c/tox_test/.tox/.tmp/package/1/UNKNOWN-0.0.0.zip
black installed: black==23.3.0,click==8.1.6,importlib-metadata==6.7.0,mypy-extensions==1.0.0,packaging==23.1,pathspec==0.11.2,platformdirs==3.10.0,tomli==2.0.1,typed-ast==1.5.5,typing_extensions==4.7.1,UNKNOWN @ file:///home/james/c/tox_test/.tox/.tmp/package/1/UNKNOWN-0.0.0.zip,zipp==3.15.0
black run-test-pre: PYTHONHASHSEED='772606141'
black run-test: commands[0] | /home/james/c/tox_test/.tox/black/bin/python -m black . --check
All done! ✨ 🍰 ✨
1 file would be left unchanged.
___________________________________ summary ____________________________________
black: commands succeeded
congratulations :)
```
tox 3.28.0, `tip-black` env:
```console
GLOB sdist-make: /home/james/c/tox_test/setup.py
tip-black create: /home/james/c/tox_test/.tox/tip-black
tip-black installdeps: black
tip-black inst: /home/james/c/tox_test/.tox/.tmp/package/1/UNKNOWN-0.0.0.zip
tip-black installed: black==23.3.0,click==8.1.6,importlib-metadata==6.7.0,mypy-extensions==1.0.0,packaging==23.1,pathspec==0.11.2,platformdirs==3.10.0,tomli==2.0.1,typed-ast==1.5.5,typing_extensions==4.7.1,UNKNOWN @ file:///home/james/c/tox_test/.tox/.tmp/package/1/UNKNOWN-0.0.0.zip,zipp==3.15.0
tip-black run-test-pre: PYTHONHASHSEED='1485184410'
tip-black run-test: commands[0] | /home/james/c/tox_test/.tox/tip-black/bin/python -m black . --check
All done! ✨ 🍰 ✨
1 file would be left unchanged.
___________________________________ summary ____________________________________
tip-black: commands succeeded
congratulations :)
```
tox 4.6.4, `black` env:
```console
black: install_deps> python -I -m pip install black==23.3.0
.pkg: install_requires> python -I -m pip install 'setuptools>=40.8.0' wheel
.pkg: _optional_hooks> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: get_requires_for_build_sdist> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: get_requires_for_build_wheel> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: install_requires_for_build_wheel> python -I -m pip install wheel
.pkg: prepare_metadata_for_build_wheel> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: build_sdist> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
black: install_package> python -I -m pip install --force-reinstall --no-deps /home/james/c/tox_test/.tox/.tmp/package/1/UNKNOWN-0.0.0.tar.gz
black: commands[0]> .tox/black/bin/python -m black . --check
.pkg: _exit> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
black: OK (2.56=setup[2.49]+cmd[0.07] seconds)
congratulations :) (2.60 seconds)
```
tox 4.6.4, `tip-black` env:
```console
tip-black: install_deps> python -I -m pip install black
.pkg: _optional_hooks> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: get_requires_for_build_sdist> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: get_requires_for_build_wheel> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: prepare_metadata_for_build_wheel> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: build_sdist> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
tip-black: install_package> python -I -m pip install --force-reinstall --no-deps /home/james/c/tox_test/.tox/.tmp/package/2/UNKNOWN-0.0.0.tar.gz
tip-black: commands[0]> .tox/black/bin/python -m black . --check
tip-black: failed with /home/james/c/tox_test/.tox/black/bin/python (resolves to /home/james/c/tox_test/.tox/black/bin/python) is not allowed, use allowlist_externals to allow it
.pkg: _exit> python /home/james/.pyenv/versions/3.7.16/envs/cloud-init37/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
tip-black: FAIL code 1 (1.39 seconds)
evaluation failed :( (1.43 seconds)
``` | 0easy
|
Title: Error when using FFT on data with one point
Body: Activating the FFT transform on data with one point raises the following error:
> File ~\anaconda3\lib\site-packages\pyqtgraph\graphicsItems\PlotDataItem.py:1190 in _fourierTransform
uniform = not np.any(np.abs(dx-dx[0]) > (abs(dx[0]) / 1000.))
IndexError: index 0 is out of bounds for axis 0 with size 0
This occurs because `dx = np.diff(x)` is empty when x is of size 1.
A possible solution is to add the condition `if len(x) == 1: return np.array([0]), abs(y)` at the beginning of `_fourierTransform` in PlotDataItem.py.
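The guard can be sketched in a simplified stand-in for `PlotDataItem._fourierTransform` (illustrative, not the actual pyqtgraph code):

```python
import numpy as np

def fourier_transform(x, y):
    # Guard for single-point data: np.diff(x) would be empty, so dx[0] below
    # would raise IndexError. Return the zero-frequency bin directly instead.
    if len(x) == 1:
        return np.array([0.0]), np.abs(np.asarray(y, dtype=float))
    dx = np.diff(x)
    uniform = not np.any(np.abs(dx - dx[0]) > (abs(dx[0]) / 1000.0))
    if not uniform:
        # Resample onto a uniform grid before the FFT.
        x2 = np.linspace(x[0], x[-1], len(x))
        y = np.interp(x2, x, y)
        x = x2
    f = np.fft.rfft(y) / len(y)
    freqs = np.fft.rfftfreq(len(y), d=float(x[1] - x[0]))
    return freqs, np.abs(f)

freqs, mag = fourier_transform(np.array([0.0]), np.array([3.0]))
# Single point: freqs == [0.0], mag == [3.0], and no IndexError is raised.
```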
* PyQtGraph version: tested on 0.13.3, but the same error will also be raised in 0.13.7, as the relevant code has not changed. | 0easy
|
Title: Upgrade IRKernel version to 1.0.x
Body: <!-- Thank you for contributing. These HTML commments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Proposed change
<!-- Use this section to describe the feature you'd like to be added. -->
Update to the latest version of IRKernel (https://github.com/IRkernel/IRkernel/releases). We are using 0.8.x and there is now 1.0.x.
### Who would use this feature?
<!-- Describe the audience for this feature. This information will affect who chooses to work on the feature with you. -->
People who use R through Jupyter notebooks. We haven't had a lot of problems with IRKernel, but we are a lot behind, so who knows what good new features we are missing out on.
### How much effort will adding it take?
<!-- Try to estimate how much work adding this feature will require. This information will affect who chooses to work on the feature with you. -->
This could be a 5min job to update the version string in the source code or take a lot longer if things break because of the updated version. Hopefully the second case isn't likely.
### Who can do this work?
<!-- What skills are needed? Who can be recruited to add this feature? This information will affect who chooses to work on the feature with you. -->
Someone familiar with programming in Python and with a bit of experience of R lingo (or someone who wants to learn some of this) in order to understand possible error messages from the upgrade.
| 0easy
|
Title: Reducing GPU memory usage
Body: I'm opening this issue following the discussion on the forum: https://forum.pyro.ai/t/reducing-mcmc-memory-usage/5639/6.
The problem is, not-in-place array copying that happens in `mcmc.run` after the actual sampling might result in an out-of-memory exception even though the sampling itself was successful. First of all, it would be nice if this could be avoided and the arrays could be transferred to CPU before any not-in-place operations.
More generally, GPU memory can be controlled by sampling sequentially using `post_warmup_state` and transferring each batch of samples to CPU before running the next one. However, this doesn't seem to work as expected, and subsequent batches require more memory than the first one (see the output for the code below).
```
mcmc_samples = [None] * (n_samples // 1000)
# set up MCMC
self.mcmc = MCMC(kernel, num_warmup=n_warmup, num_samples=1000, num_chains=n_chains)
for i in range(n_samples // 1000):
    print(f"Batch {i+1}")
    # run MCMC for 1000 samples
    self.mcmc.run(jax.random.PRNGKey(0), self.spliced, self.unspliced)
    # store samples transferred to CPU
    mcmc_samples[i] = jax.device_put(self.mcmc.get_samples(), jax.devices("cpu")[0])
    # reset the mcmc before running the next batch
    self.mcmc.post_warmup_state = self.mcmc.last_state
```
the code above results in:
```
Running MCMC in batches of 1000 samples, 2 batches in total.
First batch will include 1000 warmup samples.
Batch 1
sample: 100%|██████████| 2000/2000 [11:18<00:00, 2.95it/s, 1023 steps of size 5.13e-06. acc. prob=0.85]
Batch 2
sample: 100%|██████████| 1000/1000 [05:48<00:00, 2.87it/s, 1023 steps of size 5.13e-06. acc. prob=0.85]
2023-11-24 14:43:23.854505: W external/tsl/tsl/framework/bfc_allocator.cc:485] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.56GiB (rounded to 2750440192)requested by op
```
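Once each batch has been moved to the CPU as in the snippet above, the per-batch dicts still need to be combined. A minimal sketch of that step (the helper name and the use of plain NumPy are my assumptions, not NumPyro API):

```python
import numpy as np

def concat_sample_batches(batches):
    """Combine per-batch sample dicts (already moved to CPU via jax.device_put)
    along the draw axis. `batches` is assumed to be a list of {name: array}
    dicts with identical keys, as returned by mcmc.get_samples() per batch."""
    return {
        name: np.concatenate([np.asarray(batch[name]) for batch in batches], axis=0)
        for name in batches[0]
    }
```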
To summarise,
1. Could the arrays optionally be transferred to CPU before the not-in-place operations at the end of sampling?
2. How should one sample sequentially so that memory usage is not increased in the process? | 0easy
|
Title: protocols.is_parameterized(sympy.pi) currently returns True
Body: `is_parameterized` needs a check for free symbols, not just isinstance(sympy.Basic) | 0easy
|
Title: If multiple keywords match, resolve conflict first using search order
Body: The changes for fixing issue #4366 changed the `Set Library Search Order` functionality in a way that it cannot be used to force keywords to be used from imported resources if a keyword with same name exists in the importing resource file (or a testsuite in the future version?)
We have an old library (develop originally for RF 3.0) that enable us to created mocked user keywords to be used in user keyword “unit tests”. With this we can replace some of the user keywords (in the same resource file or testsuite) in a user keyword under test with a mocks and only concentrate on the functionality on that keyword. Just the default unit test approaches.
Now that local keywords cannot be overridden from the imported resource files, even using Set Library Search Order, this approach has now become impossible.
Related issue.
The BuiltIn library does not have a keyword for getting the currently set search order. This is a bit inconvenient for our library. Fake keywords (and the resources/libraries implementing them) can be set at different stages, so it is hard to keep track of the currently active search order. It would be a lot easier if it could be asked from Robot itself.
I guess `Set Library Search Order` could be called with some dummy value (or None) to get the current value and then set it back (maybe with some modification) but it would look a bit cumbersome.
(In the original implementation we are using internals from BuiltIn to get this information but this is definitely not something that should be done)
| 0easy
|
Title: Marketplace - Reduce margin from 8px to 2px
Body:
### Describe your issue.

| 0easy
|
Title: send_keys cannot send new line
Body: ```from selenium_driverless.types.by import By
import selenium_driverless.webdriver as webdriver
import asyncio
async def main():
async with webdriver.Chrome(options=webdriver.ChromeOptions()) as driver:
await driver.get('https://demoqa.com/text-box')
textarea = await driver.find_elements(By.XPATH, '//textarea')
await textarea[0].send_keys("Hello\nWorld!")
asyncio.run(main())
```
Running this raises an error
Traceback:
```
Traceback (most recent call last):
  File "d:\Codes\PY\test_textarea.py", line 14, in <module>
    asyncio.run(main())
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
    return future.result()
  File "d:\Codes\PY\School\SPARK\test_textarea.py", line 11, in main
    await textarea[0].send_keys("Hello\nWorld!")
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium_driverless\types\webelement.py", line 567, in send_keys
    await self.__target__.send_keys(text)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium_driverless\types\target.py", line 463, in send_keys
    await self.execute_cdp_cmd("Input.dispatchKeyEvent", key_event)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium_driverless\types\target.py", line 1106, in execute_cdp_cmd
    result = await self.socket.exec(method=cmd, params=cmd_args, timeout=timeout)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\cdp_socket\socket.py", line 85, in exec
    res = await asyncio.wait_for(self._responses[_id], timeout=timeout)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\asyncio\tasks.py", line 445, in wait_for
    return fut.result()
cdp_socket.exceptions.CDPError: {'code': -32602, 'message': 'Invalid parameters', 'data': 'Failed to deserialize params.code - BINDINGS: string value expected at position 26'}
```
It seems like the ENTER key was not implemented.
| 0easy
|
Title: UnicodeEncodeError during install_deps with emoji in git branch name
Body: ## Issue
The installation fails on Windows when there is an emoji in the git branch name.
See [here](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914247). I'm not sure if it's a pip or tox error.
[Here is the branch](https://github.com/MyTooliT/python-can/tree/%F0%9F%AA%9F).
## Environment
Provide at least:
- OS: windows-latest
- packages: cachetools-5.2.0 chardet-5.1.0 colorama-0.4.6 distlib-0.3.6 filelock-3.8.2 packaging-22.0 platformdirs-2.6.0 pluggy-1.0.0 pyproject-api-1.2.1 tomli-2.0.1 tox-4.0.16 virtualenv-20.17.1
## Output of running tox
Provide the output of `tox -rvv`:
```console
Run tox -e gh
tox -e gh
shell: C:\Program Files\PowerShell\7\pwsh.EXE -command ". '{0}'"
env:
PY_COLORS: [1](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:1)
pythonLocation: C:\hostedtoolcache\windows\Python\3.10.9\x64
PKG_CONFIG_PATH: C:\hostedtoolcache\windows\Python\3.10.9\x64/lib/pkgconfig
Python_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.10.9\x64
Python[2](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:2)_ROOT_DIR: C:\hostedtoolcache\windows\Python\[3](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:3).10.9\x6[4](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:4)
Python3_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.10.9\x64
.pkg: remove tox env folder D:\a\python-can\python-can\.tox\.pkg
gh: install_deps> python -I -m pip install coverage==6.[5](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:5).0 coveralls==3.3.1 hypothesis~=[6](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:6).35.0 parameterized~=0.8 pyserial~=3.5 pytest-cov==4.0.0 pytest-timeout==2.0.2 pytest==[7](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:7).1.*,>=7.1.2
gh: internal error
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\session\cmd\run\single.py", line 45, in _evaluate
tox_env.setup()
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\tox_env\api.py", line 242, in setup
self._setup_env()
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\tox_env\python\runner.py", line 99, in _setup_env
self._install_deps()
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\tox_env\python\runner.py", line 103, in _install_deps
self._install(requirements_file, PythonRun.__name__, "deps")
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\tox_env\api.py", line 96, in _install
self.installer.install(arguments, section, of_type)
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\tox_env\python\pip\pip_install.py", line [8](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:8)3, in install
self._install_requirement_file(arguments, section, of_type)
File "C:\hostedtoolcache\windows\Python\3.10.[9](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:9)\x64\lib\site-packages\tox\tox_env\python\pip\pip_install.py", line 111, in _install_requirement_file
self._execute_installer(args, of_type)
File "C:\hostedtoolcache\windows\Python\3.[10](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:10).9\x64\lib\site-packages\tox\tox_env\python\pip\pip_install.py", line 165, in _execute_installer
outcome = self._env.execute(cmd, stdin=StdinSource.OFF, run_id=f"install_{of_type}")
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\tox_env\api.py", line 379, in execute
with self.execute_async(cmd, stdin, show, cwd, run_id, executor) as status:
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\contextlib.py", line 142, in __exit__
next(self.gen)
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\tox_env\api.py", line 433, in execute_async
self._log_execute(request, execute_status)
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\tox_env\api.py", line 439, in _log_execute
self._write_execute_log(self.name, self.env_log_dir / f"{self._log_id}-{request.run_id}.log", request, status)
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\site-packages\tox\tox_env\api.py", line 447, in _write_execute_log
file.write(f"env {env_key}: {env_value}\n")
File "C:\hostedtoolcache\windows\Python\3.10.9\x64\lib\encodings\cp[12](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:13)52.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\U0001fa9f' in position 21: character maps to <undefined>
gh: FAIL code 2 ([13](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:14).27 seconds)
evaluation failed :( ([17](https://github.com/hardbyte/python-can/actions/runs/3748530924/jobs/6369914600#step:5:18).48 seconds)
Error: Process completed with exit code 1.
```
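The last traceback frames show the per-run log file being written with the Windows default code page (cp1252), which cannot represent the emoji. A standalone demonstration of the mismatch (not tox code):

```python
branch_char = "\U0001fa9f"  # the 🪟 emoji from the branch name

# cp1252 (the Windows default in this traceback) cannot encode the character:
try:
    branch_char.encode("cp1252")
    cp1252_ok = True
except UnicodeEncodeError:
    cp1252_ok = False

print(cp1252_ok)                    # False
print(branch_char.encode("utf-8"))  # utf-8 handles it fine
```

Opening the log files with an explicit `encoding="utf-8"` would therefore be one way to make such branch names round-trip.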
| 0easy
|
Title: Typo in [Strings and the backslashes], missing right parenthesis
Body: In section Strings and the backslashes, for the explanations, 2nd point with one of the examples as follows
> print(repr(r'wt\\"f')
'wt\\\\"f'
I think the right parenthesis is missing here. It should be
print(repr(r'wt\\"f') **)**
'wt\\\\"f'
P.S. I am not sure if others have already brought up this issue; I searched the existing issues but didn't find anything related, so I am bringing it up here. | 0easy
|
Title: pip install kymatio fails on bare environment
Body: We should add torch to requriements.txt, according to @lostanlen . | 0easy
|
Title: Some questions about torch.distributed
Body: I'm new to torch.distributed and have a few questions:
1. Looking at the code, it seems every worker loads the parameters, so why does the main worker still need to broadcast them?
2. Why not just use DistributedDataParallel directly?
3. Could we simply all_reduce the loss instead? Wouldn't that be faster? The only difference should be the BN layers, right?
4. When all_reducing the parameters, why use sum instead of mean? | 0easy
|
Title: Replace the tool of code generator for the gRPC API
Body: /kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
Currently, we are generating codes for the gRPC API from the protocol buffers definition with [`znly/protoc`](https://hub.docker.com/r/znly/protoc/).
However, the image doesn't seem to be maintained since the latest image was published 4 years ago.
So, we should replace the tool with another one.
follow-up: #2140
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
---
<!-- Don't delete this message to encourage users to support your issue! -->
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| 0easy
|
Title: `rio.(Multi)Slider` Add Range Selection Capability to `rio.(Multi)Slider` Component
Body: It would be beneficial to have the ability to select a range in a `Slider`. I'm uncertain whether this functionality should be integrated into the existing rio.Slider() or if it should be implemented as a separate component, considering `SliderChangeEvent` and so on.
### Result:

### Use Case:
This feature is particularly useful in scenarios where a range of values needs to be selected, such as filtering data within a certain range, setting a range of dates, or adjusting parameters that work within a specified interval. By allowing range selection, the MultiSlider component can cater to a broader range of applications and user needs.
### Related Issue:
#45
| 0easy
|
Title: Raise test coverage above 90% for giotto/mapper/utils/_cluster.py
Body: Current test coverage from pytest is 19% | 0easy
|
Title: Alipay face-to-face payment is configured in the admin panel, but requests return 400 BAD REQUEST
Body: Alipay face-to-face payment is configured in the admin panel, but requests return:

| 0easy
|
Title: Marketplace - Fix margin above "Top Agents". Change it to 25px between the line and the header
Body:
### Describe your issue.
Fix this margin. Change it to 25px between the line and the header

| 0easy
|
Title: ta.utils.real_body
Body: I'm looking at the real_body function. Should green candles have a negative body and red candles a positive body? I would think this should be reversed.
```python
df['can'] = ta.utils.candle_color(df.open, df.close)
df['body'] = ta.utils.real_body(df.open, df.close)
df['hl'] = ta.utils.high_low_range(df.high, df.low)
df.tail()
```
```
timestamp            open     high     low      close    volume    vwap     can  body   hl
2020-08-11 15:56:00  1371.78  1373.99  1369.04  1372.47  47773.0   1395.15   1   -0.69  4.95
2020-08-11 15:57:00  1373.73  1374.00  1371.33  1373.74  27772.0   1395.07   1   -0.01  2.67
2020-08-11 15:58:00  1373.52  1375.44  1373.02  1375.37  31786.0   1395.00   1   -1.85  2.42
2020-08-11 15:59:00  1375.82  1375.89  1373.00  1374.00  31357.0   1394.90  -1    1.82  2.89
2020-08-11 16:00:00  1374.09  1374.63  1372.27  1374.39  132845.0  1394.04   1   -0.30  2.36
```
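For comparison, the sign convention the reporter expects could be sketched as follows, computing the body as close minus open so that green candles get a positive body (illustrative code, not pandas-ta's actual implementation):

```python
def candle_color(open_, close_):
    """+1 for a green (bullish) candle, -1 for a red one."""
    return 1 if close_ >= open_ else -1

def real_body(open_, close_):
    """Signed body with the conventional orientation: positive when green."""
    return close_ - open_

# The 15:56 bar from the output: a green candle, so the body should be positive.
print(candle_color(1371.78, 1372.47))  # 1
print(real_body(1371.78, 1372.47))     # 0.69 (up to float rounding)
```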
| 0easy
|
Title: Path normalization (train on Windows and deploy on Linux)
Body: I trained models on Windows, then I tried to use them on Linux, however, I could not load them due to an incorrect path joining. During model loading, I got `learner_path` in the following format `experiments_dir/model_1/100_LightGBM\\learner_fold_0.lightgbm`. The last two slashes were incorrectly concatenated with the rest part of the path. In this regard, I would suggest adding something like `learner_subpath = learner_subpath.replace("\\", "/")` before [this code line](https://github.com/mljar/mljar-supervised/blob/92706af75bd1859805a413768dc261d0572c3e06/supervised/model_framework.py#L590). Though, there is a need to think about opposite cases: when a model is trained on Linux and then is used on Windows.
| 0easy
|
Title: Add Normalizer Estimator
Body: The normalizer estimator scales the samples independently by the sample's norm (l1, l2). Use the IncrementalBasicStatistics
estimator to generate the sum squared data and use it for generating only the l2 version of the normalizer. Investigate where
the new implementation may be low performance and include guards in the code to use Scikit-learn as necessary. The final
deliverable would be to add this estimator to the 'spmd' interfaces which are effective on MPI-enabled supercomputers, this
will use the underlying MPI-enabled mean and variance calculators in IncrementalBasicStatistics. This is an easy difficulty project,
and would be a medium time commitment when combined with other pre-processing projects.
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Normalizer.html | 0easy
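For reference, a minimal pure-Python sketch of the l2 behaviour being described, matching scikit-learn's `Normalizer(norm="l2")` semantics (the real estimator would build on the sum-of-squares from IncrementalBasicStatistics):

```python
import math

def l2_normalize_rows(X):
    """Scale each sample (row) independently to unit l2 norm.
    Zero rows are left unchanged, as in scikit-learn."""
    normalized = []
    for row in X:
        sum_sq = sum(v * v for v in row)  # the per-row sum-of-squares statistic
        norm = math.sqrt(sum_sq)
        normalized.append([v / norm for v in row] if norm else list(row))
    return normalized

print(l2_normalize_rows([[3.0, 4.0]]))  # [[0.6, 0.8]]
```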
|
Title: Redirect to login if not authorized config not working
Body: ## CKAN version
CKAN v2.10.1
## Describe the bug
The `ckan.redirect_to_login_if_not_authorized` configuration option in CKAN is not functioning as expected. This configuration setting is intended to redirect users to the login page when they are not logged in.
https://docs.ckan.org/en/2.10/maintaining/configuration.html#ckan-redirect-to-login-if-not-authorized
| 0easy
|
Title: Update asgi look examples to use aioredis 2
Body: A new major vesion of aioredis was released, and it has some api changes.
This is the changelog https://github.com/aio-libs/aioredis-py/blob/master/CHANGELOG.md | 0easy
|
Title: Datalab issue type for null/missing feature values
Body: [New Datalab issue type](https://docs.cleanlab.ai/master/cleanlab/datalab/guide/custom_issue_manager.html) called something like `null` that checks `features` for rows that are entirely missing / null values (across all columns).
Those rows should get flagged as `is_null_issue`.
The quality score for each row can be the fraction of `features` which are missing in that row.
Make sure this issue check does not waste compute time if it is irrelevant, ie. first check that there even exist any missing values in the `features` at all before proceeding further. | 0easy
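A sketch of the proposed per-row check in plain NumPy, using NaN to mark missing values (the real Datalab issue manager would wrap something like this):

```python
import numpy as np

def null_issue_check(features: np.ndarray):
    """Return (is_null_issue, score) per row. A row is an issue when every
    feature is missing; the score is the fraction of missing features in
    that row, as suggested above. Bails out early when nothing is missing."""
    missing = np.isnan(features)
    n_rows = features.shape[0]
    if not missing.any():  # avoid wasted compute when the check is irrelevant
        return np.zeros(n_rows, dtype=bool), np.zeros(n_rows)
    frac_missing = missing.mean(axis=1)
    return frac_missing == 1.0, frac_missing
```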
|
Title: waitUntil page should link to waitUntil API doc and/or demonstrate use of the timeout param
Body: Thanks for pytest-qt, it is a fantastic plugin. 😊
This is mostly a to-do-eventually for me, but writing it down in case someone else can pick it up earlier.
I was wanting to figure out whether waitUntil had a timeout parameter, but the top result on DDG is the [waitUntil tutorial/demo page](https://pytest-qt.readthedocs.io/en/latest/wait_until.html) ([source](https://github.com/pytest-dev/pytest-qt/blob/6026f8bac834fd1a78776a4326dd27691e89b286/docs/wait_until.rst)), which does not use the `timeout=` kwarg and does not link out to the [API reference page](https://pytest-qt.readthedocs.io/en/latest/reference.html#pytestqt.qtbot.QtBot.waitUntil) ([source](https://github.com/pytest-dev/pytest-qt/blob/6026f8bac834fd1a78776a4326dd27691e89b286/src/pytestqt/qtbot.py#L483-L552)).
Not a big deal, but either of those would have saved me a search. 😊
Thanks again! | 0easy
|
Title: Stuck Musician
Body: Hey, thanks for making this. I am on macOS. I followed your instructions on installation and usage up to the end of using Docker. Maybe it's because I'm not a developer, but I don't see how to use this. Is there an app or web page somewhere that I drop files on? It seems like I didn't get the "usage" part of the installation and usage section. | 0easy
|
Title: Menu at the homepage is not so friendly
Body: The main menu at the top of the page is a little bit strange. The background of the page is white and the background of the menu is white too. Check out the screenshot.

| 0easy
|
Title: Refactor the data source view
Body: | 0easy
|
Title: Ability to configure topic params in confluent create_topics
Body: Currently if the topic doesn't exists we create it with `num_partitions` and `replication_factor` set to `1` - https://github.com/airtai/faststream/blob/7c069db7ff28bb43451aaa4dbffd84e8c261567a/faststream/confluent/client.py#L438-L452. We should provide a way for users to configure the `num_partitions` and `replication_factor` for each topic if possible.
Based on https://github.com/airtai/faststream/discussions/1821 | 0easy
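One possible shape for such configuration, sketched in plain Python (`TopicConfig` and `resolve_topic_configs` are hypothetical names, not FastStream API):

```python
from dataclasses import dataclass

@dataclass
class TopicConfig:
    """Per-topic creation settings; defaults match the current hard-coded values."""
    num_partitions: int = 1
    replication_factor: int = 1

def resolve_topic_configs(topics, overrides=None):
    """Map each topic to its creation config, falling back to the defaults."""
    overrides = overrides or {}
    return {topic: overrides.get(topic, TopicConfig()) for topic in topics}

configs = resolve_topic_configs(
    ["in", "out"], {"in": TopicConfig(num_partitions=6, replication_factor=3)}
)
print(configs["in"].num_partitions)   # 6
print(configs["out"].num_partitions)  # 1
```

The resolved values could then be forwarded to confluent-kafka's `NewTopic(topic, num_partitions=..., replication_factor=...)` when creating missing topics.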
|
Title: Add a BUSYGROUP reply error
Body: The XGROUP CREATE command can return a BUSYGROUP error when a group already exists: https://redis.io/commands/xgroup
I think the `ReplyError` subclass for matching it would look like this:
```py
class BusyGroupError(ReplyError):
MATCH_REPLY = "BUSYGROUP Consumer Group name already exists"
``` | 0easy
|
Title: [DOC] Follow PEP 257 Docstring conventions
Body: # Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
I propose that we follow the docstring conventions laid out in [PEP 257](https://www.python.org/dev/peps/pep-0257/).
Namely:
>One-liners are for really obvious cases. They should really fit on one line. For example:
```python
def kos_root():
"""Return the pathname of the KOS root directory."""
global _kos_root
if _kos_root: return _kos_root
...
```
and:
>Multi-line docstrings consist of a summary line just like a one-line docstring, followed by a blank line, followed by a more elaborate description.
```python
def complex(real=0.0, imag=0.0):
"""Form a complex number.
Keyword arguments:
real -- the real part (default 0.0)
imag -- the imaginary part (default 0.0)
"""
if imag == 0.0 and real == 0.0:
return complex_zero
...
```
We're using Sphinx, so the docstring will have a different body, but we can still follow the formatting.
Following `pydocstyle`, the first line of the docstring should end in a period and be in imperative mood:
```python
D401: First line should be in imperative mood: 'Do', not 'Does'.
[Docstring] prescribes the function or method's effect as a command:
("Do this", "Return that"), not as a description; e.g. don't write "Returns the pathname ...".
```
Additionally, all tests should have a one line docstring that explains what the test does (for everyone's sake), and what it is testing for. We don't actually have this in the docs, but it is part of my ongoing campaign with #306
The PR for this issue (which I am happy to do), will involve updating the docs to reflect these standards, and updating docstrings throughout the project.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to documentation page](https://ericmjl.github.io/pyjanitor/CONTRIBUTION_TYPES.html)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/docs/CONTRIBUTION_TYPES.rst)
| 0easy
|
Title: `--snapshot-default-extension` doesn't support pytest 7 `pythonpath`
Body: **Describe the bug**
`--snapshot-default-extension` doesn't support pytest 7's `pythonpath` configuration option, for pytest-only additions to the Python path.
For my project, I'm using `--snapshot-default-extension` so the right extension and serializer are in place, before Syrupy begins its reporting. My Syrupy extensions are for tests only, so they live outside of my src/ folder. Only the src/ folder of my project seems to be on the default Python path. So when running tests, I need to tell Syrupy about my extensions, somehow. I'd love to use the vanilla `pytest` command directly, configured in pyproject.toml, without having to pass a custom `PYTHONPATH` to `pytest` every time.
**To reproduce**
See my branch, [john-kurkowski/syrupy#default-extension-pythonpath](https://github.com/john-kurkowski/syrupy/compare/main..default-extension-pythonpath). In the final commit, https://github.com/john-kurkowski/syrupy/commit/ea9779371583253c03b0bdf47c09ca6f5526d909, switching from modifying `sys.path` to setting pytest's `--pythonpath` breaks 2/3 of the branch's test cases. **EDIT:** pytest's `pythonpath` is an INI configuration option, not a CLI one.
```diff
diff --git a/tests/integration/test_snapshot_option_extension.py b/tests/integration/test_snapshot_option_extension.py
index de8e807..42b2eec 100644
--- a/tests/integration/test_snapshot_option_extension.py
+++ b/tests/integration/test_snapshot_option_extension.py
@@ -26,11 +26,11 @@ def testfile(testdir):
return testdir
-def test_snapshot_default_extension_option_success(monkeypatch, testfile):
- monkeypatch.syspath_prepend(testfile.tmpdir)
-
+def test_snapshot_default_extension_option_success(testfile):
result = testfile.runpytest(
"-v",
+ "--pythonpath",
+ testfile.tmpdir,
"--snapshot-update",
"--snapshot-default-extension",
"extension_file.MySingleFileExtension",
@@ -63,11 +63,11 @@ def test_snapshot_default_extension_option_module_not_found(testfile):
assert result.ret
-def test_snapshot_default_extension_option_member_not_found(monkeypatch, testfile):
- monkeypatch.syspath_prepend(testfile.tmpdir)
-
+def test_snapshot_default_extension_option_member_not_found(testfile):
result = testfile.runpytest(
"-v",
+ "--pythonpath",
+ testfile.tmpdir,
"--snapshot-update",
"--snapshot-default-extension",
"extension_file.DoesNotExistExtension",
```
**Expected behavior**
Tests in my branch should pass.
**Environment:**
- OS: macOS
- Syrupy Version: 4.0.1
- Python Version: 3.11.1
**Workaround**
Set `PYTHONPATH` prior to invoking the pytest CLI.
```sh
PYTHONPATH=path/to/my/extensions/folder pytest --snapshot-default-extension some_module.SomeExtension
```
| 0easy
|
Title: progress bar for feature transformation pipeline
Body: give people a sense of what's taking a long time
have self.is_first_transform = True for all our wrappers
Then have both "Fitting X pipeline component" and "Transforming with X pipeline component", but making sure we only do this the first time | 0easy
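A minimal sketch of the wrapper described above (names are illustrative):

```python
class VerboseStep:
    """Wrap a pipeline component and announce its fit and first transform."""

    def __init__(self, name, step):
        self.name = name
        self.step = step
        self.is_first_transform = True

    def fit(self, X, y=None):
        print(f"Fitting {self.name} pipeline component")
        self.step.fit(X, y)
        return self

    def transform(self, X):
        if self.is_first_transform:  # only report the first time
            print(f"Transforming with {self.name} pipeline component")
            self.is_first_transform = False
        return self.step.transform(X)
```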
|
Title: Add SHAP explanations to each prediction
Body: We use SHAP explanations for models. It will be nice to have explanations for each prediction. | 0easy
|
Title: [UX] Dense CLI outputs for resources not enough
Body: The newly added-back logs below are a bit too dense. We can try to dim them and maybe add INDENT_SYMBOL at the front.
```
Try specifying a different CPU count, or add "+" to the end of the CPU count to allow for larger instances.
Try specifying a different memory size, or add "+" to the end of the memory size to allow for larger instances.
```
<img width="1171" alt="Image" src="https://github.com/user-attachments/assets/0809667c-30b8-4154-b1da-bbeeb0a12a6d" /> | 0easy
|
Title: Adding "www." to the start of the url breaks the platform
Body: If you go to [https://www.platform.agpt.co/](https://www.platform.agpt.co/) instead of [https://platform.agpt.co/](https://www.platform.agpt.co/) , the platform will not work at all. You won't be logged in even through you're logged in usually. <br>You can log in, but wont see any Agents in your library.
### Steps to reproduce:
1. Go to [https://www.platform.agpt.co/](https://www.platform.agpt.co/)
2. Observe that you are logged out - log back in
3. Go to your library
4. Observe that you have no Agents in your library - even if you did before.
### Desired Behaviour
[https://www.platform.agpt.co/](https://www.platform.agpt.co/) and [https://platform.agpt.co/](https://www.platform.agpt.co/) are the same website, rather than duplicate pages.
There should be no difference between them, they should both work, and you should be logged in no matter which url you type in.
---
Ahrefs is also showing these as "**Duplicate pages without canonical**", which is hurting our SEO.
<img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/b55bf8d8-964c-4d24-99af-481a9e70bafc/eb6034ac-c059-40a8-a3fb-14df9bcd197d?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi9iNTViZjhkOC05NjRjLTRkMjQtOTlhZi00ODFhOWU3MGJhZmMvZWI2MDM0YWMtYzA1OS00MGE4LWEzZmItMTRkZjliY2QxOTdkIiwiaWF0IjoxNzM2MDE2NzYzLCJleHAiOjMzMzA2NTc2NzYzfQ.TLz_WDkrCQTabQo6tOvzJ-alR2gvDlovdM0n3id1h-I " alt="image.png" width="382" data-linear-height="284" /> | 0easy
|
Title: Add missing `await`
Body: ### Summary
`script` is an async function. `await` is needed to ensure it's finished.
```diff
diff --git a/.github/workflows/advice.yml b/.github/workflows/advice.yml
index 91dfe7379..b86231559 100644
--- a/.github/workflows/advice.yml
+++ b/.github/workflows/advice.yml
@@ -23,4 +23,4 @@ jobs:
const script = require(
`${process.env.GITHUB_WORKSPACE}/.github/workflows/advice.js`
);
- script({ context, github });
+ await script({ context, github });
diff --git a/.github/workflows/closing-pr.yml b/.github/workflows/closing-pr.yml
index 53dae38d8..dc98e124c 100644
--- a/.github/workflows/closing-pr.yml
+++ b/.github/workflows/closing-pr.yml
@@ -28,4 +28,4 @@ jobs:
const script = require(
`${process.env.GITHUB_WORKSPACE}/.github/workflows/closing-pr.js`
);
- script({ context, github });
+ await script({ context, github });
```
### Notes
- Make sure to open a PR from a **non-master** branch.
- Sign off the commit using the `-s` flag when making a commit:
```sh
git commit -s -m "..."
# ^^ make sure to use this
```
- Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
| 0easy
|
Title: Add temperature and other params to connected chats
Body: Currently chats like /code chat don't support adjusting parameters like temperature, top_p, (maybe we can even support the new seed param?) | 0easy
|
Title: Feature: use uv instead of pip to speedup CI requirements installation
Body: | 0easy
|
Title: prompt: Better editing features
Body: Just sharing few ideas
Xonsh Editor:
☐ shortcuts to copy current stdout/stderr and to open them in $EDITOR
☐ clip buffer - https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/copybuffer
☐ clip current directory - https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/copydir
☐ command to clip content of a file - https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/copyfile
☐ create new line by pressing alt+enter as in many editors
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Marketplace - agent page - categories chips, please fix padding
Body:
### Describe your issue.
<img width="597" alt="Screenshot 2024-12-17 at 17 14 13" src="https://github.com/user-attachments/assets/68200f48-9e61-435a-a548-241e995ab4b7" />
Use the typography style "p-ui" from the typography guideline: https://www.figma.com/design/Ll8EOTAVIlNlbfOCqa1fG9/Agent-Store-V2?node-id=2759-9596&t=2JI1c3X9fIXeTTbE-1
**Update font to the following:**
font-family: Geist;
font-size: 16px;
font-weight: 400;
line-height: 24px;
text-align: left;
text-underline-position: from-font;
text-decoration-skip-ink: none;
**Update padding to the following:**
top and bottom padding: 10px
left and right padding: 16px
**Change outline color of chip to the following:**
border: 1px solid var(--neutral-600, #525252)
| 0easy
|
Title: [Question] Is there a risk of IP or device bans when using this program?
Body: Many thanks for this program, which is free and open source. As a newbie, I'd like to ask: could high-frequency access to the main site lead to IP or device bans? (Account bans shouldn't happen, since there is no login at all.) How can this situation be avoided? | 0easy
|
Title: Bidirectional RNN
Body: Is there a way to train a bidirectional RNN (like LSTM or GRU) on trax nowadays? | 0easy
|
Title: `cirq.decompose_once` fails when called on `cirq.CZ`
Body: Calling `cirq.decomponse_once` on `cirq.CZ` raises an error:
```python
In [4]: cirq.decompose(cirq.CZ(cirq.q(0), cirq.q(1)))
Out[4]: [cirq.CZ(cirq.LineQubit(0), cirq.LineQubit(1))]
In [5]: cirq.decompose_once(cirq.CZ(cirq.q(0), cirq.q(1)))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[5], line 1
----> 1 cirq.decompose_once(cirq.CZ(cirq.q(0), cirq.q(1)))
File ~/.virtualenvs/pyle/lib/python3.11/site-packages/cirq/protocols/decompose_protocol.py:388, in decompose_once(val, default, flatten, context, *args, **kwargs)
383 if method is None:
384 raise TypeError(
385 f"object of type '{type(val)}' has no _decompose_with_context_ or "
386 f"_decompose_ method."
387 )
--> 388 raise TypeError(
389 f"object of type {type(val)} does have a _decompose_ method, "
390 "but it returned NotImplemented or None."
391 )
TypeError: object of type <class 'cirq.ops.gate_operation.GateOperation'> does have a _decompose_ method, but it returned NotImplemented or None.
```
| 0easy
|
Title: (FLK-D200) One-line docstring should fit on one line with quotes
Body: ## Description
If a docstring fits on a single line (72 characters according to PEP 8), it is recommended to have the quotes on the same line.
## Occurrences
There is 1 occurrence of this issue in the repository.
See all occurrences on DeepSource → [deepsource.io/gh/scanapi/scanapi/issue/FLK-D200/occurrences/](https://deepsource.io/gh/scanapi/scanapi/issue/FLK-D200/occurrences/)
| 0easy
|
Title: Bug while running the merge profile list notebook
Body: **General Information:**
**Describe the bug:**
`dataprofiler.profilers.utils` has been changed to `dataprofiler.profilers.profiler_utils`.
```
ModuleNotFoundError Traceback (most recent call last)
Cell In[1], line 11
10 import dataprofiler as dp
---> 11 from dataprofiler.profilers.utils import merge_profile_list
12 except ImportError:
ModuleNotFoundError: No module named 'dataprofiler.profilers.utils'
During handling of the above exception, another exception occurred:
ModuleNotFoundError Traceback (most recent call last)
Cell In[1], line 14
12 except ImportError:
13 import dataprofiler as dp
---> 14 from dataprofiler.profilers.utils import merge_profile_list
16 # remove extra tf loggin
17 tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
ModuleNotFoundError: No module named 'dataprofiler.profilers.utils'
```
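A version-tolerant loader can paper over the rename. This is a sketch assuming only the module path changed between releases; `load_first` is a hypothetical helper, not part of dataprofiler:

```python
import importlib


def load_first(module_names, attr):
    """Return `attr` from the first module in `module_names` that imports."""
    for name in module_names:
        try:
            module = importlib.import_module(name)
            return getattr(module, attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"{attr!r} not found in any of {module_names}")
```

The notebook could then call `load_first(("dataprofiler.profilers.profiler_utils", "dataprofiler.profilers.utils"), "merge_profile_list")` so it works on both old and new releases.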
**To Reproduce:**
Check out the latest (0.10.3) version of the dataprofiler and run the notebook named `merge_profile_list.ipynb`
**Expected behavior:**
Failure on module import | 0easy
|
Title: BUG: xorbits.numpy.tril cannot handle 1-d array input
Body: ### Describe the bug
xorbits.numpy.tril raises exception when the given input is a 1-d array.
### To Reproduce
```python
np.tril(np.ones(5))
```
### Expected behavior
Output:
```python
array([[1., 0., 0., 0., 0.],
[1., 1., 0., 0., 0.],
[1., 1., 1., 0., 0.],
[1., 1., 1., 1., 0.],
[1., 1., 1., 1., 1.]])
```
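For reference, NumPy's 1-d handling amounts to broadcasting the row and masking entries above the diagonal. A pure-Python sketch of that behavior (illustrative only, not the xorbits/mars implementation):

```python
def tril_1d(row, k=0):
    """Lower-triangular matrix from a 1-d input: the row is broadcast to
    n rows, then entries with column index > row index + k are zeroed."""
    n = len(row)
    return [[row[j] if j <= i + k else 0 for j in range(n)] for i in range(n)]
```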
### Additional context
```
File ~/Documents/miniconda3/envs/prod/lib/python3.9/site-packages/xorbits/_mars/tensor/datasource/tri.py:58, in TensorTri.tile(cls, op)
56 out_chunks = []
57 for out_idx in itertools.product(*[range(len(s)) for s in nsplits]):
---> 58 i, j = out_idx[-2:]
59 ld_pos = cum_size[-2][i] - 1, cum_size[-1][j] - nsplits[-1][j]
60 ru_pos = cum_size[-2][i] - nsplits[-2][i], cum_size[-1][j] - 1
ValueError: not enough values to unpack (expected 2, got 1)
``` | 0easy
|
Title: Revamp conversation starters
Body: The starters sometimes make the conversation feel a bit forced. For example, with the main starter, the bot always asks the user a question in its response during some conversations. Generally, we want to edit the main starter to make conversations feel more natural and cohesive. | 0easy
|
Title: Allow --debug flag that prints all the commands sent to openai
Body: | 0easy
|
Title: Resend GPT3 prompt if original prompt message gets edited
Body: If a prompt gets edited by the original author, we want to be able to resend the prompt and edit the original response message with the updated response for the updated prompt. | 0easy
|
Title: Implement `MAYBE` in adapter resolution
Body: See https://github.com/betodealmeida/shillelagh/pull/110. | 0easy
|
Title: BuiltIn: New `Reset Log Level` keyword for resetting the log level to the original value
Body: _Opened this request after the discussion under https://github.com/robotframework/robotframework/issues/4919_
I'd like to request adding a new keyword `Reset Log Level` that would always reset the log level to the one set with the `--loglevel` argument to robot, or to `INFO` if that argument is not used.
It would ease the work: currently, if I want to revert to the previous log level after a `Set Log Level` keyword, I need to store the previous level in an extra variable. | 0easy
|
Title: Django >= 2.0.6 now required for python > 3.3
Body: [This commit](https://github.com/cobrateam/splinter/commit/907208296d1936b2e8abfe21b7109cb0dad9e6a2) ([this part in particular](https://github.com/cobrateam/splinter/commit/907208296d1936b2e8abfe21b7109cb0dad9e6a2#diff-2eeaed663bd0d25b7e608891384b7298R30)) pins the django version to >= 2.0.6 if you're installing on python > 3.3. As far as I know, Django 1.11 is still supported on python 3, and this dependency declaration prevents me from using django 1.11 with splinter on a python version above 3.3.
I think the correct thing to do here is just to declare a dependency on `django >= 1.7.11` regardless of the python version, since this library supports all django versions above that, and has little to do with the python version. | 0easy
|
Title: `rectangle_perimeter` with NaN params causes fatal memory leak
Body: ### Description:
The following code raises RAM usage to 100% within a few seconds and hangs the (64 GB) machine. I had the NaNs due to an invalid bounding-box calculation.
### Way to reproduce:
```
import numpy as np
from skimage.draw import rectangle_perimeter
rectangle_perimeter((np.nan, np.nan), (np.nan, np.nan))
```
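A simple finiteness check at the top of `rectangle_perimeter` would turn the hang into an immediate error. A sketch of such a guard (hypothetical, not scikit-image's actual code):

```python
import math


def check_finite_extents(start, end):
    """Raise early if any rectangle coordinate is NaN or infinite."""
    for value in (*start, *end):
        if not math.isfinite(value):
            raise ValueError(f"rectangle coordinates must be finite, got {value!r}")
```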
### Version information:
```Shell
3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Windows-10-10.0.22621-SP0 (I am actually on Windows 11)
scikit-image version: 0.21.0
numpy version: 1.24.3
```
| 0easy
|
Title: Add replace_original/delete_original to WebhookClient (async/sync)
Body: Add functionality to allow updating a message when using the WebhookClient by passing `replace_original=true` to the API.
### Requirements
Currently it is only possible to send a new message using the WebhookClient. The Slack API offers the option to include a boolean `replace_original` flag that will then update the original message instead of posting a new message. It would be great if we could add a similar `WebClient.chat_update` method to the WebhookClient. Some PoC code of what this could look like:
```python
def chat_update(
    self,
    *,
    text: Optional[str] = None,
    attachments: Optional[List[Union[Dict[str, Any], Attachment]]] = None,
    blocks: Optional[List[Union[Dict[str, Any], Block]]] = None,
    headers: Optional[Dict[str, str]] = None,
) -> WebhookResponse:
    """Performs a Slack API request and returns the result.

    :param text: the text message (even when having blocks, setting this as well is recommended as it works as fallback)
    :param attachments: a collection of attachments
    :param blocks: a collection of Block Kit UI components
    :param headers: request headers to append only for this request
    :return: API response
    """
    return self.send_dict(
        body={
            "text": text,
            "attachments": attachments,
            "blocks": blocks,
            "replace_original": True,
        },
        headers=headers,
    )
``` | 0easy
|
Title: Graceful shutdown of signal handlers
Body: When signals are added, we should run some sort of a cleanup like this on shutdown to make sure any custom signals are run. This also includes a suggestion that we identify them with a name: `signal-XXXXX`.
```python
async def cleanup(app, _):
    for task in asyncio.all_tasks():
        if task.get_name().startswith("signal"):
            await task
``` | 0easy
|
Title: Update HTTP status code constants wrt RFC 9110
Body: Update our HTTP status code constants (in [`falcon/status_codes.py`](https://github.com/falconry/falcon/blob/master/falcon/status_codes.py)) to conform to [RFC 9110](https://datatracker.ietf.org/doc/html/rfc9110).
While we should be largely up-to-speed, there is at least one known inaccuracy wrt `413 Content Too Large` (see [RFC 9110, Section 15.5.14](https://datatracker.ietf.org/doc/html/rfc9110#name-413-content-too-large)). We still use the older `Payload Too Large`. | 0easy
|
Title: Allow specifying additional files to watch
Body: `rio run` automatically restarts the project when changes to Python files are detected. Some projects also depend on other files however, such as JSON or CSV data files. It would be nice for `rio run` to watch those as well.
There are a couple of approaches to solving this:
1. Allow adding "negative" paths to `.rioignore`.
2. Add command-line flags for this
3. Allow adding paths to `rio.toml`
| 0easy
|
Title: Japanese: Make "events" in the document even clearer (#412)
Body: We can apply the updates in #415 to the Japanese version of the documents. See also #412
### The page URLs
* https://slack.dev/bolt-python/ja-jp/concepts#basic
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: Add support for pipelined visitor functions
Body: This would allow for a meta-visitor that applies the required visitors in the provided order. This separates the initialization from the steps of the visitors, which is a little more readable.
This could look something like:
```python
visitor_fn_1 = ...
visitor_fn_2 = ...
visitor_fn_3 = ...
visitor_fn = pipeline(
visitor_fn_1,
visitor_fn_2,
visitor_fn_3,
)
block.visit_and_update_expressions(visitor_fn)
```
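A minimal implementation of such a `pipeline` helper could look like the sketch below (a hypothetical helper, assuming each visitor function takes an expression and returns the updated one; note the composed semantics differ slightly from sequential full passes, since later visitors see each expression immediately after the earlier ones update it):

```python
def pipeline(*visitor_fns):
    """Compose visitor functions into one visitor applied left to right."""
    def composed_visitor(expression):
        for visitor_fn in visitor_fns:
            expression = visitor_fn(expression)
        return expression
    return composed_visitor
```

With such a helper, the composed visitor can then be passed in a single `visit_and_update_expressions` call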
as a replacement to
```python
visitor_fn_1 = ...
block.visit_and_update_expressions(visitor_fn_1)
visitor_fn_2 = ...
block.visit_and_update_expressions(visitor_fn_2)
visitor_fn_3 = ...
block.visit_and_update_expressions(visitor_fn_3)
``` | 0easy
|
Title: The is operator in the jitted code does not work for structref instance.
Body: ## **Reporting a bug**
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [ ] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
Running the code below:
```python
import platform, sys
import numba as nb
from numba import types, extending
from numba.experimental import structref
print(platform.platform(terse=True))
print(sys.version)
print(nb.__version__)
@structref.register
class NodeType(types.StructRef):
    pass

node_type = NodeType([])

class Node(structref.StructRefProxy):
    pass

@extending.overload(Node)
def ol_Node():
    def impl():
        self = structref.new(node_type)
        return self
    return impl

@nb.njit
def main():
    node = Node()
    print(node is node)
    List = [0]
    print(List is List)
    Tuple = (0,)
    print(Tuple is Tuple)

main()
```
resulted in:
```
Linux-6.1.85+-x86_64-with-glibc2.35
3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0]
0.61.0
False
True
True
```
As above in the title.
I believe `True` should be printed, and the `is` operator is a good way to treat all instances as distinct and different.
This is the first issue I've opened, so please let me know if there is anything I have not done or am unclear on. | 0easy
|
Title: Dev branch: TypeError: bind() takes 2 positional arguments but 3 were given
Body: I've upgraded to the development branch and while trying some custom indicator I get:
```
import_dir(ta_dir)
File "/opt/homebrew/lib/python3.9/site-packages/pandas_ta/custom.py", line 198, in import_dir
bind(module_name, _callable, _method_callable)
TypeError: bind() takes 2 positional arguments but 3 were given
```
In custom.py something doesn't look right as here we call bind with 3 arguments:
https://github.com/twopirllc/pandas-ta/blob/development/pandas_ta/custom.py#L198
whereas bind() only has 2:
https://github.com/twopirllc/pandas-ta/blob/development/pandas_ta/custom.py#L14
Perhaps it's an ongoing work but just to let you know! Thanks!
PS: I'm trying to use (and debug) the new vwap indicator. On my side all bands have the same values, but not sure if something is wrong on my side yet. I'm a bit stuck because trying the new vwap on the main branch is missing some methods like `[X] An error occurred when attempting to load module vwapro: No module named 'pandas_ta._typing'` | 0easy
|
Title: Add this example to the docs
Body: Cool example to add to the docs: https://morphocode.com/location-time-urban-data-visualization/#row-1497426575

The data source referenced in the original post returns a 404. After a quick search, this is likely the new equivalent dataset: https://data.melbourne.vic.gov.au/explore/dataset/pedestrian-counting-system-monthly-counts-per-hour/table/?sort=timestamp | 0easy
|
Title: Allow positional argument in ploomber scaffold
Body: ```
ploomber scaffold mynewproject
```
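With argparse, an optional positional argument looks like the sketch below (illustrative only; ploomber's actual CLI wiring may differ):

```python
import argparse

parser = argparse.ArgumentParser(prog="ploomber scaffold")
# name is optional: `ploomber scaffold` keeps today's behavior,
# while `ploomber scaffold mynewproject` creates a named project
parser.add_argument("name", nargs="?", default=None)

args = parser.parse_args(["mynewproject"])
```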
Note: since this requires an update to `ploomber-scaffold`, we'll have to release that first, then pin the version here | 0easy
|
Title: [DOC] Move Setting up environment in PyCharm to its own page
Body: # Context: Move a section to its own page
While tinkering with the Docs, I noticed that `Contributing.html` is quite long now. So much so that it might be a bit daunting for future contributors.
Towards the tail end of that same page, I see that there is a Section (with nice Images) of how to go about setting up the `environment` for PyCharm users.
I would like to propose that we consider moving this section to its own stand-alone page. `Contributing.html` could link to that page and direct those who are using PyCharm to go there.
There are pros and cons to having one long page versus multiple smaller pages. (For that matter, `Getting Started` could be its own page too.) Is there any appetite for moving "Set up env in PyCharm" to be its own page? If yes, just assign it to me. Thanks. | 0easy
|
Title: Verify `pyright` best practices
Body: We've recently replaced mypy with pyright. Would be good to do a general check that we are following best practices, compare the new settings file with the previous mypy settings, increase the strictness as much as possible, etc... | 0easy
|
Title: Remove status messages on P2P persistent connection creation
Body: Currently, the daemon sends a status message to the client after a persistent connection to it is opened. This is a no-op, as the status message is either "ok" or not sent at all. I suggest we remove this feature from the daemon, as this might improve startup times and reduce code complexity. | 0easy
|
Title: AttributeError: module 'kornia.augmentation' has no attribute 'RandomTransplantation'
Body: ### Describe the bug
I was trying to use the RandomTransplantation function in my code to augment images using the mask. As follows:
### Reproduction steps
```python
import torch
import kornia.augmentation as K

aug = K.RandomTransplantation(p=1.)
```
### Expected behavior
I get the following error:
`AttributeError: module 'kornia.augmentation' has no attribute 'RandomTransplantation'`. Any thoughts on how to fix it? Thanks
### Environment
```shell
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
```
### Additional context
_No response_ | 0easy
|
Title: feature request: please wrap optax optimizers
Body: The [optax library from Deepmind](https://github.com/deepmind/optax) is the future of the JAX ecosystem (e.g. it will be adopted by the Flax team) and will likely replace [jax.experimental.optimizers](https://jax.readthedocs.io/en/latest/jax.experimental.optimizers.html). It would be good to update https://num.pyro.ai/en/stable/optimizers.html accordingly.
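For context, wrapping mostly means adapting optax's `(init, update)` gradient-transformation pair to an `(init, update, get_params)` style optimizer interface. A toy adapter sketch (all names hypothetical; real code would use `optax.apply_updates` and operate on pytrees):

```python
class OptaxAdapter:
    """Adapt an optax-style transformation to an init/update/get_params API."""

    def __init__(self, transformation):
        self.transformation = transformation

    def init(self, params):
        # carry (params, optimizer state) together as the optimizer state
        return params, self.transformation.init(params)

    def update(self, grads, state):
        params, opt_state = state
        updates, opt_state = self.transformation.update(grads, opt_state)
        # real code would call optax.apply_updates(params, updates)
        return params + updates, opt_state

    def get_params(self, state):
        return state[0]
```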
| 0easy
|
Title: Add friendlier error messages
Body: For example, having the installer check if the file exists after downloading, and if it doesn't exist, exit with an error stating the file was downloaded but something deleted it. Check your AV. | 0easy
|
Title: Add python 3.9 to test suite / tox
Body: The tox.ini file and the github actions need a 3.9 test runner added to cover latest stable version of python.
Maybe 3.10 if it's working with dependencies as well? | 0easy
|
Title: Allow default format selections on import, export, and action screens
Body: It would be useful to allow specifying a default format on the import and export screens as well as for the actions. On some of the screens it will auto-select if you only have one option, but the Actions do not work that way. If you're always (or even usually) using a specific format, it gets to be a lot of clicking to go back to Yaml or Json or what have you from the menu.
| 0easy
|
Title: Make Custom Indexes feature backed by pinecone
Body: Currently, custom indexes are all saved in memory and in files in the local directory where the bot runs. We need to make all these indexes backed by a vector store, in our case, pinecone. | 0easy
|
Title: `--legacy-output` does not work with Rebot when combining, merging or filtering results
Body: Because robot scripts are not always stable, due to infrastructure problems, we use a script that reruns failing tests up to 3 times, until the tests pass... or eventually fail.
This generates several output.xml files, which are combined into a single output.xml (plus report.html and log.html) using rebot.
Up to Robot Framework v6.1.1 this was working fine. I started to use version 7.0 and used the option --legacyoutput with both robot and rebot.
When I combine MORE THAN one output.xml, rebot fails:
`
rebot --legacyoutput --merge --outputdir <some outputdir> --output output.xml firstoutput.xml secondoutput.xml
[ ERROR ] Unexpected error: AttributeError: 'NoneType' object has no attribute 'isoformat'
Traceback (most recent call last):
File "C:\Python311\Lib\site-packages\robot\utils\application.py", line 81, in _execute
rc = self.main(arguments, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\robot\rebot.py", line 340, in main
rc = ResultWriter(*datasources).write_results(settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\robot\reporting\resultwriter.py", line 57, in write_results
self._write_output(results.result, settings.output, settings.legacy_output)
File "C:\Python311\Lib\site-packages\robot\reporting\resultwriter.py", line 71, in _write_output
self._write('Output', result.save, path, legacy_output)
File "C:\Python311\Lib\site-packages\robot\reporting\resultwriter.py", line 84, in _write
writer(path, *args)
File "C:\Python311\Lib\site-packages\robot\result\executionresult.py", line 154, in save
self.visit(writer(target, rpa=self.rpa))
File "C:\Python311\Lib\site-packages\robot\result\executionresult.py", line 168, in visit
visitor.visit_result(self)
File "C:\Python311\Lib\site-packages\robot\result\visitor.py", line 44, in visit_result
result.suite.visit(self)
File "C:\Python311\Lib\site-packages\robot\model\testsuite.py", line 420, in visit
visitor.visit_suite(self)
File "C:\Python311\Lib\site-packages\robot\model\visitor.py", line 131, in visit_suite
suite.suites.visit(self)
File "C:\Python311\Lib\site-packages\robot\model\itemlist.py", line 102, in visit
item.visit(visitor) # type: ignore
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\robot\model\testsuite.py", line 420, in visit
visitor.visit_suite(self)
File "C:\Python311\Lib\site-packages\robot\model\visitor.py", line 131, in visit_suite
suite.suites.visit(self)
File "C:\Python311\Lib\site-packages\robot\model\itemlist.py", line 102, in visit
item.visit(visitor) # type: ignore
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\robot\model\testsuite.py", line 420, in visit
visitor.visit_suite(self)
File "C:\Python311\Lib\site-packages\robot\model\visitor.py", line 131, in visit_suite
suite.suites.visit(self)
File "C:\Python311\Lib\site-packages\robot\model\itemlist.py", line 102, in visit
item.visit(visitor) # type: ignore
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\robot\model\testsuite.py", line 420, in visit
visitor.visit_suite(self)
File "C:\Python311\Lib\site-packages\robot\model\visitor.py", line 135, in visit_suite
self.end_suite(suite)
File "C:\Python311\Lib\site-packages\robot\output\xmllogger.py", line 375, in end_suite
self._write_status(suite)
File "C:\Python311\Lib\site-packages\robot\output\xmllogger.py", line 444, in _write_status
'starttime': self._datetime_to_timestamp(item.start_time),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\robot\output\xmllogger.py", line 432, in _datetime_to_timestamp
return dt.isoformat(' ', timespec='milliseconds').replace('-', '')
^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'isoformat'
`
Is it something I'm doing wrong? I found it confusing that the documentation & release notes mention the option "--legacy-output", but both robot.exe --help and rebot --help only have the option "--legacyoutput" (without the hyphen). | 0easy
|
Title: Small issue in docs
Body: I think there's a minor issue with this snippet in the docs at https://uplink.readthedocs.io/en/stable/user/quickstart.html#response-and-error-handling
```
def raise_for_status(response):
    """Checks whether or not the response was successful."""
    if 200 <= response.status <= 299:
        raise UnsuccessfulRequest(response.url)
    # Pass through the response.
    return response
```
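The check looks inverted, and requests-style responses expose `status_code`; a corrected sketch (assuming a requests-style response object; `UnsuccessfulRequest` is defined here only to keep the example self-contained):

```python
class UnsuccessfulRequest(Exception):
    """Raised when a request does not return a 2xx status."""

    def __init__(self, url):
        super().__init__(url)
        self.url = url


def raise_for_status(response):
    """Checks whether or not the response was successful."""
    if not (200 <= response.status_code <= 299):
        raise UnsuccessfulRequest(response.url)
    # Pass through the response.
    return response
```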
I think the `if` condition describes a success case rather than a failure. Also, if `response` is a requests-library response, it should probably be `response.status_code` (I'm not sure whether `response.status` also works; I don't think it does). | 0easy
|
Title: Deprecate GeoSeries.almost_equals (deprecated in shapely)
Body: The `almost_equals` method in Shapely is deprecated (and will be removed in a future version): https://github.com/shapely/shapely/blob/6c896a7fb0348f46630e4045e1fdc3e54212dc42/shapely/geometry/base.py#L666-L669
So we should probably follow suit in geopandas as well and deprecate our almost_equals method in favor of its alias `equals_exact`. | 0easy
|
Title: Can I know what is the size of the Kinetics 400 dataset used to reproduce the result in this repo?
Body: There are many links in Kinetics that have expired. As a result, everyone might not be using the same Kinetics dataset. As a reference, the statistics of the Kinetics dataset used in PySlowFast can be found here: https://github.com/facebookresearch/video-nonlocal-net/blob/master/DATASET.md. However, I cannot seem to find similar information for gluoncv. Will you guys be sharing the statistics and the dataset used? I need the complete dataset to reproduce the result. | 0easy
|
Title: Allauth template elements
Body: Looking into #4691 I stumbled upon the [elements](https://github.com/pennersr/django-allauth/tree/main/allauth/templates/allauth/elements) directory.
Since 0.58.0 (2023-10-26) it is possible to override just the elements instead of overriding the templates completely: https://docs.allauth.org/en/latest/common/templates.html#styling-the-existing-templates
This feature lowers our effort to keep up-to-date with `allauth` but I'm not sure if we can achieve the same quality. | 0easy
|
Title: Bug with List of response_models taken from Serializer
Body: ### Description
When trying to use List of generated `response_model` model as response_model in route, you get in response
```json
{
"code": 400,
"detail": "Validation error",
"fields": [
{
"name": "response",
"message": "Value is not a valid list"
}
]
}
```
### What I Did
```python
from typing import List
from fastapi import APIRouter, Depends
from fastapi_contrib.pagination import Pagination
from fastapi_contrib.serializers import openapi
from fastapi_contrib.serializers.common import Serializer
@openapi.patch
class MySerializer(Serializer):
    ...


router = APIRouter()


@router.get("/", response_model=List[MySerializer.response_model])
async def lst(pagination: Pagination = Depends()):
    return await pagination.paginate(MySerializer)
```
| 0easy
|
Title: Default to read-only field on import if a single Resource or file format is defined
Body: **Describe the bug**
Export allows for single elements to be pre-selected (added in #1671):

However this behaviour is not supported on the import page:

**To Reproduce**
Import / Export specified model
**Versions (please complete the following information):**
- Django Import Export: 4.0
- Python 3.11
- Django 4.2
<s>Perhaps if there is only ever one resource and file format, this page is redundant and can be skipped?</s>
This doesn't apply because user still has to choose file to import.
| 0easy
|
Title: Generate multiple nb products from a single task (e.g. ipynb, html, and pdf)
Body: Currently the nb product is limited to a single file.
It'll be useful for users to output multiple reports for instance:
```
- source: fit.py
  product:
    nb: output/nb.ipynb
    html-report: output/report.html
``` | 0easy
|