Title: `CDL_INSIDE` indicator needs to be rescaled by factor 100 for consistency reasons
Body: pandas-ta 0.3.14b0
TA-Lib 0.4.27
## Problem Description
The current implementation of `cdl_pattern()` returns `CDL_INSIDE` values in the range -1 to +1, while every other candle pattern indicator uses the ±100 convention.
## Example
```python
data_candles = data.ta.cdl_pattern(name="all")
>>> data_candles['CDL_INSIDE'].max()
1
>>> data_candles['CDL_INSIDE'].min()
-1
# every other indicator
>>> data_candles['CDL_HAMMER'].max()
100
```
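In the meantime, a minimal workaround sketch (assuming the ±1 convention shown above) is to rescale the column manually:
```python
# Hypothetical workaround: rescale CDL_INSIDE to the +/-100 convention
# used by all other candle pattern indicators.
data_candles["CDL_INSIDE"] = data_candles["CDL_INSIDE"] * 100
```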
## Expected Behavior
'CDL_INSIDE' should return +100 and -100 like all other indicators. Alternative: Each indicator returns +1 and/or -1. | 0easy
|
Title: Duplicated CI executions in GH Actions
Body: Just noticed that we get duplicated test executions (one for pull_request and one for push). Something may have changed on the GH Actions side, and this could be a really quick adjustment. So it's not a bug, but a behavior that we want to disable.
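A common fix, sketched below under the assumption that the duplication comes from overlapping `push` and `pull_request` triggers, is to limit the `push` trigger to the default branch:
```yaml
# Hypothetical workflow trigger: run pushes only on the default branch
# (adjust to the repository's default), so branches with open pull
# requests are not tested twice.
on:
  push:
    branches: [main]
  pull_request:
```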

Good first issue, help wanted from anyone willing to contribute. | 0easy
|
Title: Support all possible fields for RichTextElementParts.Date
Body: The `slack_sdk.models.blocks.RichTextElementParts.Date` type supports `timestamp`, but the actual "date" rich text element supports other fields like `url`, `fallback`, and `format`. Can this be extended to properly support all available fields?
documentation: https://api.slack.com/reference/block-kit/blocks#rich_text:~:text=date-,The%20following%20are%20the%20properties%20of%20the,object%20type,-in%20the
block builder example: https://app.slack.com/block-kit-builder/TG4KUE8JV#%7B%22blocks%22:%5B%7B%22type%22:%22rich_text%22,%22elements%22:%5B%7B%22type%22:%22rich_text_section%22,%22elements%22:%5B%7B%22type%22:%22date%22,%22timestamp%22:1720710212,%22format%22:%22%7Bdate_num%7D%20at%20%7Btime%7D%22,%22fallback%22:%22timey%22%7D%5D%7D%5D%7D%5D%7D
vs actual type in slack_sdk library:
https://github.com/slackapi/python-slack-sdk/blob/main/slack_sdk/models/blocks/block_elements.py#L2097-L2112
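For readability, the `date` element encoded in the block builder link above decodes to:
```python
# Decoded from the Block Kit Builder URL above.
date_element = {
    "type": "date",
    "timestamp": 1720710212,
    "format": "{date_num} at {time}",
    "fallback": "timey",
}
```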
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [x] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: Leledc Exhaustion Bar indicator (Request)
Body: This Pine Script code is really simple, but I didn't even manage to set up a basic for loop, I guess because I was too tired :D Anyway,
can someone help me convert this Pine code to Python?
```pine
maj_qual=input(6),maj_len=input(30)
min_qual=input(5),min_len=input(5)
maj=input(true,title="Show Major")
min=input(true,title="Show Minor")
lele(qual,len)=>
    bindex=nz(bindex[1],0)
    sindex=nz(sindex[1],0)
    ret=0
    if (close>close[4])
        bindex:=bindex + 1
    if(close<close[4])
        sindex:=sindex + 1
    if (bindex>qual) and (close<open) and high>=highest(high,len)
        bindex:=0
        ret:=-1
    if ((sindex>qual) and (close>open) and (low<= lowest(low,len)))
        sindex:=0
        ret:=1
    return=ret
major=lele(maj_qual,maj_len)
minor=lele(min_qual,min_len)
```
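A direct, loop-based translation sketch (untested; it assumes an OHLC DataFrame with `open`/`high`/`low`/`close` columns and mirrors the Pine logic above):
```python
import numpy as np
import pandas as pd


def leledc(df: pd.DataFrame, qual: int = 6, length: int = 30) -> pd.Series:
    """Leledc exhaustion bars: -1 = bearish exhaustion, 1 = bullish, 0 = none."""
    close, open_ = df["close"], df["open"]
    high, low = df["high"], df["low"]
    highest = high.rolling(length).max()  # highest(high, len)
    lowest = low.rolling(length).min()    # lowest(low, len)
    out = np.zeros(len(df), dtype=int)
    bindex = sindex = 0  # persistent counters, like nz(bindex[1], 0)
    for i in range(len(df)):
        if i >= 4 and close.iloc[i] > close.iloc[i - 4]:
            bindex += 1
        if i >= 4 and close.iloc[i] < close.iloc[i - 4]:
            sindex += 1
        if bindex > qual and close.iloc[i] < open_.iloc[i] and high.iloc[i] >= highest.iloc[i]:
            bindex, out[i] = 0, -1
        if sindex > qual and close.iloc[i] > open_.iloc[i] and low.iloc[i] <= lowest.iloc[i]:
            sindex, out[i] = 0, 1
    return pd.Series(out, index=df.index, name="leledc")
```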
Thank you! | 0easy
|
Title: Make `TestSuite.source` attribute `pathlib.Path` instance
Body: Currently `TestSuite.source` is a string, but [pathlib.Path](https://docs.python.org/3/library/pathlib.html) would be more convenient for users of this API. More importantly, processing file system paths during parsing is easier by using `Path` instances instead of strings. It wouldn't make sense to convert them to strings when the `TestSuite` is created.
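For illustration, a `Path` source makes common checks more natural, while `str()` keeps old string-based code working (hypothetical value):
```python
from pathlib import Path

source = Path("tests/login.robot")

print(source.suffix == ".robot")       # Path-native check: True
print(str(source).endswith(".robot"))  # old string-style check via str(): True
```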
This is potentially backwards incompatible if someone has used `source` as a string like `source.endswith('.robot')`. There generally shouldn't be such needs and converting the source to a string is easy as well. Importantly, manipulating `source` using `os.path` functions will work the same way as earlier. I'll mark this `backwards incompatible` so that it gets mentioned in the release notes, but I don't consider this so severe that it needed to wait for a major release. | 0easy
|
Title: Adding types information on the API surface.
Body: Adding types on the public API surface would allow us to do some runtime type checking later on and would give users' IDEs more information for static analysis.
The functions/signatures to type are the ones listed here https://github.com/keras-team/autokeras/blob/master/autokeras/__init__.py
For context, see #856 where I add some type information on an ImageClassifier.
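As a sketch of the kind of change this asks for (a hypothetical method; the real signatures to type are the ones exported in `autokeras/__init__.py`):
```python
from typing import Optional

import numpy as np


class ImageClassifier:
    # Hypothetical before: def fit(self, x=None, y=None, epochs=None, **kwargs): ...
    # Hypothetical after: the same signature, annotated so IDEs and static
    # analysis have full information.
    def fit(
        self,
        x: Optional[np.ndarray] = None,
        y: Optional[np.ndarray] = None,
        epochs: Optional[int] = None,
        **kwargs,
    ) -> None: ...
```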
This issue can be considered easy to solve (good first issue); it's just long to do because the public API surface is big.
I'll do it, but some help is welcome :) Make sure to make small pull requests. | 0easy
|
Title: Apply code formatting to code examples in our docs
Body: Currently our code examples are not formatted using `black` or linted in any way.
* Investigate what mkdocs extensions there are to do this and what they would do (e.g. they might run `ruff` or `black`)
* Find a good solution and apply it! | 0easy
|
Title: [benchmark] Add Augly and Kornia libraries to benchmark
Body: - [x] Kornia
- [x] Augly | 0easy
|
Title: Add invoke command to configure a git hook
Body: PRs often fail due to flake8 errors. [git hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) allow running some code before `git push`, so we should have a command that lets users easily install a hook that runs flake8.
We have a [tasks.py](https://github.com/ploomber/ploomber/blob/master/tasks.py) that has one-off commands, we can add something there. e.g,
```
invoke install-git-hook
```
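A minimal sketch of such a task (hypothetical; it assumes the hook should simply run `flake8` before every `git push`):
```python
# tasks.py -- hypothetical invoke task installing a pre-push flake8 hook.
from pathlib import Path

from invoke import task

HOOK = "#!/bin/sh\nflake8 || exit 1\n"


@task
def install_git_hook(c):
    """Install a git pre-push hook that runs flake8."""
    hook = Path(".git/hooks/pre-push")
    hook.write_text(HOOK)
    hook.chmod(0o755)  # git only runs executable hooks
``` | 0easy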
|
Title: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead
Body: ```
scrapy/utils/misc.py:249: DeprecationWarning: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead
value is None or isinstance(value, ast.NameConstant) and value.value is None
```
Also, if I understand it correctly `isinstance(value, ast.NameConstant)` is always False, as `ast.NameConstant()` returns an `ast.Constant` instance, so I wonder if this case is covered by a test. | 0easy
|
Title: Exception: SNlM0e value not found. Double-check __Secure-1PSID value or pass it as token='xxxxx'.
Body:
Please make sure to check for more efficient package management. *Please prioritize checking existing issues first. *
@dsdanielpark I tried every way possible but am getting an exception: `Exception: SNlM0e value not found. Double-check __Secure-1PSID value or pass it as token='xxxxx'.`
I checked the 1PSID value, tried with another account, and cleared the browser cache and cookies.
This was working well earlier, but for the last 16 days it has been showing the same error.
Please try to solve this problem.
|
Title: Adjust test_svd for randomized SVD
Body: Currently the test for the backend implementation of SVD also runs for `randomized_svd`, which has lower accuracy; we can adjust the precision for the randomized version specifically. | 0easy
|
Title: Figure 5. Channels of depthwise correlation output in conv4
Body: How can I visualize Figure 5 (channels of the depthwise correlation output in conv4)?
Which output in the code does "channels of depthwise correlation output in conv4" refer to?
Does anyone know? Thanks! | 0easy
|
Title: ignore_basepython_conflict option is not in effect in tox4
Body: ## Issue
It seems the ignore_basepython_conflict option is not in effect in tox 4. Setting it to True or False doesn't change anything if envlist contains py310 but python3 points to python3.8.
## Environment
Provide at least:
- OS: Linux Mint 20.3, running in VirtualBox
- `pip list` of the host Python where `tox` is installed:
```console
Package Version
--------------------- --------------------
appdirs 1.4.3
apt-clone 0.2.1
apturl 0.5.2
astunparse 1.6.3
attrs 22.1.0
autopage 0.5.1
beautifulsoup4 4.8.2
blinker 1.4
Brlapi 0.7.0
ccsm 0.9.14.1
certifi 2019.11.28
chardet 3.0.4
Click 7.0
cliff 4.0.0
cmd2 2.4.2
colorama 0.4.3
command-not-found 0.3
compizconfig-python 0.9.14.1
configobj 5.0.6
coverage 6.4.4
cryptography 2.8
cupshelpers 1.0
dbus-python 1.2.16
ddt 1.6.0
defer 1.0.6
distlib 0.3.0
distro 1.4.0
entrypoints 0.3
execnet 1.9.0
extras 1.0.0
filelock 3.0.12
fixtures 4.0.1
flake8 6.0.0
future 0.18.2
glob2 0.7
grpcio 1.16.1
hammett 0.9.3
httplib2 0.14.0
idna 2.8
ifaddr 0.1.6
IMDbPY 6.8
importlib-metadata 4.12.0
iniconfig 1.1.1
junit-xml 1.8
keyring 18.0.1
launchpadlib 1.10.13
lazr.restfulclient 0.14.2
lazr.uri 1.0.3
linecache2 1.0.0
logutils 0.3.5
louis 3.12.0
lxml 4.5.0
Mako 1.1.0
MarkupSafe 1.1.0
mccabe 0.7.0
more-itertools 4.2.0
mutmut 2.4.1
netaddr 0.7.19
netifaces 0.10.4
oauthlib 3.1.0
onboard 1.4.1
packaging 20.3
parso 0.8.3
pbr 5.4.5
pexpect 4.6.0
Pillow 7.0.0
pip 20.0.2
pluggy 0.13.0
pony 0.7.16
prettytable 3.4.1
protobuf 3.6.1
psutil 5.5.1
py 1.11.0
pycairo 1.16.2
pycodestyle 2.10.0
pycrypto 2.6.1
pycups 1.9.73
pycurl 7.43.0.2
pyflakes 3.0.1
Pygments 2.3.1
PyGObject 3.36.0
PyICU 2.4.2
PyJWT 1.7.1
pymacaroons 0.13.0
PyNaCl 1.3.0
pyparsing 2.4.6
pyparted 3.11.2
pyperclip 1.8.2
pytest 7.1.3
pytest-cov 3.0.0
pytest-forked 1.4.0
pytest-html 3.1.1
pytest-metadata 2.0.2
pytest-xdist 2.5.0
python-apt 2.0.0+ubuntu0.20.4.8
python-debian 0.1.36ubuntu1
python-magic 0.4.16
python-subunit 1.4.0
python-xapp 2.2.1
python-xlib 0.23
pyxdg 0.26
PyYAML 5.3.1
reportlab 3.5.34
requests 2.22.0
requests-file 1.4.3
requests-unixsocket 0.2.0
SecretStorage 2.3.1
setproctitle 1.1.10
setuptools 57.4.0
simplejson 3.16.0
six 1.14.0
soupsieve 1.9.5
stestr 4.0.0
stevedore 4.0.0
systemd-python 234
testresources 2.0.0
testtools 2.5.0
tldextract 2.2.1
toml 0.10.0
tomli 2.0.1
tox 3.13.2
traceback2 1.4.0
ubuntu-drivers-common 0.0.0
ufw 0.36
Unidecode 1.1.1
unittest2 1.1.0
urllib3 1.25.8
vboxapi 1.0
virtualenv 20.0.17
voluptuous 0.13.1
wadllib 1.3.3
wcwidth 0.2.5
wheel 0.34.2
xkit 0.0.0
youtube-dl 2021.4.26
zipp 1.0.0
```
## Output of running tox
Provide the output of `tox -rvv`:
```console
using tox.ini: /home/dk/tox/pythonProject/tox.ini (pid 84121)
removing /home/dk/tox/pythonProject/.tox/log
could not satisfy requires MissingDependency(<Requirement('tox>=3.18.0')>)
using tox-3.13.2 from /usr/local/lib/python3.8/dist-packages/tox/__init__.py (pid 84121)
.tox start: getenv /home/dk/tox/pythonProject/.tox/.tox
.tox cannot reuse: -r flag
.tox recreate: /home/dk/tox/pythonProject/.tox/.tox
/usr/bin/python3 (/usr/bin/python3) is {'executable': '/usr/bin/python3', 'name': 'python', 'version_info': [3, 8, 10, 'final', 0], 'version': '3.8.10 (default, Nov 14 2022, 12:59:47) \n[GCC 9.4.0]', 'is_64': True, 'sysplatform': 'linux'}
.tox uses /usr/bin/python3
removing /home/dk/tox/pythonProject/.tox/.tox
setting PATH=/home/dk/tox/pythonProject/.tox/.tox/bin:/home/dk/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
[84125] /home/dk/tox/pythonProject/.tox$ /usr/bin/python3 -m virtualenv --no-download --python /usr/bin/python3 .tox
created virtual environment CPython3.8.10.final.0-64 in 120ms
creator CPython3Posix(dest=/home/dk/tox/pythonProject/.tox/.tox, clear=False, global=False)
seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, pkg_resources=latest, via=copy, app_data_dir=/home/dk/.local/share/virtualenv/seed-app-data/v1.0.1.debian.1)
activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
.tox installdeps: tox >= 3.18.0
setting PATH=/home/dk/tox/pythonProject/.tox/.tox/bin:/home/dk/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
[84134] /home/dk/tox/pythonProject$ /home/dk/tox/pythonProject/.tox/.tox/bin/python -m pip install 'tox >= 3.18.0'
Collecting tox>=3.18.0
Using cached tox-4.0.14-py3-none-any.whl (143 kB)
Collecting packaging>=22
Using cached packaging-22.0-py3-none-any.whl (42 kB)
Collecting colorama>=0.4.6
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting tomli>=2.0.1; python_version < "3.11"
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting cachetools>=5.2
Using cached cachetools-5.2.0-py3-none-any.whl (9.3 kB)
Collecting virtualenv>=20.17.1
Using cached virtualenv-20.17.1-py3-none-any.whl (8.8 MB)
Collecting filelock>=3.8.2
Using cached filelock-3.8.2-py3-none-any.whl (10 kB)
Collecting chardet>=5.1
Using cached chardet-5.1.0-py3-none-any.whl (199 kB)
Collecting platformdirs>=2.6
Using cached platformdirs-2.6.0-py3-none-any.whl (14 kB)
Collecting pluggy>=1
Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting pyproject-api>=1.2.1
Using cached pyproject_api-1.2.1-py3-none-any.whl (11 kB)
Collecting distlib<1,>=0.3.6
Using cached distlib-0.3.6-py2.py3-none-any.whl (468 kB)
Installing collected packages: packaging, colorama, tomli, cachetools, platformdirs, distlib, filelock, virtualenv, chardet, pluggy, pyproject-api, tox
Successfully installed cachetools-5.2.0 chardet-5.1.0 colorama-0.4.6 distlib-0.3.6 filelock-3.8.2 packaging-22.0 platformdirs-2.6.0 pluggy-1.0.0 pyproject-api-1.2.1 tomli-2.0.1 tox-4.0.14 virtualenv-20.17.1
.tox finish: getenv /home/dk/tox/pythonProject/.tox/.tox after 3.19 seconds
.tox start: finishvenv
write config to /home/dk/tox/pythonProject/.tox/.tox/.tox-config1 as 'cd74d88a9a263f1797fd10436370f4cf /usr/bin/python3\n3.13.2 0 1 0\n00000000000000000000000000000000 tox >= 3.18.0'
.tox finish: finishvenv after 0.01 seconds
.tox start: provision
[84140] /home/dk/tox/pythonProject$ /home/dk/tox/pythonProject/.tox/.tox/bin/python -m tox -rvv
py310: 143 I find interpreter for spec PythonSpec(major=3) [virtualenv/discovery/builtin.py:56]
py310: 143 D discover exe for PythonInfo(spec=CPython3.8.10.final.0-64, exe=/home/dk/tox/pythonProject/.tox/.tox/bin/python, platform=linux, version='3.8.10 (default, Nov 14 2022, 12:59:47) \n[GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) in /usr [virtualenv/discovery/py_info.py:437]
py310: 143 D filesystem is case-sensitive [virtualenv/info.py:24]
py310: 144 D got python info of /usr/bin/python3.8 from /home/dk/.local/share/virtualenv/py_info/1/df0893f56f349688326838aaeea0de204df53a132722cbd565e54b24a8fec5f6.json [virtualenv/app_data/via_disk_folder.py:129]
py310: 144 I proposed PythonInfo(spec=CPython3.8.10.final.0-64, system=/usr/bin/python3.8, exe=/home/dk/tox/pythonProject/.tox/.tox/bin/python, platform=linux, version='3.8.10 (default, Nov 14 2022, 12:59:47) \n[GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
py310: 144 D accepted PythonInfo(spec=CPython3.8.10.final.0-64, system=/usr/bin/python3.8, exe=/home/dk/tox/pythonProject/.tox/.tox/bin/python, platform=linux, version='3.8.10 (default, Nov 14 2022, 12:59:47) \n[GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
py310: 172 I create virtual environment via CPython3Posix(dest=/home/dk/tox/pythonProject/.tox/.tox/py310, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:48]
py310: 173 D create folder /home/dk/tox/pythonProject/.tox/.tox/py310/bin [virtualenv/util/path/_sync.py:9]
py310: 173 D create folder /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages [virtualenv/util/path/_sync.py:9]
py310: 173 D write /home/dk/tox/pythonProject/.tox/.tox/py310/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
py310: 173 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34]
py310: 173 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
py310: 173 D version_info = 3.8.10.final.0 [virtualenv/create/pyenv_cfg.py:34]
py310: 173 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34]
py310: 173 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
py310: 173 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
py310: 173 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
py310: 173 D base-executable = /usr/bin/python3.8 [virtualenv/create/pyenv_cfg.py:34]
py310: 173 D symlink /usr/bin/python3.8 to /home/dk/tox/pythonProject/.tox/.tox/py310/bin/python [virtualenv/util/path/_sync.py:28]
py310: 174 D create virtualenv import hook file /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:89]
py310: 174 D create /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:92]
py310: 174 D ============================== target debug ============================== [virtualenv/run/session.py:50]
py310: 174 D debug via /home/dk/tox/pythonProject/.tox/.tox/py310/bin/python /home/dk/tox/pythonProject/.tox/.tox/lib/python3.8/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:197]
py310: 174 D {
"sys": {
"executable": "/home/dk/tox/pythonProject/.tox/.tox/py310/bin/python",
"_base_executable": "/home/dk/tox/pythonProject/.tox/.tox/py310/bin/python",
"prefix": "/home/dk/tox/pythonProject/.tox/.tox/py310",
"base_prefix": "/usr",
"real_prefix": null,
"exec_prefix": "/home/dk/tox/pythonProject/.tox/.tox/py310",
"base_exec_prefix": "/usr",
"path": [
"/usr/lib/python38.zip",
"/usr/lib/python3.8",
"/usr/lib/python3.8/lib-dynload",
"/home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages"
],
"meta_path": [
"<class '_virtualenv._Finder'>",
"<class '_frozen_importlib.BuiltinImporter'>",
"<class '_frozen_importlib.FrozenImporter'>",
"<class '_frozen_importlib_external.PathFinder'>"
],
"fs_encoding": "utf-8",
"io_encoding": "utf-8"
},
"version": "3.8.10 (default, Nov 14 2022, 12:59:47) \n[GCC 9.4.0]",
"makefile_filename": "/usr/lib/python3.8/config-3.8-x86_64-linux-gnu/Makefile",
"os": "<module 'os' from '/usr/lib/python3.8/os.py'>",
"site": "<module 'site' from '/usr/lib/python3.8/site.py'>",
"datetime": "<module 'datetime' from '/usr/lib/python3.8/datetime.py'>",
"math": "<module 'math' (built-in)>",
"json": "<module 'json' from '/usr/lib/python3.8/json/__init__.py'>"
} [virtualenv/run/session.py:51]
py310: 201 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/dk/.local/share/virtualenv) [virtualenv/run/session.py:55]
py310: 203 D got embed update of distribution pip from /home/dk/.local/share/virtualenv/wheel/3.8/embed/3/pip.json [virtualenv/app_data/via_disk_folder.py:129]
py310: 207 D got embed update of distribution setuptools from /home/dk/.local/share/virtualenv/wheel/3.8/embed/3/setuptools.json [virtualenv/app_data/via_disk_folder.py:129]
py310: 208 D got embed update of distribution wheel from /home/dk/.local/share/virtualenv/wheel/3.8/embed/3/wheel.json [virtualenv/app_data/via_disk_folder.py:129]
py310: 209 D install pip from wheel /home/dk/tox/pythonProject/.tox/.tox/lib/python3.8/site-packages/virtualenv/seed/wheels/embed/pip-22.3.1-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
py310: 209 D install setuptools from wheel /home/dk/tox/pythonProject/.tox/.tox/lib/python3.8/site-packages/virtualenv/seed/wheels/embed/setuptools-65.6.3-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
py310: 210 D install wheel from wheel /home/dk/tox/pythonProject/.tox/.tox/lib/python3.8/site-packages/virtualenv/seed/wheels/embed/wheel-0.38.4-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
py310: 211 D copy /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.virtualenv to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/setuptools-65.6.3.virtualenv [virtualenv/util/path/_sync.py:36]
py310: 212 D copy directory /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/_distutils_hack to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:36]
py310: 212 D copy directory /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/pip [virtualenv/util/path/_sync.py:36]
py310: 214 D copy /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.virtualenv to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/wheel-0.38.4.virtualenv [virtualenv/util/path/_sync.py:36]
py310: 215 D copy /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/distutils-precedence.pth to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:36]
py310: 215 D copy directory /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.dist-info to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/wheel-0.38.4.dist-info [virtualenv/util/path/_sync.py:36]
py310: 218 D copy directory /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/pkg_resources to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/pkg_resources [virtualenv/util/path/_sync.py:36]
py310: 224 D copy directory /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/wheel [virtualenv/util/path/_sync.py:36]
py310: 241 D generated console scripts wheel wheel3 wheel-3.8 wheel3.8 [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
py310: 246 D copy directory /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.dist-info to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/setuptools-65.6.3.dist-info [virtualenv/util/path/_sync.py:36]
py310: 248 D copy directory /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/setuptools [virtualenv/util/path/_sync.py:36]
py310: 293 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
py310: 313 D copy /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.virtualenv to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/pip-22.3.1.virtualenv [virtualenv/util/path/_sync.py:36]
py310: 313 D copy directory /home/dk/.local/share/virtualenv/wheel/3.8/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.dist-info to /home/dk/tox/pythonProject/.tox/.tox/py310/lib/python3.8/site-packages/pip-22.3.1.dist-info [virtualenv/util/path/_sync.py:36]
py310: 314 D generated console scripts pip-3.8 pip3 pip3.8 pip [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
py310: 315 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:61]
py310: 315 D write /home/dk/tox/pythonProject/.tox/.tox/py310/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
py310: 316 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34]
py310: 316 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
py310: 316 D version_info = 3.8.10.final.0 [virtualenv/create/pyenv_cfg.py:34]
py310: 316 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34]
py310: 316 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
py310: 316 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
py310: 316 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
py310: 316 D base-executable = /usr/bin/python3.8 [virtualenv/create/pyenv_cfg.py:34]
py310: 323 W install_deps> python -I -m pip install stestr -r /home/dk/tox/pythonProject/test-requirements.txt [tox/tox_env/api.py:417]
Collecting stestr
Using cached stestr-4.0.1-py3-none-any.whl (117 kB)
Collecting pip~=22.2.1
Using cached pip-22.2.2-py3-none-any.whl (2.0 MB)
Collecting attrs~=22.1.0
Using cached attrs-22.1.0-py2.py3-none-any.whl (58 kB)
Collecting wheel~=0.37.1
Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB)
Collecting setuptools~=63.3.0
Using cached setuptools-63.3.0-py3-none-any.whl (1.2 MB)
Collecting packaging~=21.3
Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting pyparsing~=3.0.9
Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting zipp~=3.8.1
Using cached zipp-3.8.1-py3-none-any.whl (5.6 kB)
Collecting future~=0.18.2
Using cached future-0.18.2-py3-none-any.whl
Collecting pbr~=5.10.0
Using cached pbr-5.10.0-py2.py3-none-any.whl (112 kB)
Collecting fixtures~=4.0.1
Using cached fixtures-4.0.1-py3-none-any.whl
Collecting testtools~=2.5.0
Using cached testtools-2.5.0-py3-none-any.whl (181 kB)
Collecting six~=1.16.0
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting virtualenv~=20.16.5
Using cached virtualenv-20.16.7-py3-none-any.whl (8.8 MB)
Collecting wcwidth~=0.2.5
Using cached wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Collecting cmd2~=2.4.2
Using cached cmd2-2.4.2-py3-none-any.whl (147 kB)
Collecting pyperclip~=1.8.2
Using cached pyperclip-1.8.2-py3-none-any.whl
Collecting PyYAML~=6.0
Using cached PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (701 kB)
Collecting cliff~=4.0.0
Using cached cliff-4.0.0-py3-none-any.whl (80 kB)
Collecting stevedore~=4.0.0
Using cached stevedore-4.0.2-py3-none-any.whl (50 kB)
Collecting autopage~=0.5.1
Using cached autopage-0.5.1-py3-none-any.whl (29 kB)
Collecting prettytable~=3.4.1
Using cached prettytable-3.4.1-py3-none-any.whl (26 kB)
Collecting extras~=1.0.0
Using cached extras-1.0.0-py2.py3-none-any.whl (7.3 kB)
Collecting python-subunit~=1.4.0
Using cached python_subunit-1.4.2-py3-none-any.whl (106 kB)
Collecting voluptuous~=0.13.1
Using cached voluptuous-0.13.1-py3-none-any.whl (29 kB)
Collecting distlib<1,>=0.3.6
Using cached distlib-0.3.6-py2.py3-none-any.whl (468 kB)
Collecting platformdirs<3,>=2.4
Using cached platformdirs-2.6.0-py3-none-any.whl (14 kB)
Collecting filelock<4,>=3.4.1
Using cached filelock-3.8.2-py3-none-any.whl (10 kB)
Collecting importlib-metadata>=4.4
Using cached importlib_metadata-5.2.0-py3-none-any.whl (21 kB)
Installing collected packages: wcwidth, voluptuous, pyperclip, extras, distlib, zipp, wheel, six, setuptools, PyYAML, pyparsing, prettytable, platformdirs, pip, pbr, future, filelock, autopage, attrs, virtualenv, stevedore, packaging, importlib-metadata, fixtures, cmd2, testtools, cliff, python-subunit, stestr
Attempting uninstall: wheel
Found existing installation: wheel 0.38.4
Uninstalling wheel-0.38.4:
Successfully uninstalled wheel-0.38.4
Attempting uninstall: setuptools
Found existing installation: setuptools 65.6.3
Uninstalling setuptools-65.6.3:
Successfully uninstalled setuptools-65.6.3
Attempting uninstall: pip
Found existing installation: pip 22.3.1
Uninstalling pip-22.3.1:
Successfully uninstalled pip-22.3.1
Successfully installed PyYAML-6.0 attrs-22.1.0 autopage-0.5.1 cliff-4.0.0 cmd2-2.4.2 distlib-0.3.6 extras-1.0.0 filelock-3.8.2 fixtures-4.0.1 future-0.18.2 importlib-metadata-5.2.0 packaging-21.3 pbr-5.10.0 pip-22.2.2 platformdirs-2.6.0 prettytable-3.4.1 pyparsing-3.0.9 pyperclip-1.8.2 python-subunit-1.4.2 setuptools-63.3.0 six-1.16.0 stestr-4.0.1 stevedore-4.0.2 testtools-2.5.0 virtualenv-20.16.7 voluptuous-0.13.1 wcwidth-0.2.5 wheel-0.37.1 zipp-3.8.1
py310: 6772 I exit 0 (6.45 seconds) /home/dk/tox/pythonProject> python -I -m pip install stestr -r /home/dk/tox/pythonProject/test-requirements.txt pid=84153 [tox/execute/api.py:275]
py310: 6773 W commands[0]> stestr run [tox/tox_env/api.py:417]
{0} tests.unit.test_calc.CalcTestCase.test_calc_all [0.000381s] ... ok
{1} tests.unit.test_calc.CalcTestCase.test_calc_raises [0.000295s] ... ok
{1} tests.unit.test_calc.CalcTestCase.test_calc_sub [0.000226s] ... ok
{2} tests.unit.test_calc.CalcTestCase.test_calc_add [0.000174s] ... ok
{2} tests.unit.test_calc.CalcTestCase.test_calc_mul [0.000045s] ... ok
{3} tests.unit.test_calc.CalcTestCase.test_calc_div [0.000322s] ... ok
{3} tests.unit.test_calc.CalcTestCase.test_calc_div_zero [0.000054s] ... ok
======
Totals
======
Ran: 7 tests in 0.0131 sec.
- Passed: 7
- Skipped: 0
- Expected Fail: 0
- Unexpected Success: 0
- Failed: 0
Sum of execute time for each test: 0.0015 sec.
==============
Worker Balance
==============
- Worker 0 (1 tests) => 0:00:00.000381
- Worker 1 (2 tests) => 0:00:00.000838
- Worker 2 (2 tests) => 0:00:00.000395
- Worker 3 (2 tests) => 0:00:00.000817
py310: 7575 I exit 0 (0.80 seconds) /home/dk/tox/pythonProject> stestr run pid=84244 [tox/execute/api.py:275]
py310: OK ✔ in 7.44 seconds
pep8: 7579 I find interpreter for spec PythonSpec(major=3) [virtualenv/discovery/builtin.py:56]
pep8: 7579 I proposed PythonInfo(spec=CPython3.8.10.final.0-64, system=/usr/bin/python3.8, exe=/home/dk/tox/pythonProject/.tox/.tox/bin/python, platform=linux, version='3.8.10 (default, Nov 14 2022, 12:59:47) \n[GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
pep8: 7579 D accepted PythonInfo(spec=CPython3.8.10.final.0-64, system=/usr/bin/python3.8, exe=/home/dk/tox/pythonProject/.tox/.tox/bin/python, platform=linux, version='3.8.10 (default, Nov 14 2022, 12:59:47) \n[GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
[...cut...]
pep8: 10529 I exit 0 (0.35 seconds) /home/dk/tox/pythonProject> flake8 pid=84329 [tox/execute/api.py:275]
py310: OK (7.44=setup[6.64]+cmd[0.80] seconds)
pep8: OK (2.95=setup[2.60]+cmd[0.35] seconds)
congratulations :) (10.45 seconds)
.tox finish: provision after 10.59 seconds
```
## Minimal example
tox.ini:
```ini
[tox]
envlist = py310,pep8
minversion = 3.18.0
skipsdist = True
ignore_basepython_conflict = False
[testenv]
basepython = python3
setenv = OS_STDOUT_CAPTURE=1
OS_STDERR_CAPTURE=1
OS_TEST_TIMEOUT=60
usedevelop = True
allowlist_externals =
stestr
deps =
stestr
-r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
[testenv:pep8]
envdir = {toxworkdir}/lint
deps =
flake8
flake8-import-order==0.18.1 # LGPLv3
pylint==2.5.3 # GPLv2
commands=
# If it is easier to add a check via a shell script, consider adding it in this file
flake8
[testenv:venv]
deps =
-r{toxinidir}/test-requirements.txt
commands = {posargs}
[flake8]
# E126 continuation line over-indented for hanging indent
# E128 continuation line under-indented for visual indent
# H405 multi line docstring summary not separated with an empty line
# I202 Additional newline in a group of imports
# N530 direct neutron imports not allowed
# TODO(amotoki) check the following new rules should be fixed or ignored
# E731 do not assign a lambda expression, use a def
# W504 line break after binary operator
ignore = E126,E128,E731,I202,H405,N530,W504
# H106: Don't put vim configuration in source files
# H203: Use assertIs(Not)None to check for None
# H204: Use assert(Not)Equal to check for equality
# H205: Use assert(Greater|Less)(Equal) for comparison
# H904: Delay string interpolations at logging calls
enable-extensions=H106,H203,H204,H205,H904
show-source = true
exclude = ./.*,build,dist,doc
import-order-style = pep8
```
Pythons:
```console
$ ls -la /usr/bin/python*
lrwxrwxrwx 1 root root 7 Apr 15 2020 /usr/bin/python -> python3
lrwxrwxrwx 1 root root 9 Mar 13 2020 /usr/bin/python2 -> python2.7
-rwxr-xr-x 1 root root 3662032 Jul 1 15:27 /usr/bin/python2.7
lrwxrwxrwx 1 root root 33 Jul 1 15:27 /usr/bin/python2.7-config -> x86_64-linux-gnu-python2.7-config
lrwxrwxrwx 1 root root 16 Mar 13 2020 /usr/bin/python2-config -> python2.7-config
lrwxrwxrwx 1 root root 9 Aug 3 15:38 /usr/bin/python3 -> python3.8
-rwxr-xr-x 1 root root 5838616 Dec 7 04:12 /usr/bin/python3.10
lrwxrwxrwx 1 root root 34 Dec 7 04:12 /usr/bin/python3.10-config -> x86_64-linux-gnu-python3.10-config
-rwxr-xr-x 1 root root 5494584 Nov 14 15:59 /usr/bin/python3.8
lrwxrwxrwx 1 root root 33 Nov 14 15:59 /usr/bin/python3.8-config -> x86_64-linux-gnu-python3.8-config
lrwxrwxrwx 1 root root 16 Mar 13 2020 /usr/bin/python3-config -> python3.8-config
-rwxr-xr-x 1 root root 152 Apr 9 2020 /usr/bin/python3-pbr
-rwxr-xr-x 1 root root 384 Dec 17 2019 /usr/bin/python3-unit2
``` | 0easy
|
Title: BUG: `pd.Series.isnumeric()` doesn't work on decimal value strings
Body: ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({"string_values": ["1", "1.0", "1.1"]})
df.string_values.str.isnumeric()
```
### Issue Description
The series method `.isnumeric()` only works on integer strings. If a string number is decimal, it will return `False`. When running the example above, the following is returned:
<img width="609" alt="Image" src="https://github.com/user-attachments/assets/9cdd0a8e-4a74-4e2f-ba07-44914a085b4d" />
This is the docs description for the method:
<img width="758" alt="Image" src="https://github.com/user-attachments/assets/0c10d350-56af-4699-8fcb-2f20a739e28a" />
### Expected Behavior
Running the method on decimal strings should return `True`.
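For reference, `Series.str.isnumeric()` mirrors Python's character-based `str.isnumeric`; a parse-based check such as `pd.to_numeric` does treat decimal strings as numeric (a workaround sketch, not the documented behavior):
```python
import pandas as pd

df = pd.DataFrame({"string_values": ["1", "1.0", "1.1"]})

# Character-based: True only for digit-only strings.
print(df.string_values.str.isnumeric().tolist())                          # [True, False, False]

# Parse-based: True for anything pd.to_numeric can convert.
print(pd.to_numeric(df.string_values, errors="coerce").notna().tolist())  # [True, True, True]
```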
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.8
python-bits : 64
OS : Linux
OS-release : 5.15.49-linuxkit-pr
Version : #1 SMP PREEMPT Thu May 25 07:27:39 UTC 2023
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 2.2.1
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.31.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : 5.3.0
matplotlib : 3.10.0
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : 18.1.0
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.1
sqlalchemy : 2.0.37
tables : None
tabulate : 0.9.0
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| 0easy
|
Title: Unit test fails under Python 3.13
Body: In Fedora we have RobotFramework tests running for upcoming Fedora 41 using Python 3.13.03 in a side tag.
While running the unit tests (version 7.0) I have a new failure in utils:
```
FAIL: test_remove_entries_with_lambda_and_multiple_entries
(test_error.TestRemoveRobotEntriesFromTraceback.test_remove_entries_with_lambda_and_multiple_entries)
----------------------------------------------------------------------
======================================================================
FAIL: test_remove_entries_with_lambda_and_multiple_entries (test_error.TestRemoveRobotEntriesFromTraceback.test_remove_entries_with_lambda_and_multiple_entries)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/builddir/build/BUILD/robotframework-7.0/utest/utils/test_error.py", line 107, in test_remove_entries_with_lambda_and_multiple_entries
self._verify_traceback(r'''
~~~~~~~~~~~~~~~~~~~~~~^^^^^
Traceback \(most recent call last\):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
1/0
^^^
'''.strip(), assert_raises, AssertionError, raising_lambda)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/builddir/build/BUILD/robotframework-7.0/utest/utils/test_error.py", line 133, in _verify_traceback
raise AssertionError('\nExpected:\n%s\n\nActual:\n%s' % (expected, tb))
AssertionError:
Expected:
Traceback \(most recent call last\):
File ".*", line \d+, in <lambda.*>
raising_lambda = lambda: raises\(\)
File ".*", line \d+, in raises
1/0
Actual:
Traceback (most recent call last):
File "/builddir/build/BUILD/robotframework-7.0/utest/utils/test_error.py", line 106, in <lambda>
raising_lambda = lambda: raises()
~~~~~~^^
File "/builddir/build/BUILD/robotframework-7.0/utest/utils/test_error.py", line 105, in raises
1/0
~^~
ZeroDivisionError: division by zero
----------------------------------------------------------------------
Ran 635 tests in 0.577s
FAILED (failures=1)
```
Looking at the output and playing around a bit, the solution to me looks like changing the test filter (a line in test_error.py) from:
```
# Remove lines indicating error location with `^^^^` used by Python 3.11+.
tb = '\n'.join(line for line in tb.splitlines() if line.strip('^ '))
```
To:
```
# Remove lines indicating error location with `^^^^` used by Python 3.11+ and `~~~~^` variants in Python 3.13+.
tb = '\n'.join(line for line in tb.splitlines() if line.strip('^~ '))
```
This also filters the error indicators using `~`, which seem to be new.
It makes the test pass, I hope/believe without breaking on previous versions.
If the proposal makes sense, I can of course open an MR!
Thanks!
F.
| 0easy
|
Title: Root with undefined variable in host_group_vars
Body: ### Summary
Variable `key` may be undefined when cache is falsy at https://github.com/ansible/ansible/blob/12abfb06c21799cfc109db2eb520693071c82c1b/lib/ansible/plugins/vars/host_group_vars.py#L140
Alternatively, it may hold a value left over from a previous entity.
### Issue Type
Bug Report
### Component Name
host_group_vars
### Ansible Version
```console
2.18
```
### Configuration
```console
default
```
### OS / Environment
Any
### Steps to Reproduce
I don't have an example
### Expected Results
I don't have an example
### Actual Results
```console
I don't have an example
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | 0easy
|
Title: None value in optional search fields of index documents cause errors
Body: The current implementation of `InMemoryExactNNIndex` does not allow None value in search fields even when they are defined as Optional, for example:
```python
import torch
from typing import Optional
from docarray import BaseDoc, DocList
from docarray.typing import TorchTensor
from docarray.index import InMemoryExactNNIndex
class TestDoc(BaseDoc):
embedding: Optional[TorchTensor[768]]
# Some of the documents have the embedding field set to None
dl = DocList[TestDoc]([TestDoc(embedding=(torch.rand(768,) if i%2 else None)) for i in range(5)])
index = InMemoryExactNNIndex[TestDoc](dl)
index.find(torch.rand((768,)), search_field="embedding", limit=3)
```
This will cause the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 15
12 dl = DocList[TestDoc]([TestDoc(embedding=(torch.rand(768,) if i%2 else None)) for i in range(5)])
14 index = InMemoryExactNNIndex[TestDoc](dl)
---> 15 index.find(torch.rand((768,)), search_field="embedding", limit=3)
File ~/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/docarray/index/backends/in_memory.py:238, in InMemoryExactNNIndex.find(self, query, search_field, limit, **kwargs)
234 return FindResult(documents=[], scores=[]) # type: ignore
236 config = self._column_infos[search_field].config
--> 238 docs, scores = find(
239 index=self._docs,
240 query=query,
241 search_field=search_field,
242 limit=limit,
243 metric=config['space'],
244 )
245 docs_with_schema = DocList.__class_getitem__(cast(Type[BaseDoc], self._schema))(
246 docs
247 )
248 return FindResult(documents=docs_with_schema, scores=scores)
File ~/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/docarray/utils/find.py:114, in find(index, query, search_field, metric, limit, device, descending)
51 """
52 Find the closest Documents in the index to the query.
53 Supports PyTorch and NumPy embeddings.
(...)
111 and the second element contains the corresponding scores.
112 """
113 query = _extract_embedding_single(query, search_field)
--> 114 docs, scores = find_batched(
115 index=index,
116 query=query,
117 search_field=search_field,
118 metric=metric,
119 limit=limit,
120 device=device,
121 descending=descending,
122 )
123 return FindResult(documents=docs[0], scores=scores[0])
File ~/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/docarray/utils/find.py:207, in find_batched(index, query, search_field, metric, limit, device, descending)
204 comp_backend = embedding_type.get_comp_backend()
206 # extract embeddings from query and index
--> 207 index_embeddings = _extract_embeddings(index, search_field, embedding_type)
208 query_embeddings = _extract_embeddings(query, search_field, embedding_type)
210 # compute distances and return top results
File ~/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/docarray/utils/find.py:269, in _extract_embeddings(data, search_field, embedding_type)
267 if isinstance(data, DocList):
268 emb_list = list(AnyDocArray._traverse(data, search_field))
--> 269 emb = embedding_type._docarray_stack(emb_list)
270 elif isinstance(data, (DocVec, BaseDoc)):
271 emb = next(AnyDocArray._traverse(data, search_field))
File ~/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/docarray/typing/tensor/abstract_tensor.py:293, in AbstractTensor._docarray_stack(cls, seq)
290 comp_backend = cls.get_comp_backend()
291 # at runtime, 'T' is always the correct input type for .stack()
292 # but mypy doesn't know that, so we ignore it here
--> 293 return cls._docarray_from_native(comp_backend.stack(seq))
File ~/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/docarray/computation/torch_backend.py:46, in TorchCompBackend.stack(cls, tensors, dim)
42 @classmethod
43 def stack(
44 cls, tensors: Union[List['torch.Tensor'], Tuple['torch.Tensor']], dim: int = 0
45 ) -> 'torch.Tensor':
---> 46 return torch.stack(tensors, dim=dim)
TypeError: expected Tensor as element 0 in argument 0, but got NoneType
```
Is there any way to either filter out the documents with a None value or ignore them when performing vector search?
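In the meantime, a workaround sketch that continues the snippet above: build the index only from documents whose search field is set.
```python
# Hypothetical workaround: index only documents that actually have an
# embedding, so the stack operation never sees None.
dl_filtered = DocList[TestDoc]([d for d in dl if d.embedding is not None])
index = InMemoryExactNNIndex[TestDoc](dl_filtered)
index.find(torch.rand((768,)), search_field="embedding", limit=3)
```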
Thanks! | 0easy
|
Title: [ENH] Proposing a "jitter" function
Body: # Brief Description
Had to use this at work recently, where I had to jitter the data that we had to slightly anonymize it.
All that we did here was to add Gaussian noise scaled by a fraction of the magnitude of each value.
# Example API
This could operate at two levels: at the pandas Series and DataFrame levels. I implemented the "Series" version at work, but not the "DataFrame" one.
As a Series method, we would do something like this:
```python
from typing import Optional, Tuple

import numpy as np
import pandas as pd


def jitter(s: pd.Series, scale_magnitude: float, clip: Optional[Tuple[float, float]] = None) -> pd.Series:
    """
    Jitter a pandas Series by applying Gaussian noise.

    By default, the jitter function takes the numeric value of each series element as the mean,
    and uses a multiplier of the numeric value (scale_magnitude) as the standard deviation.
    """
    # abs() keeps the standard deviation non-negative, as np.random.normal requires.
    scale = s.abs() * scale_magnitude
    noise = np.random.normal(loc=0, scale=scale)
    result = s + noise
    if clip:
        result = np.clip(result, *clip)
    return result
```
I think it might be possible to do the analogous thing for DataFrames.
The API would look like the following:
```python
# series API
s = df['column_name']
s.jitter()
# dataframe API
df = pd.DataFrame(...).jitter("column_name", scale_magnitude=0.1, clip=(0, None))
```
What do others think?
| 0easy
|
Title: Replace use of websocketbridge.js
Body: The underlying `channels` library has a backwards-incompatible change in v2.1.4 - the file `websocketbridge.js` was removed.
The workaround is to require v2.1.3 of this package, but a better fix is to replace the use of websocketbridge.js in the messaging [template](https://github.com/GibbsConsulting/django-plotly-dash/blob/master/django_plotly_dash/templates/django_plotly_dash/plotly_messaging.html). | 0easy
|
Title: Add option to load opener text from a file name.
Body: For example:
/chat-gpt opener: -f filename
Allows the user to start the bot as a particular profession from the available files. | 0easy
|
Title: Dynamic API: Support positional-only arguments
Body: RF 4.0 added support for Python's positional-only arguments (#3695). It isn't that important a feature, but sometimes it comes in handy. For consistency reasons, dynamic libraries should also support it.
Implementing this enhancement shouldn't be too complicated. Our `ArgumentSpec` already supports positional-only arguments, and that's what is used during execution. The only needed change ought to be enhancing the code that parses argument information returned by dynamic libraries. Well, even before that we need to agree on the syntax, but `['posonly', '/', 'normal']`, which matches the syntax used by Python, is a pretty obvious candidate. It's also consistent with how the dynamic API supports named-only arguments like `['normal', '*', 'namedonly']`.
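For example, a dynamic library could report a keyword with one positional-only argument like this (a sketch of the proposed syntax):
```python
# Sketch of the proposed argument spec syntax in a dynamic library.
class DynamicLibrary:

    def get_keyword_names(self):
        return ["Example"]

    def get_keyword_arguments(self, name):
        # 'posonly' could only be passed by position, matching Python's
        # `def example(posonly, /, normal)`.
        return ["posonly", "/", "normal"]

    def run_keyword(self, name, args, kwargs):
        print(args, kwargs)
``` | 0easy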
|
Title: More Notebooks and Examples
Body: More Notebooks with different strategies and visualizations (matplotlib, bokeh, mplfinance) | 0easy
|
Title: Missing API endpoints /token/verify/ and token/refresh/ in the documentation
Body: In the documentation on [ReadTheDocs](https://dj-rest-auth.readthedocs.io/en/latest/api_endpoints.html), the API endpoints **token/verify/** and **token/refresh/** are missing.
I found these API endpoints in the [urls.py](https://github.com/jazzband/dj-rest-auth/blob/master/dj_rest_auth/urls.py) of the project, they are added to the urlpatterns when REST_USE_JWT is set to True. | 0easy
|
Title: Remove deprecated scrapy.utils.misc.extract_regex()
Body: Deprecated in 2.3.0. | 0easy
|
Title: Importing python faker library in Settings fails in RF 7.2.2
Body: Hello folks,
If I use the faker library (Python version) with RF 7 or higher, there is an error at runtime while importing the library.
However, the library is imported successfully if RF 6 is used.
Since the time to upgrade to the latest and greatest has come, I have tried and found myself stuck on this issue.
**RF & Python version:** Robot Framework 7.2.2 (Python 3.13.2 on win32)
**OS:** Windows 11 Enterprise, Version: 23H2, OS build: 22631.4890, Installed on: 10. 7. 2024
### Steps to reproduce:
- Install robotframework
`pip install robotframework==7.2.2`
`pip install faker`
- Create a test case that uses faker
```
*** Settings ***
Library faker.Faker sk-SK
*** Test Cases ***
Do nothing
Sleep 1s
```
- Run the test case
```
$ robot robot-faker.robot
[ ERROR ] Unexpected error: AttributeError: aba
Traceback (most recent call last):
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\utils\application.py", line 81, in _execute
rc = self.main(arguments, **options)
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\run.py", line 475, in main
result = suite.run(settings)
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\model.py", line 802, in run
self.visit(runner)
~~~~~~~~~~^^^^^^^^
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\model\testsuite.py", line 421, in visit
visitor.visit_suite(self)
~~~~~~~~~~~~~~~~~~~^^^^^^
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\model\visitor.py", line 128, in visit_suite
if self.start_suite(suite) is not False:
~~~~~~~~~~~~~~~~^^^^^^^
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\suiterunner.py", line 81, in start_suite
ns.handle_imports()
~~~~~~~~~~~~~~~~~^^
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\namespace.py", line 57, in handle_imports
self._handle_imports(self._imports)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\namespace.py", line 68, in _handle_imports
self._import(item)
~~~~~~~~~~~~^^^^^^
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\namespace.py", line 76, in _import
action(import_setting)
~~~~~~^^^^^^^^^^^^^^^^
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\namespace.py", line 124, in _import_library
lib = IMPORTER.import_library(name, import_setting.args,
import_setting.alias, self.variables)
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\importer.py", line 54, in import_library
lib.create_keywords()
~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\testlibraries.py", line 326, in create_keywords
StaticKeywordCreator(self, avoid_properties=True).create_keywords()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\testlibraries.py", line 373, in create_keywords
kw = self._create_keyword(instance, name)
File "C:\Users\jan.dubcak\robot-faker\robot-faker\Lib\site-packages\robot\running\testlibraries.py", line 460, in _create_keyword
candidate = inspect.getattr_static(instance, name)
File "C:\Users\jan.dubcak\AppData\Local\Programs\Python\Python313\Lib\inspect.py", line 1863, in getattr_static
raise AttributeError(attr)
AttributeError: aba
```
- Downgrade to RF 6.1.1
`pip install robotframework==6.1.1`
- Run the test case again
```
$ robot robot-faker.robot
==============================================================================
Robot-Faker
==============================================================================
Do nothing | PASS |
------------------------------------------------------------------------------
Robot-Faker | PASS |
1 test, 1 passed, 0 failed
==============================================================================
Output: C:\Users\jan.dubcak\robot-faker\output.xml
Log: C:\Users\jan.dubcak\robot-faker\log.html
Report: C:\Users\jan.dubcak\robot-faker\report.html
```
Kind thanks for anyone looking at this,
Jan. | 0easy
|
Title: [Tracker] Code health related to ruff
Body: ## 🚀 Feature
Recently switched to using ruff as the tool responsible for code health (#2292). But we can extract and benefit much more from this tool, and for that, we need to update our code base. So feel free to submit a PR allowing us to enable some of these rules. We understand that some of them require a lot of effort, so PRs that partially fix the existing problems are very welcome too!
A list including some of the rules we should consider enable on ruff to improve the package health
* [ ] B (bugbear): https://beta.ruff.rs/docs/rules/#flake8-bugbear-b
* [ ] ISC (flake8-implicit-str-concat): https://beta.ruff.rs/docs/rules/#flake8-implicit-str-concat-isc
* [ ] RET (flake8-return): https://beta.ruff.rs/docs/rules/#flake8-return-ret
* [ ] SIM (simplify): https://beta.ruff.rs/docs/rules/#flake8-simplify-sim
* [ ] TCH (flake8-type-checking): https://beta.ruff.rs/docs/rules/#flake8-type-checking-tch
* [ ] PGH (pygrep-hooks): https://beta.ruff.rs/docs/rules/#pygrep-hooks-pgh
* [ ] PT (flake8-pytest-style-pt): https://beta.ruff.rs/docs/rules/#flake8-pytest-style-pt
* Partially fix some of the existing issues - #2444
The complete list of rules available on ruff can be found on the official webpage: https://beta.ruff.rs/docs/rules/
* A list of the rules is also available on the pyproject.toml: #2441
## Related PRs
* #2292
* #2358
* #2442
## Related issues
* #2443
______________________________________________________________________
#### Consider also to contribute to Kornia universe projects :)
<sub>
- [**Tutorials**](https://github.com/kornia/tutorials): our repository containing the tutorials.
</sub>
| 0easy
|
Title: Use the path .gpteng for storing logs etc per default
Body: ### Details
Change the workflow of gpt-engineer to be like so:
- Always set `memory` and `archive` folders paths to be to the .gpteng folder (which will create it per default if it doesn't exist)
- Always set `workspace` and `input` folder paths to be directly in the `project path`
<details open>
<summary>Checklist</summary>
- [X] `gpt_engineer/db.py`
> • Modify the `__init__` method of the DB class to set the default paths for the `memory` and `archive` folders to be in the `.gpteng` folder.
> • Modify the `__init__` method of the DB class to set the default paths for the `workspace` and `input` folders to be in the `project path`.
> • Modify the `__init__` method of the DB class to create the `.gpteng` folder if it does not exist.
- [X] `tests/test_db.py`
> • Update any tests related to the default paths of the `memory`, `archive`, `workspace`, and `input` folders to reflect the changes in the DB class's `__init__` method.
> • Add a test to verify that the `.gpteng` folder is created if it does not exist.
- [X] `docs/intro/db_class.md`
> • Document the changes to the default paths of the `memory`, `archive`, `workspace`, and `input` folders.
> • Document the creation of the `.gpteng` folder if it does not exist.
</details>
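A sketch of the path defaults described in the checklist above (hypothetical structure; the real change lives in `gpt_engineer/db.py`):
```python
# Hypothetical sketch of the DB path defaults described above.
from pathlib import Path


class DB:
    def __init__(self, project_path: str):
        project = Path(project_path)
        gpteng = project / ".gpteng"
        gpteng.mkdir(parents=True, exist_ok=True)  # create .gpteng if missing
        self.memory = gpteng / "memory"
        self.archive = gpteng / "archive"
        self.workspace = project / "workspace"
        self.input = project / "input"
```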
| 0easy
|
Title: [DOCS] Submit to pyviz.org
Body: https://pyviz.org/tools.html#dashboarding
We make sense for multiple categories afaict: High-Level, Native-GUI, Other InfoVis, SciVis, Graphs and networks, and arguably Dashboarding.
Callouts/Built on:
* webgl
* graphistry: https://images.app.goo.gl/HDfe7R8WismQzBz56 | 0easy
|
Title: BaseUrl: Provide a property or method to access the encoded URL string.
Body: ### Initial Checks
- [x] I have searched Google & GitHub for similar requests and couldn't find anything
- [x] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing
### Description
This feature request is the result of hitting a bug in our application and wishing there was a way to prevent it in the future.
## Context
Through the body of this feature request, I use `HttpUrl` as an example, but this feature request likely applies to the whole _BaseUrl hierarchy and possibly other unrelated but similar types.
Imagine a BaseModel looking something like the following:
```python
class MyData(BaseModel):
my_url: HttpUrl | None
```
And an API like the following:
```python
def store_my_url(my_url: str | None) -> None: ...
```
The important part is that the API of `store_my_url` is very inflexible, and it aggressively checks the type of its input.
The bug we had was with code like the following: `store_my_url(str(my_data.my_url))` (where `my_data: MyData`). Unfortunately, this is almost right, but no tooling(1) will help you identify that you are going to convert `None` into `"None"` which is entirely unintended.
(1) The way we have mypy configured doesn't seem to catch this, and I think we have it on a super strict mode.
## Feature Request / Proposal
It would be nice if there was a property or method like:
```python
@property
def encoded(self) -> str: # This needs a better name.
return str(self)
```
This would allow the buggy call to be written as: `store_my_url(my_data.my_url.encoded)` which would have instantly failed type checking and avoided the translation error. This would also make changing the `my_url` field of MyData from `HttpUrl` to `HttpUrl | None` safer as static analysis tooling would be able to identify more places where the type union would need to be updated.
Alternative approaches to solving this problem would also be acceptable as I might have overlooked something. It would be nice if the pattern for converting `HttpUrl | None` to a string representation could be less error prone.
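For illustration, here is the failure mode difference restated as runnable code (a sketch; `.encoded` does not exist yet):
```python
from typing import Optional

from pydantic import BaseModel, HttpUrl


class MyData(BaseModel):
    my_url: Optional[HttpUrl] = None


def store_my_url(my_url: Optional[str]) -> None:
    # Stand-in for the inflexible, type-checking API from the issue.
    assert my_url is None or isinstance(my_url, str)


my_data = MyData(my_url=None)

store_my_url(str(my_data.my_url))       # bug: silently passes the string "None"
# store_my_url(my_data.my_url.encoded)  # proposed: would fail loudly when my_url is None
```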
Thanks for your consideration.
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [ ] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [ ] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [ ] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [x] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [ ] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc. | 0easy
|
Title: Improve structure of `admin.apps.config.set` HTTP API arguments
Body: I improved the documentation and helped implement the API methods for `admin.apps.config.*` recently in the node-slack-sdk. As part of this work, it became clear that the arguments for the `admin.apps.config.set` method were vaguely documented and unclear.
I did some discovery and was able to more specifically type them in the node-slack-sdk. See [this section of the PR in node-slack-sdk](https://github.com/slackapi/node-slack-sdk/pull/1676/files#diff-3cc91696a3000faca80f10afd29704e5e7aee229b13a431fc966f7d7ef1e3a4aR1046).
Perhaps we could do something similar in the Python SDK.
### Category (place an `x` in each of the `[ ]`)
- [x] **slack_sdk.web.WebClient (sync/async)** (Web API client) | 0easy
|
Title: Add description of parameter to docstring
Body: @Haebuk had opened a PR to deal with this but closed it https://github.com/DistrictDataLabs/yellowbrick/pull/1233
We just need to add this line below to the KneeLocator docstring for `y` located in yellowbrick/utils/kneed.py
"A list of k scores corresponding to each value of k. The type of k scores are determined by the metric parameter from the KElbowVisualizer class. "
https://github.com/DistrictDataLabs/yellowbrick/blob/092c0ca25187b3cde9f608a1f7bc6d8c2b998f96/yellowbrick/utils/kneed.py#L61-L62
```python
class KneeLocator(object):
"""
Finds the "elbow" or "knee" which is a value corresponding to the point of maximum curvature
in an elbow curve, using knee point detection algorithm. This point is accessible via the
`knee` attribute.
Parameters
----------
x : list
A list of k values representing the no. of clusters in KMeans Clustering algorithm.
y : list
A list of k scores corresponding to each value of k. The type of k scores are determined by the metric parameter from the KElbowVisualizer class.
S : float, default: 1.0
Sensitivity parameter that allows us to adjust how aggressive we want KneeLocator to
be when detecting "knees" or "elbows".
curve_nature : string, default: 'concave'
A string that determines the nature of the elbow curve in which "knee" or "elbow" is
to be found.
curve_direction : string, default: 'increasing'
A string that determines the increasing or decreasing nature of the elbow curve in
which "knee" or "elbow" is to be found.
online : bool, default: False
kneed will correct old knee points if True, will return first knee if False
Notes
-----
The KneeLocator is implemented using the "knee point detection algorithm" which can be read at
`<https://www1.icsi.berkeley.edu/~barath/papers/kneedle-simplex11.pdf>`
``` | 0easy
|
Title: Allow a fixed num_steps in HMC
Body: It would be nice to allow a fixed num_steps in HMC. If the trajectory length is fixed and step size is too small, num_steps will turn out to be quite large. This will be helpful for #1355 and #1353. | 0easy
|
Title: Modify to be more pythonic
Body: - [x] Remove `type` parameter for `def make_translate_slider` It is not used and it doesn't make any sense.
https://github.com/mithi/hexapod-robot-simulator/blob/5c1f8a187e7497a37e9b2b5d66ec2fe72b3cc61f/widgets/ik_ui.py#L34
- [x] Use None instead of {} as default parameter as per convention
https://github.com/mithi/hexapod-robot-simulator/blob/5c1f8a187e7497a37e9b2b5d66ec2fe72b3cc61f/widgets/pose_control/kinematics_section_maker.py#L28
- dangerous-default-value (W0102) Dangerous default value %s as argument Used when a mutable value as list or dictionary is detected in a default value for an argument.
- See also https://stackoverflow.com/questions/26320899/why-is-the-empty-dictionary-a-dangerous-default-value-in-python/26320917
- [x] It seems like we don't actually need to store n_axis as an attribute
https://github.com/mithi/hexapod-robot-simulator/blob/5c1f8a187e7497a37e9b2b5d66ec2fe72b3cc61f/hexapod/models.py#L108 | 0easy
|
Title: [ENH] Much faster compressed uploads using new REST API features
Body: Especially in distributed settings, a bit of compression can go a long way for faster uploads:
### Easy wins
The current REST API supports compression at several layers:
- [ ] Maybe: Instead of Arrow, send single Parquet with Snappy compression
- [ ] Generic: Send as gz/gzip
- [x] Cache: Use File IDs for nodes/edges via a global weakmap of df -> FileID (https://github.com/graphistry/pygraphistry/pull/195)
### Trickier wins
- [ ] Do a quick per-col Categoricals check to see if we can dictionary-encode any cols
- [ ] Multi-part uploads (multiple parquet, ..)
- [x] Hash-checked files (https://github.com/graphistry/pygraphistry/pull/195)
### Interface
Unclear what the defaults + user overrides should be --
Default:
* No compression when `nginx` / `localhost` / `127.0.0.1`
* No compression when table is < X KB
* Otherwise, compress?
Override:
* In `register` < `settings` < `plot()` cascade, be able to decide what happens
* When providing arrow/parquet, that may be meaningful too
Ex:
```python
graphistry.register(server='nginx')
g.plot() # no compression
```
```python
g.edges(small_df).plot() # no compression
```
```python
g.edges(big_arr).plot() # auto-compress
```
```python
graphistry.register(transfer_encoding='gzip', gzip_opts={...})
g = g.settings(transfer_type='parquet')
g.edges(small_arr).plot(parquet_opts={...})
```
Another thought is:
```g.plot(compression='auto' | True | False | None)```
* When given pandas/cudf/arrow/etc., we do auto policies
* When given parquet:
* by default, we do nothing: the user can control many optimizations at that level and we just pass along
* `compression=True` will let us start doing things again
Or somewhere inbetween..
### Prioritization
* The new File API and point-and-click features encourage more & bigger uploads
* User reports of upload issues when on slow networks
* Usage will ensure steady early exercise of the new APIs
### References
* Multiple potential encodings - gzip, brotli, ... - and not hard to add server support if any preferred
* REST API: https://hub.graphistry.com/docs/api/2/rest/upload/data/#uploaddata2
* PyArrow
* Dictionary encoding for categoricals: https://arrow.apache.org/docs/python/generated/pyarrow.compress.html
* new gzip-level support, but unclear if useful at that level:
https://arrow.apache.org/docs/python/generated/pyarrow.compress.html
* Parquet:
* cudf defaults to snappy, I think: https://docs.rapids.ai/api/cudf/nightly/api.html?highlight=parquet#cudf.io.parquet.to_parquet
* pyarrow parquet writer has fancier per-col modes: https://docs.rapids.ai/api/cudf/nightly/api.html?highlight=parquet#cudf.io.parquet.to_parquet | 0easy
|
Title: MPL Cropping
Body: There are some issues with when creating reports with seaborn or MPL plots, where they appear cropped on either axes. Generally making sure that the matplotlib gcf has `.tightlayout()` applied has fixed the issue, but this doesn't always lead to expected results.
It seems this is a common problem when using matplotlib's `savefig` function, which we use under the hood:
- https://stackoverflow.com/questions/45239261/matplotlib-savefig-text-chopped-off
- https://stackoverflow.com/questions/6774086/why-is-my-xlabel-cut-off-in-my-matplotlib-plot
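For reference, a minimal sketch of the two workarounds mentioned above (the filename and label are made up):
```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlabel('a long xlabel that would otherwise get cut off')
fig.tight_layout()  # reflow the layout so labels fit inside the figure
# alternatively, crop/pad to the tight bounding box when saving
fig.savefig('report.png', bbox_inches='tight')
```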
This isn't strictly a datapane bug, but we can probably improve the user experience here by adding some padding or making the layout tighter by default. | 0easy
|
Title: DALLE-3 not at parity with ChatGPT DALL-E3
Body: 
Images generated by the bot are not consistent with images drawn by ChatGPT for the same prompt. Some thoughts:
- Might be an issue with the default drawing params (e.g. `natural` vs `vivid`)
- Might be differences in the size of the image generated by chatgpt vs the bot (we default to 1024x1024)
- Are we not invoking DALL-E3 correctly when the draw command is executed? (This one is unlikely I think, I'm fairly confident that d3 is being invoked correctly) | 0easy
|
Title: [FEATURE] Expose the way to configure header generation
Body: ### Is your feature request related to a problem? Please describe
The default generation strategy often leads to a failures coming from the webserver or the framework. In such cases users may want (as discussed multiple times) to tune data generation
### Describe the solution you'd like
some function that will adjust the internal headers format.
### Additional context
Relevant PR - https://github.com/WordPress/openverse/pull/4126
| 0easy
|
Title: Unexpected results on Chande-Kroll stop indicator
Body: Hi. I couldn't find a mailing list to ask this directly.
According to the book *"The New Technical Trader"* by Chande & Kroll (1st ed, Wiley, ISBN 9780471597803), on page 95, the CK **long** stop is calculated by computing a 10 day simple moving average of the ATR, which they name *ATR_10*; we then subtract 3*ATR_10 from the highest high of the last 10 days. This quantity will be the preliminary long stop. The final long stop will be the highest value of the preliminary long stop over a 20 day period.
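For clarity, here is a rough pandas sketch of the book's long-stop calculation as I read it (my own code, not pandas-ta's implementation):
```python
import pandas as pd

def ck_long_stop(high, low, close, p=10, x=3, q=20):
    # true range and its p-day *simple* moving average (ATR_10 in the book)
    tr = pd.concat([high - low,
                    (high - close.shift()).abs(),
                    (low - close.shift()).abs()], axis=1).max(axis=1)
    atr_p = tr.rolling(p).mean()
    prelim = high.rolling(p).max() - x * atr_p  # preliminary long stop
    return prelim.rolling(q).max()              # final long stop over q days
```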
There seem to be some discrepancies however. According to the book then, the p value is 10, x = 3, q = 20, and the MA mode is a simple moving average.
However https://github.com/twopirllc/pandas-ta/blob/a31489af18f8a23f36a1831d1f3c5bbd37a930c9/pandas_ta/trend/cksp.py#L19 has the ATR call without specifying an MA mode, which defaults to the Wilder moving average. This, and the 1st step only, would be similar to the Donchian channels if I'm not mistaken. The defaults don't match the defaults from the book either, at https://github.com/twopirllc/pandas-ta/blob/a31489af18f8a23f36a1831d1f3c5bbd37a930c9/pandas_ta/trend/cksp.py#L13
The implementation sketch from the reference at https://github.com/twopirllc/pandas-ta/blob/a31489af18f8a23f36a1831d1f3c5bbd37a930c9/pandas_ta/trend/cksp.py#L63 , namely [here](https://www.multicharts.com/discussion/viewtopic.php?t=48914)
seems to be in conflict with the idea expressed in the book as well:
> first high stop = HIGHEST[p](high) - x * Average True Range[p]
> first low stop = LOWEST[p](high) + x * Average True Range[p]
> stop short = HIGHEST[q](first high stop)
> top long = LOWEST[q](first low stop)
here the short and long stop seem swapped.
Which exact implementation is provided in Pandas-TA? Is Pandas-TA trying to follow the behavior of another application or service (e.g., TradingView), hence the different defaults and behaviour?
Thank you.
| 0easy
|
Title: Add sklearn's DecisionTreeRegressor
Body: | 0easy
|
Title: Indicator Request - Weis Wave Volume
Body: Hello,
It would be nice if this indicator gets integrated in the library (made by modhelius, TV):
```
//@version=4
study("Weis Wave Volume", shorttitle="WWV", overlay=false, resolution="")
method = input(defval="ATR", options=["ATR", "Traditional", "Part of Price"], title="Renko Assignment Method")
methodvalue = input(defval=14.0, type=input.float, minval=0, title="Value")
pricesource = input(defval="Close", options=["Close", "Open / Close", "High / Low"], title="Price Source")
useClose = pricesource == "Close"
useOpenClose = pricesource == "Open / Close" or useClose
useTrueRange = input(defval="Auto", options=["Always", "Auto", "Never"], title="Use True Range instead of Volume")
isOscillating = input(defval=false, type=input.bool, title="Oscillating")
normalize = input(defval=false, type=input.bool, title="Normalize")
vol = useTrueRange == "Always" or useTrueRange == "Auto" and na(volume) ? tr : volume
op = useClose ? close : open
hi = useOpenClose ? close >= op ? close : op : high
lo = useOpenClose ? close <= op ? close : op : low
if method == "ATR"
methodvalue := atr(round(methodvalue))
if method == "Part of Price"
methodvalue := close / methodvalue
currclose = float(na)
prevclose = nz(currclose[1])
prevhigh = prevclose + methodvalue
prevlow = prevclose - methodvalue
currclose := hi > prevhigh ? hi : lo < prevlow ? lo : prevclose
direction = int(na)
direction := currclose > prevclose ? 1 : currclose < prevclose ? -1 : nz(direction[1])
directionHasChanged = change(direction) != 0
directionIsUp = direction > 0
directionIsDown = direction < 0
barcount = 1
barcount := not directionHasChanged and normalize ? barcount[1] + barcount : barcount
vol := not directionHasChanged ? vol[1] + vol : vol
res = barcount > 1 ? vol / barcount : vol
plot(isOscillating and directionIsDown ? -res : res, style=plot.style_columns, color=directionIsUp ? color.green : color.red, transp=75, linewidth=3, title="Wave Volume")
```
Personally, I find it quite tricky to update the 'currclose' value in the DataFrame, since the 'prevclose' value doesn't get dynamically updated.
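As an illustration of the tricky part, a minimal Python sketch of the recursive 'currclose' update (using a scalar step instead of the per-bar ATR, for simplicity):
```python
import numpy as np

def renko_close(hi, lo, step):
    # each value depends on the previous iteration, so this can't be
    # expressed as a simple vectorized DataFrame operation
    currclose = np.zeros(len(hi))
    prev = 0.0
    for i in range(len(hi)):
        if hi[i] > prev + step:
            prev = hi[i]
        elif lo[i] < prev - step:
            prev = lo[i]
        currclose[i] = prev
    return currclose
```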
Thanks! | 0easy
|
Title: [GOOD FIRST ISSUE]: Reduce sleeping time
Body: ### Issue summary
Reduce or remove the 60 seconds of sleep between searches for job titles
### Detailed description
How can I remove the 60 seconds of sleep that the bot takes between searches for different job titles? I know I can just type "y" and hit enter, but then I can't just leave the bot running. Any help would be greatly appreciated.
### Steps to reproduce (if applicable)
_No response_
### Expected outcome
_No response_
### Additional context
_No response_ | 0easy
|
Title: move `wrap_litserve_start` to utils
Body: > I would rather reserve `conftest` for fixtures and this (seems to be) general functionality move to another utils module
_Originally posted by @Borda in https://github.com/Lightning-AI/LitServe/pull/190#discussion_r1705957686_
| 0easy
|
Title: Try our assistant button
Body:
https://github.com/LAION-AI/Open-Assistant/assets/95025816/9c1ec7eb-4eb5-47c2-96a3-1101df12e6d7
Another one too hard to explain with words... | 0easy
|
Title: tabula.io.read_pdf 'columns' argument change type annotation to Iterable[float]
Body: **Is your feature request related to a problem? Please describe.**
<!--- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
The thing is that I tried to give some meaning to the coordinates and wanted to use a NamedTuple instead of List, Tuple, etc. In the case of the "area" argument it works, as it uses a more general typing - Iterable[float]. But `columns` is typed as List[float], so the linter gives an error in that case.
**Describe the solution you'd like**
<!--- A clear and concise description of what you want to happen. -->
I wanted to suggest changing it to Iterable[float] as well, if there is no specific reason for using List[float].
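To illustrate (the coordinate values and field names here are made up):
```python
from typing import NamedTuple

class Columns(NamedTuple):
    name_right: float
    qty_right: float
    price_right: float

columns = Columns(107.1, 140.6, 189.2)
# a NamedTuple satisfies Iterable[float] (like the "area" argument expects)
# but is not a List[float], so the current annotation makes the linter complain:
# tabula.read_pdf("report.pdf", columns=columns)
```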
**Describe alternatives you've considered**
<!--- A clear and concise description of any alternative solutions or features you've considered. -->
n/a
**Additional context**
<!--- Add any other context or screenshots about the feature request here. -->
n/a | 0easy
|
Title: Improve error message of timedelta validation
Body: We currently have the following error message from timedelta validation (for example, requiring a timedelta greater than an hour):
```python
from pydantic import TypeAdapter
from annotated_types import Gt
from typing import Annotated
from datetime import timedelta
TypeAdapter(Annotated[timedelta, Gt(timedelta(hours=1, minutes=30))]).validate_python(timedelta(seconds=180))
```
Gives the following error
```
ValidationError: 1 validation error for timedelta
Input should be greater than datetime.timedelta(seconds=5400) [type=greater_than, input_value=datetime.timedelta(seconds=180), input_type=timedelta]
For further information visit https://errors.pydantic.dev/2.1/v/greater_than
```
In general we would like error messages to avoid requiring Python knowledge. So we would prefer to render the timedelta in a human-readable form. For the example above, maybe the error message could read:
```
Input should be greater than 1 hour and 30 minutes [type=greater_than, input_value=datetime.timedelta(seconds=180), input_type=timedelta]
```
It shouldn't be too hard to write a function which takes a Python timedelta and creates a human-friendly representation of it.
For `timedelta()`, i.e. zero timedelta, maybe `0 seconds` is a good enough representation.
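As a rough sketch of such a function (edge cases like negative deltas and comma-joining left aside):
```python
from datetime import timedelta

def human_timedelta(td: timedelta) -> str:
    seconds = int(td.total_seconds())
    if seconds == 0:
        return '0 seconds'
    parts = []
    for name, length in (('day', 86400), ('hour', 3600),
                         ('minute', 60), ('second', 1)):
        value, seconds = divmod(seconds, length)
        if value:
            parts.append(f'{value} {name}{"s" if value != 1 else ""}')
    return ' and '.join(parts)

assert human_timedelta(timedelta(hours=1, minutes=30)) == '1 hour and 30 minutes'
```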
Selected Assignee: @adriangb | 0easy
|
Title: Progress Bars are not compatible with Pandas 0.25.0
Body: Pandas broke tqdm integration with version 0.25.0. And thusly, it seems to have broken swifter as well.
see: https://github.com/tqdm/tqdm/issues/780
It still works with `.swifter.progress_bar(False)` | 0easy
|
Title: Support TLS secured proxies
Body:
Instead of raising an error before making requests:
```py
(CurlError("Failed to perform, ErrCode: 35, Reason: 'error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER'"),)
```
We could wrap CurlError if the proxy scheme is HTTPS, then show an additional warning:
```py
warnings.warn(
    "You are using an HTTPS proxy, but we noticed the proxy server doesn't support HTTPS;"
    " try replacing the proxy protocol 'https://' with 'http://' and check if that helps."
)
```
_Originally posted in https://github.com/yifeikong/curl_cffi/pull/171#discussion_r1429055071_ | 0easy
|
Title: [ERROR] asyncio.exceptions.CancelledError
Body: The program throws an error at runtime, how can this error be solved?
```python
[ERROR] [2022-06-27 14:13:51,391:asyncio.events]
Traceback (most recent call last):
File "/home/xumaoyuan/.virtualenvs/lib/python3.8/site-packages/distributed/utils.py", line 761, in wrapper
return await func(*args, **kwargs)
File "/home/xumaoyuan/.virtualenvs/lib/python3.8/site-packages/distributed/client.py", line 1400, in _handle_report
await self._reconnect()
File "/home/xumaoyuan/.virtualenvs/lib/python3.8/site-packages/distributed/utils.py", line 761, in wrapper
return await func(*args, **kwargs)
File "/home/xumaoyuan/.virtualenvs/lib/python3.8/site-packages/distributed/client.py", line 1211, in _reconnect
await self._ensure_connected(timeout=timeout)
File "/home/xumaoyuan/.virtualenvs/lib/python3.8/site-packages/distributed/client.py", line 1241, in _ensure_connected
comm = await connect(
File "/home/xumaoyuan/.virtualenvs/lib/python3.8/site-packages/distributed/comm/core.py", line 313, in connect
await asyncio.sleep(backoff)
File "/usr/lib/python3.8/asyncio/tasks.py", line 659, in sleep
return await future
asyncio.exceptions.CancelledError
```
| 0easy
|
Title: Responsive support for echart
Body: Thanks for the great project!
"echarts" is not responsive, so I think that needs to be addressed.
In echarts, when the resize event is fired, it seems to be enough to resize `this.chart` as follows.
```js
this.chart.resize()
```
If possible, I would appreciate it if you could support this.
reference:https://apache.github.io/echarts-handbook/en/concepts/chart-size/ | 0easy
|
Title: Fix type hints for keyboard builders
Body: ### Problem
Type hints for `InlineKeyboardBuilder` and `ReplyKeyboardBuilder` `.as_markup()` method are generic. So now it's `Union[InlinekeyboardMarkup, ReplyKeyboardMarkup]`.
It would be better to make `InlineKeyboardBuilder.as_markup` return type be `InlineKeyboardMarkup` and similarly for the `ReplyKeyboardBuilder` class
### Possible solution
Separate type hints for builder classes
### Alternatives
_No response_
### Code example
```python3
inline_keyboard = (
InlineKeyboardBuilder()
.add(InlineKeyboardButton(text="Hello World", url="https://google.com"))
.as_markup()
) # type is InlineKeyboardMarkup | ReplyKeyboardMarkup
# Keyboard came from InlineKeyboardBuilder but has generic type
# so linter complains about it
# "ReplyKeyboardMarkup" is incompatible with "InlineKeyboardMarkup"
await bot.edit_message_reply_markup(0, 0, reply_markup=inline_keyboard)
```
### Additional information
_No response_ | 0easy
|
Title: Disable welcome message
Body: I'm working on migrating my `.xonshrc` into a xontrib library, but after I've done that, I get the welcome message, which says "Create ~/.xonshrc file manually or use xonfig to suppress the welcome message".
How does one "use xonfig to suppress the welcome message"?
I can't find anything in the docs or the xonfig help that suggests how to suppress the welcome message. Moreover, when I look at the code, I don't see any hooks that could plausibly work to suppress the message.
It looks like I can suppress it by monkeypatching `xonsh.main.print_welcome_message`.
Or by setting `xonsh.xonfig.WELCOME_MSG = []`.
It looks like I can do it in one line, avoiding polluting the namespace, with `__import__('xonsh.xonfig').xonfig.WELCOME_MSG = []`. Is that the recommended syntax? Should there not perhaps be a cleaner way to do that, and should the welcome message make it easier by providing or linking to that technique?
|
Title: [FEATURE] scribe.rip alternative to medium.com
Body: <!--
DO NOT REQUEST UI/THEME/GUI/APPEARANCE IMPROVEMENTS HERE
THESE SHOULD GO IN ISSUE #60
REQUESTING A NEW FEATURE SHOULD BE STRICTLY RELATED TO NEW FUNCTIONALITY
-->
**Describe the feature you'd like to see added**
Allow [scribe.rip](https://scribe.rip) to replace all `medium.com` and `*.medium.com` results
**Additional context**
Should function the same as `twitter.com` -> `nitter.net`, `instagram.com` -> `bibliogram.art/u`, etc, but with the exception that it needs to replace the full domain (including subdomain) for every matching result, not just the `medium.com` portion.
| 0easy
|
Title: Improve error message in Placeholder
Body: `Placeholder` raises an issue if the template has undefined parameters. We use jinja under the hood and we can leverage jinja's error to make the error message clearer:
https://github.com/ploomber/ploomber/blob/2e0d764c8f914ba21480b275c545ea50ff800513/src/ploomber/placeholders/placeholder.py#L221
Example:
```python
from ploomber.placeholders.placeholder import Placeholder
# breaks because the variable (an int with value 1) does not have an attribute "some_attribute"
Placeholder('SELECT * FROM {{variable.some_attribute}}').render(dict(variable=1))
```
However, the current implementation does not show the original error message (right after `jinja2 raised an UndefinedError`), so we need to modify it.
add tests here: https://github.com/ploomber/ploomber/blob/master/tests/placeholders/test_placeholder.py
the tests should check that an exception is raised and verify the error message; [see this for an example](https://docs.pytest.org/en/6.2.x/assert.html#assertions-about-expected-exceptions)
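A sketch of what such a test could look like (the exact exception type and message fragment are assumptions):
```python
import pytest
from ploomber.placeholders.placeholder import Placeholder

def test_render_shows_original_jinja_error():
    placeholder = Placeholder('SELECT * FROM {{variable.some_attribute}}')
    with pytest.raises(Exception) as excinfo:
        placeholder.render(dict(variable=1))
    # jinja's original UndefinedError text should survive in the message
    assert 'some_attribute' in str(excinfo.value)
```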
| 0easy
|
Title: Remove deprecated `accept_plain_values` from `timestr_to_secs` utility function
Body: It was deprecated in RF 6.1 (#4522) because it wasn't used by Robot itself and wasn't considered too useful in general. It can now be removed. | 0easy
|
Title: Add Linter and Formatter for python source codes
Body: /kind feature
**Describe the solution you'd like**
In kubeflow/katib, there is a lot of source code written in Python. While there are well-defined CI processes for source code written in Golang, for Python code only charmed-katib seems to have such a process now.
There should be a fixed linter, formatter and tester with rule-config files (.flake8, .pylintrc, ...) and a GitHub Actions workflow to check them all. For example, there are lots of unit tests for suggestion, but the current CI process doesn't check whether they fail, or whether the test coverage decreased, in every PR.
If there are any convention checking tools, please share them with the katib community :)
**Anything else you would like to add:**
If you guys agree to this, I'd like to contribute to this feature in some parts. | 0easy
|
Title: [new]: `array_concat(array1, array2)`
Body: ### Check the idea has not already been suggested
- [X] I could not find my idea in [existing issues](https://github.com/unytics/bigfunctions/issues?q=is%3Aissue+is%3Aopen+label%3Anew-bigfunction)
### Edit the title above with self-explanatory function name and argument names
- [X] The function name and the argument names I entered in the title above seems self explanatory to me.
### BigFunction Description as it would appear in the documentation
-
### Examples of (arguments, expected output) as they would appear in the documentation
array_concat([1], [2, 3]) --> [1, 2, 3] | 0easy
|
Title: Make markdown report pretty
Body: 💄 | 0easy
|
Title: Add doc strings to method annotation classes
Body: Method annotation classes in `uplink/decorators.py` are missing class doc strings. To improve code documentation, we need to add doc strings to the following classes, adhering the [Google Style Guide](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments) for consistency with the rest of the codebase:
- [x] `uplink.decorators.headers`
- [x] `uplink.decorators.form_url_encode`
- [x] `uplink.decorators.multipart`
- [x] `uplink.decorators.json`
- [x] `uplink.decorators.timeout`
- [x] `uplink.decorators.args` | 0easy
|
Title: Document how to use Faker inside LazyAttribute
Body: #### The problem
The class `factory.Faker` is a wrapper around the "real" Faker.
It works fine but sometimes you need to use it inside a LazyAttribute and you need to generate a value.
At the moment I use a trick like
```python
class MyFactory(factory.Factory):
class Params:
user = None
@factory.lazy_attribute
def current_ip_address(obj):
if obj.user:
return obj.user.ip_address
else:
return factory.Faker('ipv4').evaluate(None, None, {'locale': factory.Faker._DEFAULT_LOCALE})
```
#### Proposed solution
In some cases I can use `Maybe` but not always.
Would be nice to have a section in the documentation that explain how to use `Faker` in `LazyAttribute`
| 0easy
|
Title: BotKicked exception match string outdated?
Body: Looks like the error message string expected by BotKicked exception isn't matching the current Telegram API response.
So the `BotKicked` exception is never raised.
## Expected Behavior
`BotKicked` exception should be raised.
## Current Behavior
`TelegramAPIError` exception is raised with the following message: "Forbidden: bot was kicked from the group chat".
However, `BotKicked` exception isn't raised (it expects 'bot was kicked from a chat').
### Steps to Reproduce
1. Add bot to the group
2. Remove bot from the group
3. Send a message from the bot to this group
### Failure Logs
Here's the actual response from the Telegram API:
```
{
"ok": false,
"error_code": 403,
"description": "Forbidden: bot was kicked from the group chat"
}
```
The [API source code](https://github.com/tdlib/telegram-bot-api/blob/master/telegram-bot-api/Client.cpp) has 3 situations for BotKicked:
- Forbidden: bot was kicked from the group chat
- Forbidden: bot was kicked from the supergroup chat
- Forbidden: bot was kicked from the channel chat
| 0easy
|
Title: [Test] E2e Tests for Notebook Examples
Body: ### What you would like to be added?
We plan to add e2e tests for notebooks in CI/CD, run with papermill.
REF: https://github.com/nteract/papermill
The notebook examples in need of e2e tests are listed in the following table:
Under `examples/v1beta1/kubeflow-pipelines`:
- [ ] early-stopping.ipynb
- [ ] kubeflow-e2e-mnist.ipynb
Under `examples/v1beta1/sdk`:
- [ ] cmaes-and-resume-policies.ipynb
- [ ] nas-with-darts.ipynb
- [ ] tune-train-from-func.ipynb
And I would strongly recommend that we start with examples under the `sdk` subdirectory, since examples under `kubeflow-pipelines` need full kubeflow components on the test env and are more difficult :)
### Why is this needed?
This will help us ensure the correctness of our notebook examples for data scientists.
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | 0easy
|
Title: Add Flower Baseline: [FedRS]
Body: ### Paper
Xin-Chun Li, De-Chuan Zhan. FedRS: Federated Learning with Restricted Softmax for Label Distribution Non-IID Data (KDD'21)
### Link
https://dl.acm.org/doi/10.1145/3447548.3467254
### Maybe give motivations about why the paper should be implemented as a baseline.
FedRS (Federated Learning with Restricted Softmax) is a method to correct the negative effect of local training with missing classes by restricting the update of weights via a corrected softmax term. The main parameter is alpha (between 0.0 and 1.0), which determines the strength of this correction. If alpha=1.0, this is the same as vanilla FedAvg.
FedRS (100+ citations) is referenced in other papers exploring Federated Learning with non-iid datasets, such as [FedLC](https://arxiv.org/abs/2209.00189) (Federated Learning with Label Distribution Skew via Logits Calibration) and [FedConcat](https://arxiv.org/abs/2312.06290) (Exploiting Label Skews in Federated Learning with Model Concatenation).
The plan is to reproduce the results for FedAvg and FedRS (with different alpha values) in Table 5 of the paper:
<img width="509" alt="Screenshot 2024-11-19 at 06 14 23" src="https://github.com/user-attachments/assets/66f11575-5c62-4c9d-b501-c9f368391c33">
### Is there something else you want to add?
I've found no mention of FedRS in PRs or in existing baselines. I've started the repro process.
### Implementation
#### To implement this baseline, it is recommended to do the following items in that order:
### For first time contributors
- [x] Read the [`first contribution` doc](https://flower.ai/docs/first-time-contributors.html)
- [x] Complete the Flower tutorial
- [x] Read the Flower Baselines docs to get an overview:
- [x] [How to use Flower Baselines](https://flower.ai/docs/baselines/how-to-use-baselines.html)
- [x] [How to contribute a Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html)
### Prepare - understand the scope
- [X] Read the paper linked above
- [X] Decide which experiments you'd like to reproduce. The more the better!
- [X] Follow the steps outlined in [Add a new Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html#add-a-new-flower-baseline).
- [X] You can use as reference [other baselines](https://github.com/adap/flower/tree/main/baselines) that the community merged following those steps.
### Verify your implementation
- [X] Follow the steps indicated in the `EXTENDED_README.md` that was created in your baseline directory
- [X] Ensure your code reproduces the results for the experiments you chose
- [X] Ensure your `README.md` is ready to be run by someone that is not familiar with your code. Are all step-by-step instructions clear?
- [X] Ensure running the formatting and typing tests for your baseline runs without errors.
- [X] Clone your repo on a new directory, follow the guide on your own `README.md` and verify everything runs. | 0easy
|
Title: Supertrend gives different results on 5min timeframe in TradingView
Body: Supertrend indicator is showing me different values in TradingView on 5m timeframe.
I'm trying to fix it myself but I have no idea where to start; I'm very new to this. I would appreciate any suggestions.
Version:
0.3.14b0 | 0easy
|
Title: Improve `--sysinfo` by providing version data and env variables insights
Body: ## Feature description
Enhance the `--sysinfo` option to include additional useful and relevant information:
1. Indicate the version of gpt-engineer that is running, and specify whether it is a released version or a development version (installed via pip or from GitHub repo).
2. Include which environment variables are set, ensuring that API key values and other sensitive data are masked. The focus is to verify that these variables are correctly set without exposing their values.
## Motivation/Application
This feature is useful for several reasons:
1. **Version tracking** - Knowing the exact version of gpt-engineer in use helps us diagnose issues more effectively, distinguishing between potential bugs in released versions and bugs in the dev version.
2. **Environment configuration insight** - Including environment variables in the sysinfo output will provide valuable context about the user's setup, aiding in the debugging process while maintaining security by masking sensitive information.
3. **Security compliance** - Masking API keys and other sensitive information in environment variables ensures no sensitive data is inadvertently exposed, maintaining user trust and compliance with security best practices.
Enhancing the `--sysinfo` option with these capabilities can streamline the debugging process, improve security, and provide clearer insights into user environments. This will enable us to resolve issues more quickly and achieve greater stability in our software.
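A minimal sketch of the masking logic (the marker list is an assumption):
```python
import os

SENSITIVE_MARKERS = ('KEY', 'TOKEN', 'SECRET', 'PASSWORD')

def masked_environment() -> dict:
    report = {}
    for name, value in os.environ.items():
        if any(marker in name.upper() for marker in SENSITIVE_MARKERS):
            # confirm the variable is set without exposing its value
            report[name] = '***set***' if value else '***empty***'
        else:
            report[name] = value
    return report
```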
I'm labeling this as a good first issue, too!
| 0easy
|
Title: Curio still mentioned in docs although unsupported since 0.21.0
Body: See discussion https://github.com/encode/httpx/discussions/2232?converting=1, see also comment https://github.com/encode/httpx/discussions/1953#discussioncomment-1715439
Curio is still mentioned in the documentation here as being supported: https://www.python-httpx.org/async/#curio
But it was dropped in version 0.21.0, when HTTPCore dropped curio support as part of the 0.14.0 redesign.
There is discussion on getting curio support back: https://github.com/encode/httpx/discussions/1953
But right now the documentation is incorrect and curio should be dropped from it. | 0easy
|
Title: Datatree docs should mention that assigning to `.dataset` is allowed
Body: ### What is your issue?
See https://github.com/xarray-contrib/datatree/issues/312, fyi @flamingbear | 0easy
|
Title: [Feature request] Add apply_to_images to CropNonEmptyMaskIfExists
Body: | 0easy
|
Title: Refactor / rewrite Upload_ReactComponent.react.js
Body: The `Upload_ReactComponent.react.js` would really need some refactoring. This would make further development a lot easier and the package more maintainable. | 0easy
|
Title: Set the version on pyproject.toml automatically
Body: ## Description
Set [the version on pyproject.toml file](https://github.com/scanapi/scanapi/blob/master/pyproject.toml#L3) automatically. We want to avoid to manually [bump the version](https://github.com/scanapi/scanapi/pull/212/files#diff-522adf759addbd3b193c74ca85243f7dR3) for each release PR.
Maybe this would be a good candidate: https://github.com/mtkennerly/poetry-dynamic-versioning. It needs more investigation | 0easy
|
Title: [Feature request] Add apply_to_images to ToSepia
Body: | 0easy
|
Title: Add Stripe payment settings
Body: Add Stripe payment settings: we need to integrate Stripe and implement the display. The theme could let the homepage switch between showing a single product and showing multiple products.
| 0easy
|
Title: Race condition when connecting with the same websockets transport twice at the same time
Body: Hi @leszekhanusz ,
First of all, thanks again for implementing the subscription part, that's great!
I still get some issues to set up multiple subscriptions and I'm not sure how to solve this. Consider taking your example: https://github.com/graphql-python/gql/blame/master/README.md#L318-L342
Can you confirm that, regarding subscriptions, `asyncio.create_task(...)` will immediately run the function in another thread, and that all the `await taskX` calls are there to keep the program blocking until each task finishes?
On my side even with your example I get this random error (it's not immediate, sometimes after 1 second, sometimes 10...):
```
RuntimeError: cannot call recv while another coroutine is already waiting for the next message
```
The message is pretty explicit but I don't understand how to bypass this 😢
If you have any idea 👍
Thank you,
EDIT: that's weird because sometimes without modifying the code, the process can run more than 5 minutes without having this error...
EDIT2: Note that sometimes I also get this error about the `subscribe(...)` method
```
async for r in self.ws_client.subscribe(subscriptions['scanProbesRequested']):
TypeError: 'async for' requires an object with __aiter__ method, got generator
```
EDIT3: If I use a different way of doing async (with the same library)
```
try:
loop = asyncio.get_event_loop()
task3 = loop.create_task(execute_subscription1())
task4 = loop.create_task(execute_subscription2())
loop.run_forever()
```
it works without any error. That's really strange... | 0easy
|
Title: `FileDrop` limit file types with 'accept'
Body: I'd like to limit file types accepted by the `FileDrop`, similar to the `FileInput` of #46 , but currently the `FileDrop` does not take the `accept` kwarg | 0easy
|
Title: Confusing name for degree of freedom in Chi2 metrics
Body: We currently use `df` to represent degree of freedom in chi^2 metrics. This is confusing, as it could potentially be read as dataframe; we may want to change to something like `deg_of_free` or `dff`, something less ambiguous | 0easy
|
Title: rare label encoder: add warning when categories in variables are below n_categories
Body: Add a warning to inform users that some of the categorical variables in their data sets contain fewer categories than the limit indicated in the transformer to perform the rare imputation | 0easy
|
Title: np.median raises AssertionError for empty arrays while numpy returns nan
Body: ## Reporting a bug
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
This is a very similar issue to #8451 but here it affects `numpy.median` and setting the error_model does not change the outcome.
```python
import unittest
import numpy as np
from numba import njit
def median_numpy(arr):
return np.median(arr)
@njit
def median_numba(arr):
return np.median(arr)
class TestCase(unittest.TestCase):
def test_median_numpy(self):
self.assertTrue(np.isnan(median_numpy(np.array([], dtype=float))))
self.assertTrue(np.isnan(median_numpy(np.array([]))))
def test_median_numba(self):
self.assertTrue(np.isnan(median_numba(np.array([], dtype=float))))
self.assertTrue(np.isnan(median_numba(np.array([]))))
if __name__ == "__main__":
unittest.main()
```
While numpy returns `nan`, numba raises an AssertionError:
```
Failure
Traceback (most recent call last):
File "/Users/leo/code/msi/code/tests/unit/misc/test_numpy_util.py", line 88, in test_median_numba
self.assertTrue(np.isnan(NumpyUtil.median_numba(np.array([], dtype=float))))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/leo/.miniconda3/envs/exp-2023-10/lib/python3.11/site-packages/numba/np/arraymath.py", line 1556, in _select_two
assert high > low # by construction
AssertionError
```
| 0easy
|
Title: Make `plot_config` and choice of renderer a global config
Body: As described [here](https://lux-api.readthedocs.io/en/latest/source/guide/style.html#styling-custom-plot-settings),`plot_config` is used for setting the plotting style for the rendered visualization. Currently, this is a property that is tied with the dataframe, we should extend this to make this a global config setting (e.g., `lux.config.plot_setting`). The configuration is dependent on the choice of renderer too, so it makes sense to have this as a global setting. | 0easy
|
Title: Replacing hard coded /tmp/ for some instances in the code
Body:
# /tmp/ has been hard coded in some instances
This will be replaced with https://docs.python.org/3/library/tempfile.html which has been well implemented in some instances in the code. | 0easy
|
Title: OpenSSF Best Practices Badge metric API
Body: The canonical definition is here: https://chaoss.community/?p=3939 | 0easy
|
Title: Deprecate `elapsed_time_to_string` accepting time as milliseconds
Body: The `robot.utils.elapsed_time_to_string` utility function currently accepts the elapsed time to be formatted as an integer or float representing milliseconds. This made sense earlier, because the `elapsedtime` attribute of our result model objects contained milliseconds as well, but as part of #4258 we now use `elapsed_time` that contains a `timedelta`. `timedelta` has microsecond precision and also directly support seconds with their `total_seconds()` method. Our utils working with milliseconds doesn't thus make much sense anymore.
The `elapsed_time_to_string` was already enhanced to accept the elapsed time as a `timedelta`, but I believe we should change it to consider ints/floats as seconds as well. Although it's unlikely this function is used outside our code base, that is certainly possible, and just changing the behavior wouldn't be good. I thus believe we should deprecate the old behavior first. The problem is that the int/float value itself cannot tell whether it should be interpreted as seconds or milliseconds, so we need to add a new argument to control that. If the new argument is used, I believe it could be `seconds=True`, and then the value is interpreted as seconds. If it's not used, we can interpret the value as milliseconds and emit a deprecation warning; a sketch is below.
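A sketch of the proposed deprecation (not the actual implementation; the formatting logic itself stays unchanged):
```python
import warnings
from datetime import timedelta

def elapsed_time_to_string(elapsed, include_millis=True, seconds=False):
    if isinstance(elapsed, timedelta):
        elapsed = elapsed.total_seconds()
    elif not seconds:
        warnings.warn("'elapsed_time_to_string' will interpret int/float "
                      "values as seconds in the future. Use 'seconds=True' "
                      "to get the new behavior already now.")
        elapsed /= 1000
    ...  # format 'elapsed' (now in seconds) as before
```
| 0easy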
|
Title: Interactive Visualizer demos with Binder
Body: Using Ipywidgets, we can create interactive Yellowbrick visualizations. I would like to allow other users to see these interactive demos using Binder. Currently our README has a Binder button that launches a Docker with our examples.ipynb notebook viewable. This allows users can interact with it without having to clone YB and run the notebooks locally. Additional research is needed to determine how Binder can show the demos.
Images of an interactive visualization. It includes a dropdown menu that allows users to toggle between different classification visualizations:
<img width="885" alt="screen shot 2018-08-22 at 5 06 36 pm" src="https://user-images.githubusercontent.com/24831129/44490962-cdb91c80-a62d-11e8-83e3-0961e2002e19.png">
<img width="923" alt="screen shot 2018-08-22 at 5 06 55 pm" src="https://user-images.githubusercontent.com/24831129/44490968-cf82e000-a62d-11e8-99f7-3f4e86db4b41.png">
| 0easy
|
Title: Type conversion: Ignore hyphens when matching enum members
Body: Enum conversion is already now case, space and underscore insensitive (#3611). Ignoring also hyphens would mean that enums like
```python
class Click(Enum):
left_click = auto()
right_click = auto()
```
could be used as `left-click` in addition to the exact match `left_click` and other normalized variants like `left click`.
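A sketch of the normalization this implies (my own illustration, not Robot Framework's actual matching code):
```python
def normalize(member_name: str) -> str:
    # case-, space-, underscore- and (proposed) hyphen-insensitive
    return member_name.lower().replace(' ', '').replace('_', '').replace('-', '')

assert normalize('left-click') == normalize('left_click') == normalize('Left Click')
```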
| 0easy
|
Title: Add max length to `secure_filename()`
Body: There is already a `TODO` in the source code ([falcon/util/misc.py](https://github.com/falconry/falcon/blob/master/falcon/util/misc.py#L367)):
```python
# TODO(vytas): max_length (int): Maximum length of the returned
# filename. Should the returned filename exceed this restriction, it is
# truncated while attempting to preserve the extension.
```
Actually implement this `max_length` parameter according to the above note.
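A sketch of the truncation logic described in the note (my own take, not Falcon's implementation):
```python
import os

def truncate_filename(filename: str, max_length: int) -> str:
    if len(filename) <= max_length:
        return filename
    stem, ext = os.path.splitext(filename)
    if len(ext) >= max_length:  # extension alone doesn't fit; truncate raw
        return filename[:max_length]
    return stem[:max_length - len(ext)] + ext

assert truncate_filename('quarterly_report_final.txt', 10) == 'quarte.txt'
```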
Also add a new option to [`MultipartParseOptions`](https://falcon.readthedocs.io/en/stable/api/multipart.html#falcon.media.multipart.MultipartParseOptions) that is used when evaluating `part.secure_filename`.
In order to avoid a breaking change, both values should be `None` by default, with a note that starting from Falcon 5.0, this value will be non-nullable, with the new default of `NN` characters (please suggest). | 0easy
|
Title: support for debugging R scripts
Body: Currently, the `Task.debug()` method only works on Python. If someone tries to debug an R script, this happens:
```py
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-5-828509ada065> in <cell line: 1>()
----> 1 dag['model-run-adultsVsChildren-child'].debug()
~/opt/anaconda3/envs/wordsense_pipeline/lib/python3.8/site-packages/ploomber/tasks/notebook.py in debug(self, kind)
382 """
383 if self.source.language != 'python':
--> 384 raise NotImplementedError(
385 'debug is not implemented for "{}" '
386 'notebooks, only python is supported'.format(
NotImplementedError: debug is not implemented for "r" notebooks, only python is supported
```
We should implement support for debugging R scripts. The implementation isn't that difficult and will be very similar to what we already have for [Python scripts](https://github.com/ploomber/ploomber/blob/2d92ebcea2055da169eadd22dd5afa4d953d878e/src/ploomber/tasks/notebook.py#L373).
## Implementation observations
### providing context to the user
To facilitate learning how to use the debugger we should add a new FAQ that explains how to debug R scripts and add a link to it in the error message. something like:
```python
>>> task.debug()
```
```
You started a debugging session, to learn more: https://ploomber/s/r-debug
```
we need to add a new file here:
https://github.com/ploomber/ploomber/tree/master/doc/user-guide/faq
and then list it here:
https://github.com/ploomber/ploomber/blob/master/doc/user-guide/faq_index.rst
once that's done, ping @edublancas so he can create the short URL (`https://ploomber/s/r-debug`)
### Default values
We need to change the default values of the `.debug()` function; it's currently set to `kind='ipdb'` - but that's a Python-specific value. I think we should change it to `kind=None`.
Then figure out the actual default depending on the language: Python (ipdb) or R (None)
### debugging session
the current implementation supports two debugging modes in Python: starting a debugging session that allows you to move line by line in a script (using either ipdb or pdb), and a post-mortem session (which runs the script and then starts the debugger once it throws an error).
based on [this](https://adv-r.hadley.nz/debugging.html#browser-commands), it looks like `browser()` is the equivalent in R to pdb in Python; essentially, we'd need to add a `browser()` line at the beginning of the script and then start a subprocess to start the debugging session
### post-mortem debugging
the second debugging option is post-mortem, which runs the script and starts the debugging session once it fails. In R, it looks like adding [options(error = recover)](https://rdrr.io/r/base/options.html) at the top of the script will work.
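A rough Python sketch of how both modes could inject the debug statement (the exact R invocation still needs to be figured out; an interactive session is required for the debugger prompt to work):
```python
import subprocess
import tempfile
from pathlib import Path

def debug_r_script(path, kind=None):
    prefix = 'options(error = recover)' if kind == 'pm' else 'browser()'
    source = prefix + '\n' + Path(path).read_text()
    with tempfile.NamedTemporaryFile('w', suffix='.R', delete=False) as f:
        f.write(source)
    subprocess.run(['R', '--no-save', '-f', f.name])
```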
## options matrix
These are the possible values for the `kind` argument in the `debug()` method depending on the language:
*Python*
ipdb, pdb and pm (remain without changes)
*R*
None: starts a debugging session (aka it uses `browser()`)
pm: starts a post-mortem session (aka it uses `options(error = recover)`)
If we're dealing with an R script and we get a value that is not None or pm, we should throw an error. Same with Python (we should only accept ipdb, pdb, pm and None). In the case of None, we replace it internally with ipdb.
| 0easy
|
Title: Handle edge case search queries better
Body: We should do some preprocessing of `search_query` here to prevent errors in edge cases, e.g. where the query ends with an operator like `&` or `|`.
Words separated by only a space and no operator also have no effect currently; we could modify these such that `&` is inserted automatically.
List of valid operators I am aware of:
```
&
|
<->
!
```
The syntax also allows combinations like `&!` and groupings via parentheses.
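A rough sketch of such preprocessing (operator handling simplified):
```python
import re

OPERATOR_CHARS = '&|!<->'

def preprocess(search_query: str) -> str:
    # trim trailing operators like '&' or '|' that would break to_tsquery
    query = search_query.strip().rstrip(OPERATOR_CHARS).rstrip()
    # join words separated only by whitespace with '&'
    return re.sub(r'(\w)\s+(?=\w)', r'\1 & ', query)
```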
https://github.com/LAION-AI/Open-Assistant/blob/main/backend/oasst_backend/prompt_repository.py#L1023 | 0easy
|
Title: Add MFLES from StatsForecast
Body: MFLES is a newer model that was implemented in Nixtla's statsforecast:
https://nixtlaverse.nixtla.io/statsforecast/docs/models/mfles.html
Since there are other StatsForecast methods (theta/ets/arima etc.), hopefully it is easy to add this as well!
Let me know if there are any questions with parameters or the implementation.
| 0easy
|
Title: [ENH] technical roadmap 2025
Body: Umbrella issue for collecting, consolidating, and prioritizing roadmap items for the 2025 increment.
How to contribute:
* user or developer - community suggestions appreciated in this thread!
* new to open source and want to contribute code? Check if you see sth interesting and get in touch on [discord](https://discord.com/invite/54ACzaFsn7) - at meet-ups, workstream meetings, or just chat | 0easy
|
Title: OperatingSystem library docs have broken link / title
Body: https://robotframework.org/robotframework/latest/libraries/OperatingSystem.html contains three broken HTML snippets I could identify:
One in Table of Contents:
<img width="301" alt="Screenshot 2023-01-24 at 14 58 38" src="https://user-images.githubusercontent.com/13387304/214297779-7e728790-f913-4392-b824-2e4155698b9d.png">
And two in the actual section where that link should go to. (one is the title of the section, other is the `[https://docs.python.org/3/library/pathlib.html pathlib.Path]` link showing inline)
<img width="1120" alt="Screenshot 2023-01-24 at 14 58 52" src="https://user-images.githubusercontent.com/13387304/214297847-152ea93f-de57-4d8a-b956-5230d1a9dd60.png">
| 0easy
|
Title: Support for Modin
Body: ### 🚀 The feature
This is a feature proposal, more than a feature request: shall we support [`modin`](https://github.com/modin-project/modin) in pandas-ai?
The implementation would require:
- an additional dependency
- optional logic to check if `modin` is installed, if marked as an optional dependency (see the sketch after this list)
- replacing `pandas as pd` with `modin.pandas as pd` in the execution sandbox
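For the optional-dependency part, something as simple as this could work inside the sandbox (sketch):
```python
try:
    import modin.pandas as pd  # use modin when it's installed
except ImportError:
    import pandas as pd  # fall back to plain pandas
```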
### Motivation, pitch
Quoting `modin`'s docs:
> Modin is a drop-in replacement for [pandas](https://github.com/pandas-dev/pandas). While pandas is single-threaded, Modin lets you instantly speed up your workflows by scaling pandas so it uses all of your cores. Modin works especially well on larger datasets, where pandas becomes painfully slow or runs [out of memory](https://modin.readthedocs.io/en/latest/getting_started/why_modin/out_of_core.html).
>
>By simply replacing the import statement, Modin offers users effortless speed and scale for their pandas workflows:
```py
import modin.pandas as pd
``` | 0easy
|
Title: Incorrect typing of variable_values
Body: In `AIOHTTPTransport.execute` and `AsyncTransport.execute`, `variable_values` is typed as `Optional[Dict[str, str]]`. GraphQL variable objects are not limited to string values, so this typing is not right.
The typing used in `RequestTransport.execute` is more appropriate (`Optional[Dict[str, Any]]`).
Happy to submit a PR. | 0easy
|
Title: [ENH] Tigergraph bindings via PyTigergraph
Body: PyTigerGraph is aiming to provide stable Python bindings for TigerGraph, which may provide for better speed and upgrades vs. PyGraphistry maintaining the current direct REST bindings. This is currently blocked by the inability to load large datasets.
The target interface is unclear:
* We may want to provide a simple mode similar to the current, and ensure
* Add a passthrough mode to operate through PyTigergraph
* Internally, this may motivate finally externalizing the plugin style, which in turn may help with mutual maintenance
LINKS:
* Repo: https://github.com/pyTigerGraph/pyTigerGraph
* Blocking issue: https://github.com/pyTigerGraph/pyTigerGraph/issues/7
cc @HerkTG @parkererickson | 0easy
|
Title: No size of models in docs
Body: Would be nice if the docs included info on how many models come included and how large they are, similar to what's available for Whisper
https://github.com/openai/whisper

Useful info to know before starting the download process, thanks!
| 0easy
|
Title: Move in-code comments in `new_blocks.md` to admonitions
Body: See this conversation: [https://github.com/Significant-Gravitas/AutoGPT/pull/8725#discussion_r1872537543](https://github.com/Significant-Gravitas/AutoGPT/pull/8725#discussion_r1872537543) | 0easy
|
Title: [New feature] Add apply_to_images to ChromaticAbberation
Body: | 0easy
|
Title: Add warning message near chat window about model hallucinations
Body: A broader audience now begins to chat with our models. I saw multiple youtube videos in which people were not sure if OpenAssistant had an internet connection (search etc.) and they simply asked OA which sometimes stated that it would have internet access and people took that as a potentially "official" reply. At the current state OA has no tool-access and doesn't use search results to formulate its responses.
IMO for a general audience we need to add a warning near the chat that creates awareness for hallucination.
In general, outputs look very convincing, and not all people might be aware of how the system works or how convincingly the model can produce factually wrong outputs, potentially even in an authoritative tone. By design LLMs are great imitators (that's their training objective). People need to be skeptical about "facts" presented by the model.
The situation will improve a bit with plugins that can retrieve relevant information (e.g. via web-search) ... but still a warning would be appropriate. | 0easy
|
Title: [BUG] the new version having trouble while running
Body: the new version which is released 4 hrs back is not working on my existing colab notebook, its having this issue-
NameError Traceback (most recent call last)
[<ipython-input-2-e076c01ca006>](https://localhost:8080/#) in <cell line: 15>()
13 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
14
---> 15 model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
16 model_basename=model_basename,
17 use_safetensors=True,
7 frames
[/usr/local/lib/python3.10/dist-packages/auto_gptq/nn_modules/qlinear/qlinear_cuda_old.py](https://localhost:8080/#) in __init__(self, bits, group_size, infeatures, outfeatures, bias, use_cuda_fp16, kernel_switch_threshold, trainable)
81 self.kernel_switch_threshold = kernel_switch_threshold
82 self.autogptq_cuda_available = _autogptq_cuda_available
---> 83 self.autogptq_cuda = autogptq_cuda_256
84 if infeatures % 256 != 0 or outfeatures % 256 != 0:
85 self.autogptq_cuda = autogptq_cuda_64
NameError: name 'autogptq_cuda_256' is not defined
Can you help me solve this? | 0easy
|
Title: Add convenience methods to export and import fine-tuned prompts
Body: I can run["deep" prompt-tuning](https://github.com/bigscience-workshop/petals/blob/main/examples/prompt-tuning-personachat.ipynb) successfully.
but, how can I export the prefix tokens from the Petals client?
and, how can I use the prefix tokens which export from Petals client when I want to do some inference job?
I appreciate you help | 0easy
|
Title: [example] using Hamilton with duckdb
Body: # idea
https://duckdb.org/ is a hot new tool. It could be a nice way for people to load data. We should show some ways people could use it with Hamilton. IIRC duckdb does predicate pushdown when querying files like parquet, which can be faster than loading them via pandas and then applying filters.
Rough sketch:
1. data_loaders module that connects with duckdb
2. feature_logic module(s) that work from the output of the data_loaders
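Roughly what I have in mind (module, file and column names are all made up):
```python
# data_loaders.py
import duckdb
import pandas as pd

def raw_events(events_path: str) -> pd.DataFrame:
    # duckdb pushes the projection/filter into the parquet scan
    return duckdb.query(
        f"SELECT user_id, amount FROM '{events_path}' WHERE amount > 0"
    ).df()

# feature_logic.py
def spend_per_user(raw_events: pd.DataFrame) -> pd.Series:
    return raw_events.groupby('user_id')['amount'].sum()
```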
| 0easy
|
Title: Warn if x=0 when prior='log-uniform' / Space instance should check that bounds are valid
Body: I got the exception below while running forest_minimize(). I checked and the objective function never returned a NaN/inf. The objective function was using precision@6 as a metric. It had just run 11 iterations. The 11th iteration returned -0.0920692682266 as its score. So I'm guessing the exception is due to some number not being convertable to float32??
Any ideas what is wrong and how to work around it?
2017-01-19 19:31:12,321 INFO __main__: Finished iteration 11, NOT-BEST, metrics:
{'epochs': 41, 'precision@6': 0.092069268, 'learning_rate': 0.0029863437810542567, 'alpha': 1.0590354793387869e-06, 'no_components': 41}
Traceback (most recent call last):
File "./lightfm_optimize.py", line 86, in <module>
res_fm = forest_minimize(objective, space, x0=initial_x, n_calls=args.num_iterations, random_state=random_state, verbose=verbose)
File "/usr/local/lib/python2.7/site-packages/skopt/optimizer/forest.py", line 167, in forest_minimize
callback=callback, acq_optimizer="sampling")
File "/usr/local/lib/python2.7/site-packages/skopt/optimizer/base.py", line 262, in base_minimize
gp.fit(space.transform(Xi), yi)
File "/usr/local/lib64/python2.7/site-packages/sklearn/ensemble/forest.py", line 247, in fit
X = check_array(X, accept_sparse="csc", dtype=DTYPE)
File "/usr/local/lib64/python2.7/site-packages/sklearn/utils/validation.py", line 407, in check_array
_assert_all_finite(array)
File "/usr/local/lib64/python2.7/site-packages/sklearn/utils/validation.py", line 58, in _assert_all_finite
" or a value too large for %r." % X.dtype)
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').
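Given the title, a likely culprit is a log-uniform dimension with a lower bound of 0, since the log transform then produces -inf (a sketch of the suspicion, not a confirmed diagnosis):
```python
import numpy as np

np.log10(0.0)  # -> -inf; a log-uniform bound of 0 ends up here and later
               # trips sklearn's float32 finiteness check
```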
| 0easy
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.