Dataset columns: `text` (string, length 20 to 57.3k) · `labels` (class label, 4 classes)
Title: [API server] Helm warning: annotation "kubernetes.io/ingress.class" is deprecated Body: ``` W0310 23:08:14.543122 97112 warnings.go:70] annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead NAME: skypilot ``` We can choose which field to set based on the API server version.
0easy
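For the issue above, a minimal sketch of the suggested version check, assuming the `packaging` library and a hypothetical helper name; `spec.ingressClassName` is available on Kubernetes v1.18+:

```python
from packaging.version import Version


def ingress_class_values(server_version: str, ingress_class: str) -> dict:
    """Return the values to set, depending on the Kubernetes server version.

    v1.18+ supports spec.ingressClassName; older servers only understand
    the deprecated kubernetes.io/ingress.class annotation.
    """
    if Version(server_version) >= Version("1.18"):
        return {"spec": {"ingressClassName": ingress_class}}
    return {"metadata": {"annotations": {"kubernetes.io/ingress.class": ingress_class}}}
```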
Title: Implement Query Params Body: Support query params similarly to the existing `headers` implementation: ```yaml api: base_url: ${BASE_URL} headers: Authorization: ${BEARER_TOKEN} params: per_page: 10 ```
0easy
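A minimal sketch of how the proposed `params` key could be consumed, mirroring the existing `headers` handling; the config names here are hypothetical, and `requests` does the actual query-string encoding:

```python
import os

import requests

# hypothetical parsed config, after ${...} placeholders are expanded from the
# environment the same way the existing `headers` support does
config = {
    "base_url": os.environ["BASE_URL"],
    "headers": {"Authorization": os.environ["BEARER_TOKEN"]},
    "params": {"per_page": 10},
}

# requests encodes params into the query string, e.g. ?per_page=10
response = requests.get(
    config["base_url"],
    headers=config["headers"],
    params=config["params"],
)
```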
Title: Show "... is thinking" during a conversation Body: Make the bot indicate that it is waiting to generate a response inside conversations, for example an embed posted after a user sends a message that says something like "Thinking...". That message should be deleted when the bot actually responds to the conversation.
0easy
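A minimal sketch of the requested behavior, assuming discord.py and a hypothetical `generate_response` helper: post a placeholder embed, then delete it once the real reply is ready:

```python
import discord


async def reply_with_thinking_indicator(message: discord.Message) -> None:
    # let the user know the bot is working on a response
    placeholder = await message.channel.send(
        embed=discord.Embed(description="Thinking...")
    )
    answer = await generate_response(message.content)  # hypothetical helper
    # remove the indicator once the actual response is ready
    await placeholder.delete()
    await message.channel.send(answer)
```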
Title: DateTime library uses deprecated `datetime.utcnow()` Body: [datetime.datetime.utcnow()](https://docs.python.org/3/library/datetime.html#datetime.datetime.utcnow) has been deprecated in Python 3.12 and `datetime.datetime.now(datetime.UTC)` should be used instead. However, `datetime.UTC` is new in Python 3.11, so we need either version or feature detection to avoid using it with earlier Python versions.
0easy
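A sketch of the feature-detection approach; note that unlike `utcnow()`, the replacement returns a timezone-aware datetime:

```python
import datetime

try:
    _UTC = datetime.UTC  # added in Python 3.11
except AttributeError:
    _UTC = datetime.timezone.utc  # available on earlier versions


def utc_now() -> datetime.datetime:
    """Drop-in replacement for the deprecated datetime.utcnow()."""
    return datetime.datetime.now(_UTC)
```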
Title: Support stopping execution using `robot:exit-on-failure` tag Body: The command line argument `--exitonfailure` has been very useful. If we can have a reserved tag `robot:exit-on-failure` that would be great so that the same behavior can be enabled in individual test suite. For example: `suite1.robot`: ``` *** Settings *** Test Tags robot:exit-on-failure *** Test Cases *** Pass Pass Execution Pass this Scenario 1 Simple Task 1 Fail Simple Task 2 # Will not execute Scenario 2 # Will not execute Simple Task 1 Fail Simple Task 2 Scenario 3 # Will not execute Simple Task 1 *** Keywords *** Simple Task 1 Log To Console ${\n}Simple Task 1...${\n} Simple Task 2 Log To Console ${\n}Simple Task 2...${\n} ``` Output, something like this: ``` ============================================================================== Suite1 ============================================================================== Suite1.Test ============================================================================== Pass | PASS | Pass this ------------------------------------------------------------------------------ Scenario 1 Simple Task 1... Scenario 1 | FAIL | AssertionError ------------------------------------------------------------------------------ Scenario 2 | SKIP | Skipped due to exit-on-failure ------------------------------------------------------------------------------ Scenario 3 | SKIP | Skipped due to exit-on-failure ------------------------------------------------------------------------------ Suite1.Test | FAIL | 4 tests, 1 passed, 1 failed, 2 skipped ============================================================================== Suite1 | FAIL | 4 tests, 1 passed, 1 failed, 2 skipped ==============================================================================
0easy
Title: [Feature Request] DynDNS API can update subdomains Body: **Feature Description:** Using the [dynDNS API] it should be possible to update `A` and `AAAA` records of sub-domains, e.g. `sub.main.dedyn.io`. One possible way to achieve this would be via the already existing `hostname` parameter in the API: ``` curl -u main.dedyn.io:<token> https://update.dedyn.io/?hostname=sub.main.dedyn.io ``` **Context:** * according to [this comment](https://github.com/desec-io/desec-stack/issues/411#issuecomment-640259255) the current behavior does _not_ allow doing so * according to [this comment](https://github.com/desec-io/desec-stack/issues/411#issuecomment-640806623) the required change _could be relatively easy_. * this feature request stems from issue #411. [dynDNS API]: https://desec.readthedocs.io/en/latest/dyndns/update-api.html
0easy
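The proposed call expressed with `requests`, equivalent to the curl example above (the token value is a placeholder):

```python
import requests

token = "<token>"  # placeholder for the real deSEC token

response = requests.get(
    "https://update.dedyn.io/",
    params={"hostname": "sub.main.dedyn.io"},  # the proposed sub-domain update
    auth=("main.dedyn.io", token),
)
response.raise_for_status()
```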
Title: Fix CI notebook tests for MacOS and Windows Body: **Description of the issue** After activating MacOS and Windows testing in [ci-daily.yml](https://github.com/quantumlib/Cirq/blob/master/.github/workflows/ci-daily.yml) in #6331, the notebook tests failed on these platforms, [example](https://github.com/quantumlib/Cirq/actions/runs/6700241980). As a temporary solution, #6335 restricts notebook testing to run on Linux only. For a proper solution we need to fix the notebook tests on the Mac and Windows platforms. The affected notebook tests are those touched in #6335, i.e., those decorated with ``` @pytest.mark.skipif(sys.platform != "linux", reason="Linux-only test") ``` **Cirq version** 1.3.0.dev at 34e8dab087c65ff62957e8fc33c418f19f47333a
0easy
Title: Increase nbconvert timeout Body: Sometimes the CircleCI build will fail because it took too long to run the example notebooks. This is a timeout in `nbconvert`. We should increase the timeout so that we do not get spurious build failures.
0easy
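For reference, the timeout in question is `ExecutePreprocessor`'s per-cell execution timeout; a sketch of raising it when executing a notebook programmatically:

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

nb = nbformat.read("example.ipynb", as_version=4)

# raise the per-cell timeout (in seconds); -1 disables the timeout entirely
ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
ep.preprocess(nb, {"metadata": {"path": "."}})
```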
Title: validate import_tasks_from Body: When loading `import_tasks_from: file.yaml`, we are not validating its contents. If the file is empty, `yaml.safe_load` will return `None`, which will throw a cryptic error message: https://github.com/ploomber/ploomber/blob/beb625cc977bcd34481608a91daddc5493e0983c/src/ploomber/spec/dagspec.py#L326 Fix: * If `yaml.safe_load` returns `None`, replace it with an empty list * If it returns something other than a list, raise an error saying we were expecting a list
0easy
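A sketch of the proposed validation (the function name is hypothetical):

```python
import yaml


def load_tasks(path):
    """Load an import_tasks_from file, validating its contents."""
    with open(path) as f:
        tasks = yaml.safe_load(f)
    if tasks is None:
        # empty file: treat as "no tasks" instead of failing cryptically
        return []
    if not isinstance(tasks, list):
        raise TypeError(
            f"Expected {path!r} to contain a list of tasks, "
            f"got {type(tasks).__name__}"
        )
    return tasks
```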
Title: Is there an example of how to integrate piccolo into an existing fastapi application? Body: Is there an example of how to integrate piccolo into an existing fastapi application? Something along the lines of the examples in the fastapi docs [here](https://fastapi.tiangolo.com/tutorial/sql-databases/) and [here](https://fastapi.tiangolo.com/advanced/async-sql-databases/#connect-and-disconnect). I'm interested in trying the ORM out since it reminds me a lot of how EntityFramework operates in the .NET world, but right now it seems like a lot of magic in a black box, and it appears to require using one of the generators to put all the pieces into place. That's just my perception. Anyway, I would love to see a clean and understandable example of how to put this into an existing fastapi app and use it similarly to the ORMs already mentioned in the fastapi docs. Thanks much - wg
0easy
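A minimal sketch of the connect/disconnect pattern, assuming a `piccolo_conf.py` that `engine_finder()` can locate: open a connection pool on startup and close it on shutdown:

```python
from fastapi import FastAPI
from piccolo.engine import engine_finder

app = FastAPI()


@app.on_event("startup")
async def open_database_connection_pool():
    # engine_finder() loads the engine defined in piccolo_conf.py
    await engine_finder().start_connection_pool()


@app.on_event("shutdown")
async def close_database_connection_pool():
    await engine_finder().close_connection_pool()
```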
Title: run profiling to determine loading times Body: We need to reduce the time it takes ploomber to load when running `ploomber` or `ploomber --help`. We should profile the load times of the individual Python packages involved and figure out how to optimize them for a faster user response.
0easy
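CPython's built-in `python -X importtime` flag prints a per-module import breakdown; a cruder but self-contained sketch of measuring just the top-level import cost:

```python
import time

start = time.perf_counter()
import ploomber  # noqa: E402 -- imported here deliberately so we can time it

elapsed = time.perf_counter() - start
print(f"import ploomber took {elapsed:.2f}s")
```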
Title: load_cardio() and load_letter() do not work under Ubuntu 14.04 Body: While running comb_example.py, the program may fail in the loadmat() function. A quick workaround is to use synthesized data instead of the real-world datasets. This only affects comb_example.py. It will be addressed in the next release.
0easy
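A sketch of the synthesized-data workaround using pyod's own generator (check the return order against your pyod version, as it has changed across releases):

```python
from pyod.utils.data import generate_data

# synthesized train/test data as a stand-in for the real-world .mat datasets
X_train, X_test, y_train, y_test = generate_data(
    n_train=200, n_test=100, contamination=0.1
)
```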
Title: Tox v4.14.1 is no longer expanding {envtmpdir} (and potentially other variables) Body: ## Issue We are using `package = external` and `package_env = build-metatensor-core` in our tox setup, and build the wheels with `pip wheel python/metatensor-core {[testenv]build_single_wheel_flags} --wheel-dir {envtmpdir}/dist` On tox 4.14.0, everything is fine, on 4.14.1 tox creates a directory literally named `{envtmpdir}/dist` (instead of expanding this to `.tox/build-metatensor-core/tmp/dist`. ```console $ ls \{envtmpdir\}/dist metatensor_core-0.2.0.dev7-py3-none-macosx_14_0_arm64.whl ``` ## Environment Provide at least: - OS: macOS 14.3.1 <details open> <summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary> ```console $ pip list Package Version ----------------------- -------- archspec 0.2.3 boltons 23.1.1 Brotli 1.1.0 build 1.0.3 cachetools 5.3.3 certifi 2024.2.2 cffi 1.16.0 chardet 5.2.0 charset-normalizer 3.3.2 colorama 0.4.6 conda 24.1.2 conda-libmamba-solver 24.1.0 conda-package-handling 2.2.0 conda_package_streaming 0.9.0 distlib 0.3.8 distro 1.9.0 filelock 3.13.1 fsspec 2024.2.0 idna 3.6 importlib-metadata 7.0.1 Jinja2 3.1.3 jsonpatch 1.33 jsonpointer 2.4 libmambapy 1.5.7 mamba 1.5.7 MarkupSafe 2.1.5 menuinst 2.0.2 mpmath 1.3.0 networkx 3.2.1 numpy 1.26.4 packaging 23.2 pip 24.0 platformdirs 4.2.0 pluggy 1.4.0 pycosat 0.6.6 pycparser 2.21 pyproject-api 1.6.1 pyproject_hooks 1.0.0 PySocks 1.7.1 requests 2.31.0 ruamel.yaml 0.18.6 ruamel.yaml.clib 0.2.8 setuptools 69.1.1 sympy 1.12 tomli 2.0.1 torch 2.2.1 tox 4.14.1 tqdm 4.66.2 truststore 0.8.0 typing_extensions 4.10.0 urllib3 2.2.1 virtualenv 20.25.1 wheel 0.42.0 zipp 3.17.0 zstandard 0.22.0 ``` </details> ## Output of running tox <details open> <summary>Output of <code>tox -rvv</code></summary> ```console $ tox -rvv -e core-tests build-metatensor-core: 111 W remove tox env folder /Users/guillaume/code/metatensor/.tox/build-metatensor-core [tox/tox_env/api.py:323] build-metatensor-core_sdist_meta: 111 W remove tox env folder /Users/guillaume/code/metatensor/.tox/build-metatensor-core_sdist_meta [tox/tox_env/api.py:323] core-tests: 115 I find interpreter for spec PythonSpec(path=/opt/miniforge3/bin/python3.11) [virtualenv/discovery/builtin.py:58] core-tests: 115 I proposed PythonInfo(spec=CPython3.11.7.final.0-64, exe=/opt/miniforge3/bin/python3.11, platform=darwin, version='3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65] core-tests: 115 D accepted PythonInfo(spec=CPython3.11.7.final.0-64, exe=/opt/miniforge3/bin/python3.11, platform=darwin, version='3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:67] core-tests: 116 D filesystem is not case-sensitive [virtualenv/info.py:25] core-tests: 130 I create virtual environment via CPython3Posix(dest=/Users/guillaume/code/metatensor/.tox/core-tests, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:50] core-tests: 130 D create folder /Users/guillaume/code/metatensor/.tox/core-tests/bin [virtualenv/util/path/_sync.py:12] core-tests: 131 D create folder /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages [virtualenv/util/path/_sync.py:12] core-tests: 131 D write /Users/guillaume/code/metatensor/.tox/core-tests/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:33] core-tests: 131 D home = /opt/miniforge3/bin 
[virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D implementation = CPython [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D version_info = 3.11.7.final.0 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D virtualenv = 20.25.1 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D base-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D base-exec-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D base-executable = /opt/miniforge3/bin/python3.11 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D symlink /opt/miniforge3/bin/python3.11 to /Users/guillaume/code/metatensor/.tox/core-tests/bin/python [virtualenv/util/path/_sync.py:32] core-tests: 131 D create virtualenv import hook file /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:91] core-tests: 131 D create /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:94] core-tests: 131 D ============================== target debug ============================== [virtualenv/run/session.py:52] core-tests: 132 D debug via /Users/guillaume/code/metatensor/.tox/core-tests/bin/python /opt/miniforge3/lib/python3.11/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:200] core-tests: 131 D { "sys": { "executable": "/Users/guillaume/code/metatensor/.tox/core-tests/bin/python", "_base_executable": "/opt/miniforge3/bin/python3.11", "prefix": "/Users/guillaume/code/metatensor/.tox/core-tests", "base_prefix": "/opt/miniforge3", "real_prefix": null, "exec_prefix": "/Users/guillaume/code/metatensor/.tox/core-tests", "base_exec_prefix": "/opt/miniforge3", "path": [ "/opt/miniforge3/lib/python311.zip", "/opt/miniforge3/lib/python3.11", "/opt/miniforge3/lib/python3.11/lib-dynload", "/Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages" ], "meta_path": [ "<class '_virtualenv._Finder'>", "<class '_frozen_importlib.BuiltinImporter'>", "<class '_frozen_importlib.FrozenImporter'>", "<class '_frozen_importlib_external.PathFinder'>" ], "fs_encoding": "utf-8", "io_encoding": "utf-8" }, "version": "3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]", "makefile_filename": "/opt/miniforge3/lib/python3.11/config-3.11-darwin/Makefile", "os": "<module 'os' (frozen)>", "site": "<module 'site' (frozen)>", "datetime": "<module 'datetime' from '/opt/miniforge3/lib/python3.11/datetime.py'>", "math": "<module 'math' from '/opt/miniforge3/lib/python3.11/lib-dynload/math.cpython-311-darwin.so'>", "json": "<module 'json' from '/opt/miniforge3/lib/python3.11/json/__init__.py'>" } [virtualenv/run/session.py:53] core-tests: 151 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/guillaume/Library/Application Support/virtualenv) [virtualenv/run/session.py:57] core-tests: 152 D got embed update of distribution %s from ('pip', PosixPath('/Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json')) [virtualenv/app_data/via_disk_folder.py:131] core-tests: 154 D install wheel from wheel /opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.42.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] core-tests: 154 D install setuptools from 
wheel /opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/setuptools-69.1.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] core-tests: 154 D install pip from wheel /opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-24.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] core-tests: 154 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools-69.1.0.dist-info to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/setuptools-69.1.0.dist-info [virtualenv/util/path/_sync.py:40] core-tests: 155 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel-0.42.0.dist-info to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/wheel-0.42.0.dist-info [virtualenv/util/path/_sync.py:40] core-tests: 155 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip-24.0.dist-info to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/pip-24.0.dist-info [virtualenv/util/path/_sync.py:40] core-tests: 156 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40] core-tests: 156 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/distutils-precedence.pth to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] core-tests: 157 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40] core-tests: 157 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:40] core-tests: 162 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel-0.42.0.virtualenv to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/wheel-0.42.0.virtualenv [virtualenv/util/path/_sync.py:40] core-tests: 163 D generated console scripts wheel wheel-3.11 wheel3 wheel3.11 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] core-tests: 191 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/pkg_resources to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] core-tests: 200 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/_distutils_hack to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] core-tests: 200 D copy 
/Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools-69.1.0.virtualenv to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/setuptools-69.1.0.virtualenv [virtualenv/util/path/_sync.py:40] core-tests: 201 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] core-tests: 234 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip-24.0.virtualenv to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/pip-24.0.virtualenv [virtualenv/util/path/_sync.py:40] core-tests: 234 D generated console scripts pip3 pip3.11 pip-3.11 pip [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] core-tests: 234 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:63] core-tests: 236 D write /Users/guillaume/code/metatensor/.tox/core-tests/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:33] core-tests: 236 D home = /opt/miniforge3/bin [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D implementation = CPython [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D version_info = 3.11.7.final.0 [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D virtualenv = 20.25.1 [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D base-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D base-exec-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D base-executable = /opt/miniforge3/bin/python3.11 [virtualenv/create/pyenv_cfg.py:38] core-tests: 238 W install_deps> python -I -m pip install numpy pytest pytest-cov toml 'torch==2.2.*' [tox/tox_env/api.py:425] Collecting numpy Using cached numpy-1.26.4-cp311-cp311-macosx_11_0_arm64.whl.metadata (114 kB) Collecting pytest Using cached pytest-8.0.2-py3-none-any.whl.metadata (7.7 kB) Collecting pytest-cov Using cached pytest_cov-4.1.0-py3-none-any.whl.metadata (26 kB) Collecting toml Using cached toml-0.10.2-py2.py3-none-any.whl.metadata (7.1 kB) Collecting torch==2.2.* Using cached torch-2.2.1-cp311-none-macosx_11_0_arm64.whl.metadata (25 kB) Collecting filelock (from torch==2.2.*) Using cached filelock-3.13.1-py3-none-any.whl.metadata (2.8 kB) Collecting typing-extensions>=4.8.0 (from torch==2.2.*) Using cached typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB) Collecting sympy (from torch==2.2.*) Using cached sympy-1.12-py3-none-any.whl.metadata (12 kB) Collecting networkx (from torch==2.2.*) Using cached networkx-3.2.1-py3-none-any.whl.metadata (5.2 kB) Collecting jinja2 (from torch==2.2.*) Using cached Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB) Collecting fsspec (from torch==2.2.*) Using cached fsspec-2024.2.0-py3-none-any.whl.metadata (6.8 kB) Collecting iniconfig (from pytest) Using cached iniconfig-2.0.0-py3-none-any.whl.metadata (2.6 kB) Collecting packaging (from pytest) Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB) Collecting pluggy<2.0,>=1.3.0 (from pytest) Using cached pluggy-1.4.0-py3-none-any.whl.metadata (4.3 kB) Collecting coverage>=5.2.1 (from coverage[toml]>=5.2.1->pytest-cov) Using cached coverage-7.4.3-cp311-cp311-macosx_11_0_arm64.whl.metadata (8.2 kB) Collecting MarkupSafe>=2.0 (from jinja2->torch==2.2.*) Using cached MarkupSafe-2.1.5-cp311-cp311-macosx_10_9_universal2.whl.metadata (3.0 kB) Collecting mpmath>=0.19 
(from sympy->torch==2.2.*) Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB) Using cached torch-2.2.1-cp311-none-macosx_11_0_arm64.whl (59.7 MB) Using cached numpy-1.26.4-cp311-cp311-macosx_11_0_arm64.whl (14.0 MB) Using cached pytest-8.0.2-py3-none-any.whl (333 kB) Using cached pytest_cov-4.1.0-py3-none-any.whl (21 kB) Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB) Using cached coverage-7.4.3-cp311-cp311-macosx_11_0_arm64.whl (207 kB) Using cached pluggy-1.4.0-py3-none-any.whl (20 kB) Using cached typing_extensions-4.10.0-py3-none-any.whl (33 kB) Using cached filelock-3.13.1-py3-none-any.whl (11 kB) Using cached fsspec-2024.2.0-py3-none-any.whl (170 kB) Using cached iniconfig-2.0.0-py3-none-any.whl (5.9 kB) Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB) Using cached networkx-3.2.1-py3-none-any.whl (1.6 MB) Using cached packaging-23.2-py3-none-any.whl (53 kB) Using cached sympy-1.12-py3-none-any.whl (5.7 MB) Using cached MarkupSafe-2.1.5-cp311-cp311-macosx_10_9_universal2.whl (18 kB) Using cached mpmath-1.3.0-py3-none-any.whl (536 kB) Installing collected packages: mpmath, typing-extensions, toml, sympy, pluggy, packaging, numpy, networkx, MarkupSafe, iniconfig, fsspec, filelock, coverage, pytest, jinja2, torch, pytest-cov Successfully installed MarkupSafe-2.1.5 coverage-7.4.3 filelock-3.13.1 fsspec-2024.2.0 iniconfig-2.0.0 jinja2-3.1.3 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.4 packaging-23.2 pluggy-1.4.0 pytest-8.0.2 pytest-cov-4.1.0 sympy-1.12 toml-0.10.2 torch-2.2.1 typing-extensions-4.10.0 core-tests: 11493 I exit 0 (11.25 seconds) /Users/guillaume/code/metatensor> python -I -m pip install numpy pytest pytest-cov toml 'torch==2.2.*' pid=44644 [tox/execute/api.py:280] build-metatensor-core: 11495 I find interpreter for spec PythonSpec(path=/opt/miniforge3/bin/python3.11) [virtualenv/discovery/builtin.py:58] build-metatensor-core: 11495 I proposed PythonInfo(spec=CPython3.11.7.final.0-64, exe=/opt/miniforge3/bin/python3.11, platform=darwin, version='3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65] build-metatensor-core: 11495 D accepted PythonInfo(spec=CPython3.11.7.final.0-64, exe=/opt/miniforge3/bin/python3.11, platform=darwin, version='3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:67] build-metatensor-core: 11496 I create virtual environment via CPython3Posix(dest=/Users/guillaume/code/metatensor/.tox/build-metatensor-core, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:50] build-metatensor-core: 11496 D create folder /Users/guillaume/code/metatensor/.tox/build-metatensor-core/bin [virtualenv/util/path/_sync.py:12] build-metatensor-core: 11496 D create folder /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages [virtualenv/util/path/_sync.py:12] build-metatensor-core: 11496 D write /Users/guillaume/code/metatensor/.tox/build-metatensor-core/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:33] build-metatensor-core: 11496 D home = /opt/miniforge3/bin [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D implementation = CPython [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D version_info = 3.11.7.final.0 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D virtualenv = 20.25.1 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D 
include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D base-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D base-exec-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D base-executable = /opt/miniforge3/bin/python3.11 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11497 D symlink /opt/miniforge3/bin/python3.11 to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/bin/python [virtualenv/util/path/_sync.py:32] build-metatensor-core: 11497 D create virtualenv import hook file /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:91] build-metatensor-core: 11497 D create /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:94] build-metatensor-core: 11497 D ============================== target debug ============================== [virtualenv/run/session.py:52] build-metatensor-core: 11497 D debug via /Users/guillaume/code/metatensor/.tox/build-metatensor-core/bin/python /opt/miniforge3/lib/python3.11/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:200] build-metatensor-core: 11497 D { "sys": { "executable": "/Users/guillaume/code/metatensor/.tox/build-metatensor-core/bin/python", "_base_executable": "/opt/miniforge3/bin/python3.11", "prefix": "/Users/guillaume/code/metatensor/.tox/build-metatensor-core", "base_prefix": "/opt/miniforge3", "real_prefix": null, "exec_prefix": "/Users/guillaume/code/metatensor/.tox/build-metatensor-core", "base_exec_prefix": "/opt/miniforge3", "path": [ "/opt/miniforge3/lib/python311.zip", "/opt/miniforge3/lib/python3.11", "/opt/miniforge3/lib/python3.11/lib-dynload", "/Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages" ], "meta_path": [ "<class '_virtualenv._Finder'>", "<class '_frozen_importlib.BuiltinImporter'>", "<class '_frozen_importlib.FrozenImporter'>", "<class '_frozen_importlib_external.PathFinder'>" ], "fs_encoding": "utf-8", "io_encoding": "utf-8" }, "version": "3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]", "makefile_filename": "/opt/miniforge3/lib/python3.11/config-3.11-darwin/Makefile", "os": "<module 'os' (frozen)>", "site": "<module 'site' (frozen)>", "datetime": "<module 'datetime' from '/opt/miniforge3/lib/python3.11/datetime.py'>", "math": "<module 'math' from '/opt/miniforge3/lib/python3.11/lib-dynload/math.cpython-311-darwin.so'>", "json": "<module 'json' from '/opt/miniforge3/lib/python3.11/json/__init__.py'>" } [virtualenv/run/session.py:53] build-metatensor-core: 11517 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/guillaume/Library/Application Support/virtualenv) [virtualenv/run/session.py:57] build-metatensor-core: 11518 D got embed update of distribution %s from ('pip', PosixPath('/Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json')) [virtualenv/app_data/via_disk_folder.py:131] build-metatensor-core: 11518 D install setuptools from wheel /opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/setuptools-69.1.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] build-metatensor-core: 11518 D install wheel from wheel 
/opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.42.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] build-metatensor-core: 11518 D install pip from wheel /opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-24.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] build-metatensor-core: 11519 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip-24.0.dist-info to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/pip-24.0.dist-info [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11519 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools-69.1.0.dist-info to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/setuptools-69.1.0.dist-info [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11519 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel-0.42.0.dist-info to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/wheel-0.42.0.dist-info [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11521 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11521 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/distutils-precedence.pth to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11522 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11522 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11528 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel-0.42.0.virtualenv to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/wheel-0.42.0.virtualenv [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11529 D generated console scripts wheel wheel3 wheel-3.11 wheel3.11 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] build-metatensor-core: 11558 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/pkg_resources to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11567 D copy directory /Users/guillaume/Library/Application 
Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/_distutils_hack to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11568 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools-69.1.0.virtualenv to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/setuptools-69.1.0.virtualenv [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11568 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] build-metatensor-core: 11604 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip-24.0.virtualenv to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/pip-24.0.virtualenv [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11604 D generated console scripts pip3 pip3.11 pip-3.11 pip [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] build-metatensor-core: 11604 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:63] build-metatensor-core: 11605 D write /Users/guillaume/code/metatensor/.tox/build-metatensor-core/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:33] build-metatensor-core: 11605 D home = /opt/miniforge3/bin [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D implementation = CPython [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D version_info = 3.11.7.final.0 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D virtualenv = 20.25.1 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D base-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D base-exec-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11606 D base-executable = /opt/miniforge3/bin/python3.11 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11607 W install_requires> python -I -m pip install cmake packaging setuptools wheel [tox/tox_env/api.py:425] Collecting cmake Using cached cmake-3.28.3-py2.py3-none-macosx_10_10_universal2.macosx_10_10_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl.metadata (6.3 kB) Collecting packaging Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB) Requirement already satisfied: setuptools in ./.tox/build-metatensor-core/lib/python3.11/site-packages (69.1.0) Requirement already satisfied: wheel in ./.tox/build-metatensor-core/lib/python3.11/site-packages (0.42.0) Using cached cmake-3.28.3-py2.py3-none-macosx_10_10_universal2.macosx_10_10_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl (48.5 MB) Using cached packaging-23.2-py3-none-any.whl (53 kB) Installing collected packages: cmake, packaging Successfully installed cmake-3.28.3 packaging-23.2 build-metatensor-core: 13373 I exit 0 (1.77 seconds) /Users/guillaume/code/metatensor> python -I -m pip install cmake packaging setuptools wheel pid=44648 [tox/execute/api.py:280] build-metatensor-core: 13374 W install_deps> python -I -m pip install cmake packaging setuptools wheel [tox/tox_env/api.py:425] Requirement already satisfied: cmake in ./.tox/build-metatensor-core/lib/python3.11/site-packages (3.28.3) Requirement already 
satisfied: packaging in ./.tox/build-metatensor-core/lib/python3.11/site-packages (23.2) Requirement already satisfied: setuptools in ./.tox/build-metatensor-core/lib/python3.11/site-packages (69.1.0) Requirement already satisfied: wheel in ./.tox/build-metatensor-core/lib/python3.11/site-packages (0.42.0) build-metatensor-core: 13647 I exit 0 (0.27 seconds) /Users/guillaume/code/metatensor> python -I -m pip install cmake packaging setuptools wheel pid=44650 [tox/execute/api.py:280] build-metatensor-core: 13648 W commands[0]> pip wheel python/metatensor-core --no-deps --no-build-isolation --check-build-dependencies --wheel-dir '{env_tmp_dir}/dist' [tox/tox_env/api.py:425] Processing ./python/metatensor-core Preparing metadata (pyproject.toml) ... done Building wheels for collected packages: metatensor-core Building wheel for metatensor-core (pyproject.toml) ... done Created wheel for metatensor-core: filename=metatensor_core-0.2.0.dev7-py3-none-macosx_14_0_arm64.whl size=393337 sha256=3ef52bf49aeeab3cb26abb2f37e70fa66a4e087d0bea399fe5138888d440b34f Stored in directory: /Users/guillaume/Library/Caches/pip/wheels/51/2c/1e/776d763cc8f4fe85ef01b2aa554b8f88005d759914ef385ec8 Successfully built metatensor-core build-metatensor-core: 14924 I exit 0 (1.28 seconds) /Users/guillaume/code/metatensor> pip wheel python/metatensor-core --no-deps --no-build-isolation --check-build-dependencies --wheel-dir '{env_tmp_dir}/dist' pid=44652 [tox/execute/api.py:280] core-tests: 14925 E failed with no package found in /Users/guillaume/code/metatensor/.tox/build-metatensor-core/tmp/dist/* [tox/session/cmd/run/single.py:57] core-tests: FAIL code 1 (14.82 seconds) evaluation failed :( (14.85 seconds) ``` </details> ---- Ping @gaborbernat, this seems to be a fallout of #3237
0easy
Title: Class-level decorators on Consumer classes do not apply to inherited methods Body: **Describe the bug** For consumer classes that inherit consumer methods (i.e., methods decorated with `@uplink.get`, `@uplink.post`, etc.) from one or more parent classes, uplink decorators such as `@response_handler` or `@timeout` are not applied to those inherited methods when these decorators are used as class-level decorators. In other words, these decorators are strictly applied to consumer methods that are directly defined on the decorated consumer class. **To Reproduce** Consider the following consumer class: ```python class GitHub(uplink.Consumer): @uplink.get("/users/{username}") def get_user(self, username): """Get a single user.""" ``` Create a subclass of `GitHub` and decorate it with any uplink decorator that should propagate to consumer methods when used as a class decorator. For this example, I apply a `@response_handler` that should make any consumer method return the integer `1`, regardless of the actual response returned by the server: ```python @response_handler(lambda resp: 1) class GitHubSubclass(GitHub): pass ``` Here's a quick test that shows that the response handler is not applied to the inherited method (i.e., the assertion fails): ```python client = GitHubSubclass(...) assert client.get_user("prkumar") == 1 ``` **Expected behavior** Applying a decorator to a Consumer class should propagate to ALL consumer methods available to that class, including inherited consumer methods. **Additional context** Prior to v0.3.0, the actual behavior reflected the expected behavior detailed above. However, as part of #27, we unnecessarily began restricting the application of class-level decorators to only those consumer methods defined directly on the decorated consumer class. Hence, a fix for this bug should effectively revert the changes made in #27. Notably, this means that the fix should make changes to the function `uplink.helpers.get_api_definitions`.
0easy
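A hypothetical sketch of the direction the fix could take: have `get_api_definitions` walk the class's MRO instead of only the class's own namespace; `is_consumer_method` below is a stand-in predicate, not uplink's real helper:

```python
def is_consumer_method(attr):
    # stand-in check; uplink's real code inspects its own method metadata
    return callable(attr) and getattr(attr, "_is_consumer_method", False)


def get_api_definitions(service_cls):
    definitions = {}
    # walk base classes first so subclass definitions override their parents'
    for cls in reversed(service_cls.__mro__):
        for name, attr in vars(cls).items():
            if is_consumer_method(attr):
                definitions[name] = attr
    return definitions
```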
Title: Update SQLAlchemy-Utils Body: The current version of SQLAlchemy-Utils that we are using was yanked for some reason. We should update this soon.
0easy
Title: set_env substitution doesn't pass PYTHONHASHSEED in tox 4.x Body: ## Issue Hi, I have encountered what I believe is tox bug. With 2 test environments: base (`[testenv]`) and non-base (`[testenv:hs]`), _some_ environment variables from the base one aren't set in the non-base when they are passed as a substitution, like: ```ini [testenv] setenv = PYTHONHASHSEED=0 OTHER=foo [testenv:hs] setenv = {[testenv]setenv} ``` In above example `PYTHONHASHSEED` will be random for every run of `tox -e has` and `OTHER` will be correctly set to foo. I found out that tox displays the same behaviour for some other variables, like `PATH`, but I didn't dig any further. Funny thing is that if we mix `setenv` and `set_env` a little bit, tox suddenly starts passing `PYTHONHASHSEED`, for example below tox.ini sets `PYTHONHASHSEED` to 0. ```ini [testenv] set_env = PYTHONHASHSEED=0 OTHER=foo [testenv:hs] setenv = {[testenv]set_env} ``` I checked this on tox 4.3.1, 4.3, 4.2, 4.1 and 4.0. Tox 3.x doesn't have this behaviour. Full tox.ini files attached below. ## Environment ## Output of running tox Provide the output of `tox -rvv`: ```console using tox.ini: /home/mgoral/test/tox.ini (pid 279807) removing /home/mgoral/test/.tox/log could not satisfy requires MissingDependency(<Requirement('tox>=4.0')>) using tox-3.21.4 from /usr/lib/python3/dist-packages/tox/__init__.py (pid 279807) /usr/bin/python3 (/usr/bin/python3) is {'executable': '/usr/bin/python3', 'implementation': 'CPython', 'version_info': [3, 9, 2, 'final', 0], 'version': '3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]', 'is_64': True, 'sysplatform': 'linux', 'extra_version_info': None} .tox uses /usr/bin/python3 .tox start: getenv /home/mgoral/test/.tox/.tox .tox cannot reuse: -r flag .tox recreate: /home/mgoral/test/.tox/.tox removing /home/mgoral/test/.tox/.tox setting PATH=/home/mgoral/test/.tox/.tox/bin:/home/mgoral/.cargo/bin:/home/mgoral/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games [279827] /home/mgoral/test/.tox$ /usr/bin/python3 -m virtualenv --no-download --python /usr/bin/python3 .tox created virtual environment CPython3.9.2.final.0-64 in 81ms creator CPython3Posix(dest=/home/mgoral/test/.tox/.tox, clear=False, no_vcs_ignore=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/mgoral/.local/share/virtualenv) added seed packages: pip==20.3.4, pkg_resources==0.0.0, setuptools==44.1.1, wheel==0.34.2 activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator .tox installdeps: tox>=4.0 setting PATH=/home/mgoral/test/.tox/.tox/bin:/home/mgoral/.cargo/bin:/home/mgoral/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games [279835] /home/mgoral/test$ /home/mgoral/test/.tox/.tox/bin/python -m pip install 'tox>=4.0' Collecting tox>=4.0 Using cached tox-4.3.1-py3-none-any.whl (147 kB) Collecting platformdirs>=2.6.2 Using cached platformdirs-2.6.2-py3-none-any.whl (14 kB) Collecting virtualenv>=20.17.1 Using cached virtualenv-20.17.1-py3-none-any.whl (8.8 MB) Collecting pyproject-api>=1.4 Using cached pyproject_api-1.4.0-py3-none-any.whl (12 kB) Collecting pluggy>=1 Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB) Collecting chardet>=5.1 Using cached chardet-5.1.0-py3-none-any.whl (199 kB) Collecting tomli>=2.0.1 Using cached tomli-2.0.1-py3-none-any.whl (12 kB) Collecting filelock>=3.9 Using cached filelock-3.9.0-py3-none-any.whl (9.7 kB) Collecting colorama>=0.4.6 Using 
cached colorama-0.4.6-py2.py3-none-any.whl (25 kB) Collecting packaging>=23 Using cached packaging-23.0-py3-none-any.whl (42 kB) Collecting cachetools>=5.2.1 Using cached cachetools-5.2.1-py3-none-any.whl (9.3 kB) Collecting distlib<1,>=0.3.6 Using cached distlib-0.3.6-py2.py3-none-any.whl (468 kB) Installing collected packages: tomli, platformdirs, packaging, filelock, distlib, virtualenv, pyproject-api, pluggy, colorama, chardet, cachetools, tox Successfully installed cachetools-5.2.1 chardet-5.1.0 colorama-0.4.6 distlib-0.3.6 filelock-3.9.0 packaging-23.0 platformdirs-2.6.2 pluggy-1.0.0 pyproject-api-1.4.0 tomli-2.0.1 tox-4.3.1 virtualenv-20.17.1 .tox finish: getenv /home/mgoral/test/.tox/.tox after 3.43 seconds .tox start: finishvenv write config to /home/mgoral/test/.tox/.tox/.tox-config1 as '409276cb52787e20907912730020ca0f84204a375c30a79fcb494ffde0e0f116 /usr/bin/python3\n3.21.4 0 0 0\n00000000000000000000000000000000 tox>=4.0' .tox finish: finishvenv after 0.02 seconds .tox start: provision [279851] /home/mgoral/test$ /home/mgoral/test/.tox/.tox/bin/python -m tox -rvv -e hs hs: 135 W remove tox env folder /home/mgoral/test/.tox/hs [tox/tox_env/api.py:321] hs: 159 I find interpreter for spec PythonSpec(major=3) [virtualenv/discovery/builtin.py:56] hs: 159 D discover exe for PythonInfo(spec=CPython3.9.2.final.0-64, exe=/home/mgoral/test/.tox/.tox/bin/python, platform=linux, version='3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]', encoding_fs_io=utf-8-utf-8) in /usr [virtualenv/discovery/py_info.py:437] hs: 159 D filesystem is case-sensitive [virtualenv/info.py:24] hs: 160 D got python info of /usr/bin/python3.9 from /home/mgoral/.local/share/virtualenv/py_info/1/36cf16204b8548560b1c020c4e8fb5b57f0e4c58016f52f2d4be01e192833930.json [virtualenv/app_data/via_disk_folder.py:129] hs: 161 I proposed PythonInfo(spec=CPython3.9.2.final.0-64, system=/usr/bin/python3.9, exe=/home/mgoral/test/.tox/.tox/bin/python, platform=linux, version='3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63] hs: 161 D accepted PythonInfo(spec=CPython3.9.2.final.0-64, system=/usr/bin/python3.9, exe=/home/mgoral/test/.tox/.tox/bin/python, platform=linux, version='3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65] hs: 186 I create virtual environment via CPython3Posix(dest=/home/mgoral/test/.tox/hs, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:48] hs: 186 D create folder /home/mgoral/test/.tox/hs/bin [virtualenv/util/path/_sync.py:9] hs: 186 D create folder /home/mgoral/test/.tox/hs/lib/python3.9/site-packages [virtualenv/util/path/_sync.py:9] hs: 187 D write /home/mgoral/test/.tox/hs/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30] hs: 187 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34] hs: 187 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34] hs: 187 D version_info = 3.9.2.final.0 [virtualenv/create/pyenv_cfg.py:34] hs: 187 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34] hs: 187 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34] hs: 187 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34] hs: 187 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34] hs: 187 D base-executable = /usr/bin/python3.9 [virtualenv/create/pyenv_cfg.py:34] hs: 187 D symlink /usr/bin/python3.9 to /home/mgoral/test/.tox/hs/bin/python [virtualenv/util/path/_sync.py:28] hs: 187 D 
create virtualenv import hook file /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:89] hs: 187 D create /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:92] hs: 188 D ============================== target debug ============================== [virtualenv/run/session.py:50] hs: 188 D debug via /home/mgoral/test/.tox/hs/bin/python /home/mgoral/test/.tox/.tox/lib/python3.9/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:197] hs: 188 D { "sys": { "executable": "/home/mgoral/test/.tox/hs/bin/python", "_base_executable": "/home/mgoral/test/.tox/hs/bin/python", "prefix": "/home/mgoral/test/.tox/hs", "base_prefix": "/usr", "real_prefix": null, "exec_prefix": "/home/mgoral/test/.tox/hs", "base_exec_prefix": "/usr", "path": [ "/usr/lib/python39.zip", "/usr/lib/python3.9", "/usr/lib/python3.9/lib-dynload", "/home/mgoral/test/.tox/hs/lib/python3.9/site-packages" ], "meta_path": [ "<class '_virtualenv._Finder'>", "<class '_frozen_importlib.BuiltinImporter'>", "<class '_frozen_importlib.FrozenImporter'>", "<class '_frozen_importlib_external.PathFinder'>" ], "fs_encoding": "utf-8", "io_encoding": "utf-8" }, "version": "3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]", "makefile_filename": "/usr/lib/python3.9/config-3.9-x86_64-linux-gnu/Makefile", "os": "<module 'os' from '/usr/lib/python3.9/os.py'>", "site": "<module 'site' from '/usr/lib/python3.9/site.py'>", "datetime": "<module 'datetime' from '/usr/lib/python3.9/datetime.py'>", "math": "<module 'math' (built-in)>", "json": "<module 'json' from '/usr/lib/python3.9/json/__init__.py'>" } [virtualenv/run/session.py:51] hs: 213 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/mgoral/.local/share/virtualenv) [virtualenv/run/session.py:55] hs: 215 D got embed update of distribution wheel from /home/mgoral/.local/share/virtualenv/wheel/3.9/embed/3/wheel.json [virtualenv/app_data/via_disk_folder.py:129] hs: 216 D got embed update of distribution setuptools from /home/mgoral/.local/share/virtualenv/wheel/3.9/embed/3/setuptools.json [virtualenv/app_data/via_disk_folder.py:129] hs: 216 D got embed update of distribution pip from /home/mgoral/.local/share/virtualenv/wheel/3.9/embed/3/pip.json [virtualenv/app_data/via_disk_folder.py:129] hs: 219 D install wheel from wheel /home/mgoral/test/.tox/.tox/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/wheel-0.38.4-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47] hs: 219 D install setuptools from wheel /home/mgoral/test/.tox/.tox/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/setuptools-65.6.3-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47] hs: 219 D install pip from wheel /home/mgoral/test/.tox/.tox/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/pip-22.3.1-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47] hs: 220 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.dist-info to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/pip-22.3.1.dist-info [virtualenv/util/path/_sync.py:36] hs: 220 D copy /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.virtualenv to 
/home/mgoral/test/.tox/hs/lib/python3.9/site-packages/setuptools-65.6.3.virtualenv [virtualenv/util/path/_sync.py:36] hs: 220 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/wheel [virtualenv/util/path/_sync.py:36] hs: 221 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/setuptools [virtualenv/util/path/_sync.py:36] hs: 224 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/pip [virtualenv/util/path/_sync.py:36] hs: 227 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.dist-info to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/wheel-0.38.4.dist-info [virtualenv/util/path/_sync.py:36] hs: 229 D copy /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.virtualenv to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/wheel-0.38.4.virtualenv [virtualenv/util/path/_sync.py:36] hs: 231 D generated console scripts wheel-3.9 wheel3 wheel3.9 wheel [virtualenv/seed/embed/via_app_data/pip_install/base.py:41] hs: 256 D copy /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/distutils-precedence.pth to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:36] hs: 256 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/pkg_resources to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/pkg_resources [virtualenv/util/path/_sync.py:36] hs: 264 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/_distutils_hack to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:36] hs: 264 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.dist-info to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/setuptools-65.6.3.dist-info [virtualenv/util/path/_sync.py:36] hs: 265 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:41] hs: 286 D copy /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.virtualenv to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/pip-22.3.1.virtualenv [virtualenv/util/path/_sync.py:36] hs: 287 D generated console scripts pip3.9 pip3 pip-3.9 pip [virtualenv/seed/embed/via_app_data/pip_install/base.py:41] hs: 287 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:61] hs: 288 D write /home/mgoral/test/.tox/hs/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30] hs: 288 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34] hs: 288 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34] hs: 288 D version_info = 3.9.2.final.0 [virtualenv/create/pyenv_cfg.py:34] hs: 288 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34] hs: 288 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34] hs: 288 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34] hs: 
288 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34] hs: 288 D base-executable = /usr/bin/python3.9 [virtualenv/create/pyenv_cfg.py:34] hs: 291 W commands[0]> python -c 'import os; print(os.environ["PYTHONHASHSEED"])' [tox/tox_env/api.py:427] 288846098 hs: 311 I exit 0 (0.02 seconds) /home/mgoral/test> python -c 'import os; print(os.environ["PYTHONHASHSEED"])' pid=279864 [tox/execute/api.py:275] hs: 312 W commands[1]> python -c 'import os; print(os.environ["OTHER"])' [tox/tox_env/api.py:427] foo hs: 327 I exit 0 (0.01 seconds) /home/mgoral/test> python -c 'import os; print(os.environ["OTHER"])' pid=279870 [tox/execute/api.py:275] hs: OK (0.19=setup[0.16]+cmd[0.02,0.01] seconds) congratulations :) (0.24 seconds) .tox finish: provision after 0.39 seconds ``` ## Minimal example tox.ini which doesn't pass `PYTHONHASHSEED` to `hs` environment, but passes `OTHER`: ```ini [tox] requires = tox>=4.0 skipsdist = True [testenv] basepython = python3 setenv = PYTHONHASHSEED=0 OTHER=foo [testenv:hs] commands = python -c 'import os; print(os.environ["PYTHONHASHSEED"])' python -c 'import os; print(os.environ["OTHER"])' setenv = {[testenv]setenv} ``` tox.ini which passes `PYTHONHASHSEED` to `hs` environment (watch the underscores in set_env and setenv) ```ini [tox] requires = tox>=4.0 skipsdist = True [testenv] basepython = python3 set_env = PYTHONHASHSEED=0 OTHER=foo [testenv:hs] commands = python -c 'import os; print(os.environ["PYTHONHASHSEED"])' python -c 'import os; print(os.environ["OTHER"])' setenv = {[testenv]set_env} ```
0easy
Title: Feature: add K8S probes checker API Body: We should provide users with an ability to check application health in some way. This can be achieved through multiple mechanisms, so we can create a base `Checker` Protocol (see the sketch below) and use it the following way ```python app = FastStream(broker) app.add_checker( SocketChecker("/sock"), # HTTPChecker(port=8000), # ... ) ``` And check all of them with the same command ```shell faststream probe main:app ``` So, we can create multiple checkers for various cases, and users can implement their own checkers at the same time * [x] add `broker.ping()` unified method * [ ] add `BaseChecker` using `broker.ping()` * [ ] implement `HTTPChecker` (inheritor of `BaseChecker`) * [ ] add CLI command to use any checker
0easy
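A minimal sketch of what the proposed `Checker` protocol and one inheritor could look like; all names follow the proposal above and the probing logic is a placeholder:

```python
from typing import Protocol


class Checker(Protocol):
    async def check(self) -> bool: ...


class HTTPChecker:
    """Placeholder HTTP health checker from the proposal above."""

    def __init__(self, port: int = 8000, path: str = "/health") -> None:
        self.port = port
        self.path = path

    async def check(self) -> bool:
        # a real implementation would call e.g. broker.ping() or probe the port
        return True
```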
Title: Make the distinction between user and team mode clearer during setup Body: We should probably have some kind of big checkbox div that explains what it means to be in user/teams mode. It would reduce the likelihood of needing to switch.
0easy
Title: Write a test Body: Any kind of test. Just to get testing rolling.
0easy
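A minimal starting point, assuming pytest as the test runner:

```python
# test_smoke.py -- the simplest possible test, just to get CI running tests
def test_smoke():
    assert True
```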
Title: Documentation wrong on page for "badges" Body: ## Description of the problem, including code/CLI snippet Documentation seems to be wrong here: https://python-gitlab.readthedocs.io/en/stable/gl_objects/badges.html#examples -> "Update a badge" ## Expected Behavior Update a badge: ``` badge.image_url = new_image_url badge.link_url = new_link_url badge.save() ``` ## Actual Behavior Update a badge: ``` badge.image_link = new_link badge.save() ``` ## Specifications - python-gitlab version: documentation issue only - API version you are using: documentation issue only - Gitlab server version: documentation issue only
0easy
Title: JWTBearerTokenValidator doesn't pass the `now` and `leeway` parameters to claims.validate Body: ```python # authlib\oauth2\rfc7523\validator.py class JWTBearerTokenValidator: def authenticate_token(self, token_string): try: claims = jwt.decode( ... ) claims.validate() return claims except JoseError as error: ... ``` But: ```python # authlib\jose\rfc7519\claims.py class JWTClaims(BaseClaims): ... def validate(self, now=None, leeway=0): ... ``` I see the solution as: ```python def authenticate_token(self, token_string, now=None, leeway=0): ... claims.validate(now, leeway) ... ``` The bug shows up during testing.
0easy
Title: CleanLearning default classifier Body: Currently CleanLearning can be run like this, which is nice for users who don't know what classifier to use for their data:
```
cl = CleanLearning()
cl.find_label_issues(data, labels)
```
but it always defaults to sklearn's LogisticRegression, which may not work for many types of `data`.

Consider deferring the choice of default classifier until `data` is provided, and then selecting from a broader suite of default options beyond LogisticRegression as well, to make this work for more data types.

The challenge is how to do this without introducing additional dependencies for the cleanlab package, or making this code too complex. This challenge makes this quite complex without developing a full autoML strategy, which we don't want to do here.

A useful contribution in the meantime could be just to provide better error handling when the default LogisticRegression classifier won't work (e.g. the dataset is a pytorch or tensorflow dataset).
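A rough sketch of the error-handling idea (purely illustrative; the class and method below are stand-ins, not cleanlab's real implementation):

```python
from sklearn.linear_model import LogisticRegression


class CleanLearningSketch:
    """Illustrative only: defer/validate the default classifier at fit time."""

    def __init__(self, clf=None):
        self._clf = clf  # None means "pick a default once we see the data"

    def find_label_issues(self, data, labels):
        if self._clf is None:
            # real logic could inspect `data` here and pick a better default
            self._clf = LogisticRegression()
        try:
            self._clf.fit(data, labels)
        except Exception as err:
            raise TypeError(
                "The default LogisticRegression classifier could not handle `data` "
                "(e.g. a pytorch or tensorflow dataset); pass a suitable classifier "
                "via CleanLearning(clf=...)"
            ) from err
```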
0easy
Title: Add support for `UUID` Type Body: ## 🚀 Feature Request Add support for `UUID` in `sgqlc.types` ## Description <!-- Add a clear and concise description of what this new feature aims to do and achieve --> <!-- Is this new feature related to a specific problem? If so, please describe it --> There are the following types supported by `sgqlc`: `int, str, boolean, float, id, enum, union, datetime , non_null , list_of` (Maybe more) But there is no support for `UUID`. Most of the people who work with `databases` in the backend, need a unique identification for entities that they can use as a `primary key`. This can be of type `str` but not always. `UUID` is used by many(including my organization). So, it would be great to have this ` UUID Type` supported. ## Implementation details <!-- How will this feature work? Do you already have something in mind? A potential implementation may be (concisely) explained here --> For example: I want to create a `user` and update that with its `uuid`. The `class` for this `User` will look like this: ``` class User(Type): uuid=UUID code=str name=str createdAt=datetime ``` 1. Create with `uuid` type in response: ``` mutation MutationCreateUser($code: String!, $name: String!) { createUser(code: $code, name: $name) { uuid code name createdAt } } ``` Variables: ` {"code":"user_code", "name":"user_name"}` 2. Update with created `uuid` in request: ``` mutation MutationUpdateUser($uuid: UUID! $code: String!, $name: String!) { updateUser(uuid: $uuid, code: $code, name: $name) { uuid code name createdAt } } ``` Variables: `{"uuid": "94fda4fb-d574-470b-82e2-0f4ec2a2db90", "code":"user_code_updated", "name":"user_name_updated"}` *Note*: There is a library for the `uuid` type(https://docs.python.org/3/library/uuid.html) which can be referred to. ## Acceptance criteria - Support for `UUID` in `sgqlc.types` - The query mentioned in the example above should be possible to execute.
0easy
Title: [Track] VLM accuracy in MMMU benchmark Body: This issue keeps track of all vlm models accuracy in MMMU benchmark ``` python python benchmark/mmmu/bench_sglang.py python benchmark/mmmu/bench_hf.py --model-path model ``` | | sglang | hf | |--|--|--| | Qwen2-VL-7B-Instruct | 0.485 | 0.255 | | Qwen2.5-VL-7B-Instruct | 0.477 | 0.242 | | MiniCPM-V-2_6 | 0.426 | | | DeepseekVL2| 0.447 | | | Deepseek-Janus-Pro-7B| | | | Llava + Llama| | | | Llava + qwen| | | | Llava + Mistral| | | | Mlama | | | | Gemma-3-it-4B| 0.409 | 0.403 | | InternVL2.5-38B | 0.61 | |
0easy
Title: ping function call returns PONG when Redis server is not running (sock connection). Body: ```
import aioredis

conn = await aioredis.create_redis_pool('unix:///var/run/redis/redis-server.sock?db=0')
await conn.ping()
```
returns **b'PONG'** even though the server is not running.

```
import redis

r = redis.Redis('unix:///var/run/redis/redis-server.sock?db=0')
r.ping()
```
raises the **ConnectionError** exception.

aioredis should not return **b'PONG'** on its own; it should only come from a Redis server. Maybe `self._process_data(data or b'PONG')` (aioredis/connection.py) is the culprit?
0easy
Title: feature request: add describe support for new reduction models Body: [This](https://github.com/ContextLab/hypertools/pull/136) pull request adds support for a wide range of new data reduction models. However, there aren't equivalent `describe` methods like [this one](http://hypertools.readthedocs.io/en/latest/hypertools.tools.describe_pca.html#hypertools.tools.describe_pca) for those new models. I propose changing the name of `hyp.tools.describe_pca` to `hyp.tools.describe`, and then adding a `model` flag to specify the reduction model (default: either PCA or IncrementalPCA depending on how we resolve [this issue](https://github.com/ContextLab/hypertools/issues/134)).
0easy
Title: Make improve flag less intrusive by moving over files like "all_output.txt" and "file_list" to the .gpteng folder Body: This is done by simply using the new DB in #665 and writing to it
0easy
Title: Add docstrings for public packages etc Body: ## Public Packages ``` hexapod/ widgets/ pages/ tests/ ``` ## Some public methods ``` hexapod/ik_solver/ik_solver2.py - init hexapod/linkage.py - init - str - repr hexapod/points.py - init - repr - str - eq hexapod/models.py - for VirtualHexapod - for Hexagon ``` See also: https://www.python.org/dev/peps/pep-0257/
0easy
Title: [BUG] When setting a custom User Agent via headers, it is automatically converted to lowercase Body: **Describe the bug**
Hi, I'm not sure whether this behavior is by design: when I set the User Agent through headers, curl_cffi automatically converts my User Agent to lowercase, which causes me to be blocked when accessing websites. I hope to get a solution, thanks.

curl_cffi/requests/headers.py, line 81:
```py
self._list = [
    (
        normalize_header_key(k, lower=False, encoding=encoding),
        normalize_header_key(k, lower=True, encoding=encoding),
        normalize_header_value(v, encoding),
    )
    for k, v in headers.items()
]
```
This generates both the original-case and lowercased header keys.

curl_cffi/requests/headers.py, line 147:
```py
{key.decode(self.encoding): None for _, key, _ in self._list}.keys()
```
This code automatically selects the lowercase key as the value that gets passed on.

**Versions**
- OS: [e.g. windows 10]
- curl_cffi version [0.5.9]
0easy
Title: Web accessibility problems on document editing page Body: ## Bug Report

**Problematic behavior**

Here is the detailed analysis by Antoine: https://www.loom.com/share/3c9642546c2c4e5391b2ce04a5c3df93
0easy
Title: Web config tool includes \r in prompt Body: When using the web config (xonfig web), newlines are interpreted as "\r\n". On macOS (and Linux, I assume) this adds a "^M" to the prompt, before the newline. ## xonfig ``` +------------------+--------------------------------+ | xonsh | 0.13.3 | | Python | 3.10.7 | | PLY | 3.11 | | have readline | True | | prompt toolkit | 3.0.31 | | shell type | prompt_toolkit | | history backend | json | | pygments | 2.13.0 | | on posix | True | | on linux | False | | on darwin | True | | on windows | False | | on cygwin | False | | on msys2 | False | | is superuser | False | | default encoding | utf-8 | | xonsh encoding | utf-8 | | encoding errors | surrogateescape | | xontrib | [] | +------------------+--------------------------------+ ``` ## Expected Behavior I would expect the "\r" to be stripped from the text field before the prompt is set. ## Current Behavior I was able to confirm that the "\r" is being added to the prompt variable by checking my .xonshrc after using the web config. Removing this value removes the "^M" from my prompt. ## Steps to Reproduce * Run xonfig web from your shell. * Select a multi-line prompt, and set your .xonshrc * Exit and restart xonsh. You'll see a "^M" before the new line. ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
0easy
Title: Filter problem at the post list admin page Body: These filter options: Title, Summary, Created At, and Available At aren't working well. The first time the page is loaded and a filter is added, the filter doesn't work.
0easy
Title: Unusable password generator for Django Body: #### The problem The recently added `Password` generator for Django is helpful, but it's not clear how to use it to create an unusable password (similar to calling `set_unusable_password` on the generated user). #### Proposed solution Django's `set_unusable_password` is a call to `make_password` with `None` as the password argument: https://github.com/django/django/blob/0b506bfe1ab9f1c38e439c77b3c3f81c8ac663ea/django/contrib/auth/base_user.py#L118-L120 Using `password = factory.django.Password(None)` will actually work (and will allow factory users to override the password if desired). However, currently the password argument to this factory is documented as a string and this option is not mentioned. #### Extra notes The default value of the `password` argument to `factory.django.Password` could also be set to `None`. This would make that factory generate unusable passwords by default, which may or may not be desired.
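For illustration, a factory using this option might look like the following (the model reference and fields are just an example, not part of factory_boy itself):

```python
import factory


class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = "auth.User"

    username = factory.Sequence(lambda n: f"user{n}")
    # Passing None mirrors calling set_unusable_password() on the generated user
    password = factory.django.Password(None)
```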
0easy
Title: Switch from setup.py to a build library Body: Current solution is deprecated and we need something more future proof
0easy
Title: [BUG] The README code does not immediately work Body: **Minimal Code To Reproduce** ```python from typing import Iterable, Dict, Any, List # Creating sample data data = [ ["A", "2020-01-01", 10], ["A", "2020-01-02", None], ["A", "2020-01-03", 30], ["B", "2020-01-01", 20], ["B", "2020-01-02", None], ["B", "2020-01-03", 40] ] schema = "id:str,date:date,value:int" # schema: *, filled:int def fillna(df:Iterable[Dict[str,Any]],value:int=0) -> Iterable[Dict[str,Any]]: for row in df: for col in cols: row["filled"] = (row["value"] or value) yield row with FugueWorkflow() as dag: df1 = dag.df(data, schema).transform(fillna) df1.show() ``` **Describe the bug** 1. FugueWorkflow needs to be imported 2. Int will have problems on Dask because it can't hold None. Change the type to double **Expected behavior** It should work out of the box. All other code in the README should work as well so just run the code and make sure everything works. **Environment (please complete the following information):** - Backend: All engines - Backend version: 0.4.9 - Python version: 3.7 - OS: linux/windows: Both
0easy
Title: Add doc strings to argument annotation classes Body: Argument Annotation classes in `uplink/types.py` are missing class doc strings. To improve code documentation, we need to add doc strings to the following classes, adhering the [Google Style Guide](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments) for consistency with the rest of the codebase: - [x] `uplink.types.Query` - [x] `uplink.types.QueryMap` - [x] `uplink.types.Header` - [x] `uplink.types.HeaderMap` - [x] `uplink.types.Field` - [x] `uplink.types.FieldMap` - [x] `uplink.types.Part` - [x] `uplink.types.PartMap` - [x] `uplink.types.Body` - [x] `uplink.types.Url`
0easy
Title: improve falcon form data read part Body: > @yedpodtrzitko It depends what you are trying to achieve. Do you want to buffer whole files in memory? Or do you want to spool them to temp files like some other frameworks do? > Falcon even puts a cap on a maximum amount of data that can be referenced this way ([`await part.get_data()`](https://falcon.readthedocs.io/en/stable/api/multipart.html#falcon.media.multipart.BodyPart.get_data)) in order to avoid surprises such as running out of memory. > Use `await part.stream.read()` to read the whole part as a bytestring, or [`await part.stream.pipe(async_file)`](https://falcon.readthedocs.io/en/stable/api/multipart.html#multipart-forms), or read by chunks, and store the result somewhere. You'll probably need to introduce some new object type to hold these attributes. _Originally posted by @vytas7 in https://github.com/0b01001001/spectree/pull/225#discussion_r936042043_
0easy
Title: Add IsLeapYear primitive Body: - This primitive determines the `is_leap_year` attribute of a datetime column
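A possible sketch, following the usual `TransformPrimitive` pattern (the exact base class and schema types are assumptions, not a final design):

```python
import pandas as pd
from featuretools.primitives import TransformPrimitive
from woodwork.column_schema import ColumnSchema
from woodwork.logical_types import Boolean, Datetime


class IsLeapYear(TransformPrimitive):
    """Determines whether the year of a datetime falls in a leap year."""

    name = "is_leap_year"
    input_types = [ColumnSchema(logical_type=Datetime)]
    return_type = ColumnSchema(logical_type=Boolean)

    def get_function(self):
        def is_leap_year(vals: pd.Series) -> pd.Series:
            # pandas already exposes this via the .dt accessor
            return vals.dt.is_leap_year

        return is_leap_year
```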
0easy
Title: ruff (D): adopt some of the ignored rules for the docstrings Body: we need an issue to discuss and check which of these rules we may want to have

_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/3082#discussion_r1868460841_

The ignored rules are (from https://docs.astral.sh/ruff/rules/#pydocstyle-d):
- 'D100' : Missing docstring in public module
- 'D101' : Missing docstring in public class
- 'D102' : Missing docstring in public method
- 'D103' : Missing docstring in public function
- 'D104' : Missing docstring in public package
- 'D105' : Missing docstring in magic method
- 'D107' : Missing docstring in __init__
- 'D203' : 1 blank line required before class docstring
- 'D204' : 1 blank line required after class docstring
- 'D205' : 1 blank line required between summary line and description
- 'D213' : Multi-line docstring summary should start at the second line
- 'D400' : First line should end with a period
- 'D401' : First line of docstring should be in imperative mood: "{first_line}"
- 'D404' : First word of the docstring should not be "This"
- 'D406' : Section name should end with a newline ("{name}")
- 'D407' : Missing dashed underline after section ("{name}")
- 'D415' : First line should end with a period, question mark, or exclamation point
- 'D417' : Missing argument description in the docstring for {definition}: {name}

maybe we should adopt some default style used by the community (numpy or google)

cc @edgarriba @shijianjian @ducha-aiki

- https://docs.astral.sh/ruff/faq/#does-ruff-support-numpy-or-google-style-docstrings
- https://docs.astral.sh/ruff/formatter/#docstring-formatting

---

Missing rules to be enabled after #3088

- [ ] 'D100'
- [ ] 'D101'
- [ ] 'D102'
- [ ] 'D103'
- [ ] 'D104'
- [ ] 'D105'
- [ ] 'D107'
- [ ] 'D417'
0easy
Title: CTFd pages route is relative when it shouldn't be Body: For some reason CTFd page routes are being generated in the navbar as relative when they shouldn't be. E.g. (`page` instead of `/page`).
0easy
Title: callbacks in autoencoder Body: How can I implement the callback parameter in the fit method of the Autoencoder model? There is no such parameter.

```python
from keras.callbacks.callbacks import EarlyStopping

cb_earlystop = EarlyStopping(monitor='val_loss',
                             min_delta=0,
                             patience=0,
                             verbose=0, mode='auto',
                             baseline=None,
                             restore_best_weights=False)

pyod_model.fit(scaler, callbacks=[cb_earlystop])
```

```
TypeError: fit() got an unexpected keyword argument 'callbacks'
```

Can you implement this parameter? It's very useful for monitoring, early stopping and other cases.
0easy
Title: Display warning when no recommendations are generated Body: When no recommendations are generated (e.g., when [dataframe is small but not preaggregated](https://github.com/lux-org/lux/blob/master/lux/core/frame.py#L153), possibly other cases), we should display a warning that explains why the Lux view is not showing up. Add an advanced ReadTheDoc page explaining default recommendation logic, including when recommendations are *not* displayed. _Originally posted by @akanz1 in https://github.com/lux-org/lux/issues/110#issuecomment-706659586_
0easy
Title: Add more information to main Readme.md Body: - add optimized metric - add total run time - add validation metric - add parameters used in AutoML
0easy
Title: [MNT] remove mutable objects from defaults Body: We should ensure that no mutable objects are argument defaults, e.g., lists, dicts. All mutable defaults should be replaced by appropriate defaults, e.g., strings if applicable, or `None`, which internally is then replaced by a newly initialized mutable default.

Care needs to be taken in cases where a `self` write happens, e.g., dataclass-like structures should not overwrite the `self` attr with a mutable default either; instead, write the replacement default to `self`.
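A minimal sketch of the usual replacement pattern (illustrative only):

```python
class Estimator:
    def __init__(self, params=None):
        # never store the shared default itself; create a fresh dict per instance
        self.params = {} if params is None else params
```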
0easy
Title: Improve `__pow__` for `SingleQubitCliffordGate` and `CliffordGate` class Body: **Is your feature request related to a use case or problem? Please describe.**
The `__pow__` operator for `CliffordGate` is implemented only for integer powers and has complexity $\mathcal{O}(n)$ where $n$ is the exponent.
https://github.com/quantumlib/Cirq/blob/ec84a057614396bf89459cd141a5f77b4d01ed48/cirq-core/cirq/ops/clifford_gate.py#L399-L411

For `SingleQubitCliffordGate` it's implemented only for integer powers, where it falls back to `CliffordGate.__pow__`, and for $\pm \sqrt{}$.
https://github.com/quantumlib/Cirq/blob/ec84a057614396bf89459cd141a5f77b4d01ed48/cirq-core/cirq/ops/clifford_gate.py#L718-L728

**Describe the solution you'd like**
For `CliffordGate.__pow__`, exponentiation should be done using [binary exponentiation](https://cp-algorithms.com/algebra/binary-exp.html) to reduce the complexity to $\mathcal{O}(\log{n})$. Support for non-integer exponents is hard in the general case.

For `SingleQubitCliffordGate.__pow__`: the single qubit clifford gates are a group of size 24, see
https://github.com/quantumlib/Cirq/blob/ec84a057614396bf89459cd141a5f77b4d01ed48/cirq-core/cirq/ops/clifford_gate.py#L149
Support for integer powers can be done in $\mathcal{O}(1)$ if we either fall back to the optimized `CliffordGate.__pow__` but with `exponent%24` instead of `exponent`, or cache the results in a table and access `group_powers[self][exponent%24]`.

For rational exponents: when the clifford operation has a sqrt, the operation becomes well defined for exponents of the form $\frac{k}{2}$ where $k \in \mathbb{Z}$. For example, $X^\frac{5}{2}$ is the same as $SqrtX^5$, and $X^\frac{-5}{2}$ is the same as $(SqrtX^\dagger)^5$.

**What is the urgency from your perspective for this issue? Is it blocking important work?**
<!-- Please choose one and remove the others -->
P3 - I'm not really blocked by it, it is an idea I'd like to discuss / suggestion based on principle
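For reference, a generic square-and-multiply sketch over an arbitrary group element; the identity element and composition function are placeholders, not Cirq API, and a negative exponent would first invert the element:

```python
def group_pow(element, n, identity, compose):
    """Raise a group element to a non-negative integer power in O(log n) multiplications."""
    result = identity
    base = element
    while n:
        if n & 1:
            result = compose(result, base)
        base = compose(base, base)
        n >>= 1
    return result


# sanity check over integers under multiplication
assert group_pow(3, 13, 1, lambda a, b: a * b) == 3**13
```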
0easy
Title: [FEA] validator Body: **Is your feature request related to a problem? Please describe.**
A recent viz had some null titles -- it would help to be able to validate somehow!

**Describe the solution you'd like**

Ex: https://gist.github.com/lmeyerov/423df6b3b5bd85d12fd74b85eca4a17a

- nodes not in edges
- edges referencing non-existent nodes
- na nodes/edges
- if colors/sizes/icons/titles, NA vals
- if no title and defaulting to guess title, NAs there
0easy
Title: [Docs] Document how to configure shared memory for multi GPU deployments Body: This is a copy of https://github.com/sgl-project/sgl-project.github.io/issues/5. I did not realize the documentation content is generated, so it seems more likely the request belongs here... (?)

The [documentation](https://docs.sglang.ai/backend/server_arguments.html#tensor-parallelism) states `python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --tp 2` is a way to enable multi-GPU tensor parallelism. However, one must consider how the processes (?) communicate with each other; usually a shared memory setup is needed. And if this is not properly set, one might run into issues like:

```
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/NCCLUtils.cpp:81, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error. 
Last error: Error while creating shared memory segment /dev/shm/nccl-vzIpS6 (size 9637888)
```

when running the sglang server. This means the size of shared memory is too low.

When running in docker containers, this could be set up with the `--shm-size` flag (see vllm's doc at https://docs.vllm.ai/en/latest/deployment/docker.html)

When running in kubernetes, it's possible that the default size for shared memory will not be enough for your containers, so one might need to set a bigger size. A common way to do it is to mount `/dev/shm` as an emptyDir and set a proper `sizeLimit`. Like this:

```
spec:
  containers:
    - command:
  ... < your usual container setup >
  ...
      volumeMounts:
        - mountPath: /dev/shm
          name: shared
  volumes:
  - emptyDir:
      medium: Memory
      sizeLimit: 1Gi
    name: shared
```

I have found that the vLLM project recommends 20Gi as a default value for the shared memory size, see https://github.com/vllm-project/production-stack/issues/44 and their helm chart value https://github.com/vllm-project/production-stack/pull/105/files#diff-7d931e53fe7db67b34609c58ca5e5e2788002e7f99657cc2879c7957112dd908R130

However, I'm not sure where this number comes from. I was testing on a node with 2 NVIDIA L40 GPUs with the DeepSeek-R1-Distill-Qwen-32B model, and having 1GiB of shared memory seemed enough.
0easy
Title: Add Support for Circular Gradients Body: Rio already ships with linear gradients, but circular gradients are conspicuously missing.
0easy
Title: [Feature]: Add Warning for Chat Template Mismatches similar to SGLang Body: I'm requesting a feature to add warnings when users supply a chat template that differs from the official template for a particular model. Currently, vLLM simply acknowledges the supplied template without alerting users to potential performance issues. **Current Behavior:** When using a wrong chat template with vLLM, it only logs: ``` Using supplied chat template: {% for message in messages %}{% if message.role == 'user' %}{{ message.content }}{% endif %}{% endfor %} ``` **Requested Behavior:** While SGLang provides this helpful warning: ``` Using a chat_template: 'None', which is different from official chat template: 'llama-3-instruct', This discrepancy may lead to performance degradation. ``` I think that would improve the user experience by making potential issues more visible before they cause problems in production. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
0easy
Title: ExportDialog Drawn Off Screen Body: Depending on the size of the scene, the export dialog box can be drawn off or partially off screen. This is due to an implementation of the `show` command that allows moving the box to negative pixel indices.

Problem Code: https://github.com/pyqtgraph/pyqtgraph/blob/a5f48ec5b58a10260195f1424309f7374a85ece7/pyqtgraph/GraphicsScene/exportDialog.py#L57-L62

To fix this, the position calculation can be clipped using `max`, and the `setGeometry` command can be changed to `move` to account for the size of the window's frame.

Potential Fix:
```python
if not self.shown:
    self.shown = True
    vcenter = self.scene.getViewWidget().geometry().center()
    x = max(0, int(vcenter.x() - self.width() / 2))
    y = max(0, int(vcenter.y() - self.height() / 2))
    self.move(x, y)
```

I can't say I understand the motivation for moving the dialog box in the first place, but at least with this modification the dialog box is always accessible with the mouse.
0easy
Title: [New feature] Add apply_to_images to Equalize Body:
0easy
Title: Documentation - Update page titles to align better to writing style guide Body: Our writing style guide advises we should align with the Google developer documentation style guide. However, some page titles are still using an inconsistent style. * https://docs.wagtail.org/en/latest/contributing/documentation_guidelines.html#writing-style-guide * https://developers.google.com/style/headings ### Pertinent section of the Wagtail docs There are a few pages that have inconsistent main titles that do not adhere to the usage of sentence case. > Use sentence case for headings and titles. These are minor changes but help us present a consistent tone in our documentation (at least in the TOC - Table of Contents). ### Details These are the ones I have found, there could be others. | URL | Change to make | |-|-| | https://docs.wagtail.org/en/latest/advanced_topics/images/feature_detection.html | `Feature detection` (lower case d) | | https://docs.wagtail.org/en/latest/advanced_topics/api/v2/configuration.html | `Wagtail API v2 configuration guide` (lower case c & g) | | https://docs.wagtail.org/en/latest/advanced_topics/api/v2/usage.html | `Wagtail API v2 usage guide` (lower case u & g) | We do have the page [Managing the Reference Index](https://docs.wagtail.org/en/latest/advanced_topics/reference_index.html), we either need to change the title to be lower case 'reference index' or update the content to consistently use the proper noun 'Reference Index'. ### Working on this Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
0easy
Title: Add 'load new apps' admin action Body: Add an action in the django admin screen to create ORM model instances for stateless apps that are not already present.
0easy
Title: add links to cookbook/file-client in relevant sections in the docs Body: We recently added an example of using File clients; we should update the docs to link to the example wherever relevant: https://github.com/ploomber/projects/tree/master/cookbook/file-client
0easy
Title: VOT Testing AssertionError: Body: I created a VOT2018 folder and put the VOT2018.json file inside. To do a quick check I ran

>> python -u ../../tools/test.py \
	--snapshot snapshot/checkpoint_e5.pth \
	--config config.yaml \
	--dataset VOT2018 2>&1 | tee logs/test_dataset.log

And got this error:

>>AssertionError: /home/thomas/pysot/tools/../testing_dataset/VOT2018/ants1/color/00000001.jpg
loading VOT2018:   0%|          | 0/60 [00:00<?, ?it/s, ants1]

What is the problem? Do I need to download the VOT2018 (images) dataset and put it inside the VOT2018 folder? I found pysot-toolkit, but did not understand it properly. How do I download the VOT2018 dataset?
0easy
Title: quokka.utils.paas broken in Python 3 Body: This file uses **execute** to activate a venv; find a solution for Python 3.
0easy
Title: Default URL param value for Gravatar URL have been deprecated (`mm` -> `mp`) Body: ### Issue Summary We currently pass in `mm` to the `d` (default) param, this is used to determine what avatar will show if there's no matching avatar. However, the latest documentation advises that this should be `mp` (mystery person) instead. https://github.com/wagtail/wagtail/blob/c2676af857a41440e05e03038d85a540dcca3ce2/wagtail/users/utils.py#L28-L29 https://github.com/wagtail/wagtail/blob/c2676af857a41440e05e03038d85a540dcca3ce2/wagtail/users/utils.py#L45 https://docs.gravatar.com/api/avatars/images/#default-image ### Describe the solution you'd like Update the param value from `mm` to `mp` and ensure any unit tests are updated. This way, if the support for this legacy value gets dropped, it will not be a breaking change for Wagtail users. ### Describe alternatives you've considered It might be nice to have a better approach to this by allowing the param to be passed into the function / overridden somehow. Best to discuss that in a different issue though - see https://github.com/wagtail/wagtail/issues/12659 ### Additional context Two PRs have attempted this (and other changes), see the feedback and the PRs for reference. - #11077 - #11800 ### Working on this - Anyone can contribute to this, be sure you understand how to reproduce the avatar scenario. - It might be good to tackle this small change before tackling the other related issues. - View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
0easy
Title: Duplicate JWT cookie descriptions Body: **Description** In the section for JWT auth & cookies, both `Lax` and `Strict` descriptions in README are the same. **Screenshots** ![image](https://github.com/igorbenav/FastAPI-boilerplate/assets/122524301/fc42b662-ccce-48f1-a353-e92e565da8e0)
0easy
Title: Unexpected exception while updating completions Body: <!--- Provide a general summary of the issue in the Title above -->
When I set $UPDATE_COMPLETIONS_ON_KEYPRESS = True and type, for instance, /usr/bin/ls -a in the terminal, the following exception is thrown:
"Exception [Errno 13] Permission denied: '/usr/bin/ls.json'"

<!--- If you have a question along the lines of "How do I do this Bash command in xonsh" please first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html If you don't find an answer there, please do open an issue! -->

## xonfig

<details>

```
+------------------+----------------------+
| xonsh            | 0.13.4               |
| Python           | 3.8.10               |
| PLY              | 3.11                 |
| have readline    | True                 |
| prompt toolkit   | 3.0.36               |
| shell type       | prompt_toolkit       |
| history backend  | json                 |
| pygments         | 2.14.0               |
| on posix         | True                 |
| on linux         | True                 |
| distro           | ubuntu               |
| on wsl           | False                |
| on darwin        | False                |
| on windows       | False                |
| on cygwin        | False                |
| on msys2         | False                |
| is superuser     | False                |
| default encoding | utf-8                |
| xonsh encoding   | utf-8                |
| encoding errors  | surrogateescape      |
| xontrib          | []                   |
| RC file 1        | /home/ralis/.xonshrc |
+------------------+----------------------+
```

</details>

## Expected Behavior
<!--- Tell us what should happen -->
The warning should be either more subtle or no completion suggestions should be shown.

## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
Huge multi-line error is printed.

<!--- If part of your bug report is a traceback, please first enter debug mode before triggering the error To enter debug mode, set the environment variable `XONSH_DEBUG=1` _before_ starting `xonsh`. On Linux and OSX, an easy way to to do this is to run `env XONSH_DEBUG=1 xonsh` -->

### Traceback (if applicable)

<details>

```
Unhandled exception in event loop:
File "/home/ralis/.local/lib/python3.8/site-packages/prompt_toolkit/buffer.py", line 1939, in new_coroutine
    await coroutine(*a, **kw)
File "/home/ralis/.local/lib/python3.8/site-packages/prompt_toolkit/buffer.py", line 1763, in async_completer
    async for completion in async_generator:
File "/home/ralis/.local/lib/python3.8/site-packages/prompt_toolkit/completion/base.py", line 326, in get_completions_async
    async for completion in completer.get_completions_async(
File "/home/ralis/.local/lib/python3.8/site-packages/prompt_toolkit/completion/base.py", line 202, in get_completions_async
    for item in self.get_completions(document, complete_event):
File "/usr/local/lib/python3.8/dist-packages/xonsh/ptk_shell/completer.py", line 58, in get_completions
    completions, plen = self.completer.complete(
File "/usr/local/lib/python3.8/dist-packages/xonsh/completer.py", line 121, in complete
    return self.complete_from_context(
File "/usr/local/lib/python3.8/dist-packages/xonsh/completer.py", line 272, in complete_from_context
    for comp in self.generate_completions(
File "/usr/local/lib/python3.8/dist-packages/xonsh/completer.py", line 233, in generate_completions
    for comp in res:
File "/usr/local/lib/python3.8/dist-packages/xonsh/completers/man.py", line 137, in completions
    for desc, opts in _parse_man_page_options(cmd).items():
File "/usr/local/lib/python3.8/dist-packages/xonsh/completers/man.py", line 121, in _parse_man_page_options
    path.write_text(json.dumps(options))
File "/usr/lib/python3.8/pathlib.py", line 1255, in write_text
    with self.open(mode='w', encoding=encoding, errors=errors) as f:
File "/usr/lib/python3.8/pathlib.py", line 1222, in open
    return io.open(self, mode, buffering, encoding, errors, newline,
File "/usr/lib/python3.8/pathlib.py", line 1078, in _opener
    return self._accessor.open(self, flags, mode)
Exception [Errno 13] Permission denied: '/usr/bin/ls.json'
```

</details>

## Steps to Reproduce
<!--- Please try to write out a minimal reproducible snippet to trigger the bug, it will help us fix it! -->

```xsh
$UPDATE_COMPLETIONS_ON_KEYPRESS = True
/usr/bin/ls - # exception after typing
```

## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
0easy
Title: Improve automatic bin determination for histograms Body: Currently, the formula for histogram binning sometimes results in bins that are very "skinny" and sometimes bins that are very "wide". We need to improve histogram bin width and size determination to ensure more accurate histograms are plotted. This is especially true for the "Filter" action. Example: ```python df = pd.read_csv("https://github.com/lux-org/lux-datasets/blob/master/data/olympic.csv?raw=True") df.intent=["Height"] df ``` ![image](https://user-images.githubusercontent.com/5554675/104189469-aa3cd980-5455-11eb-94c6-06849b836e5a.png) ![image](https://user-images.githubusercontent.com/5554675/104189512-bd4fa980-5455-11eb-9eca-9dad6fb71ae7.png) This needs to be customized for matplotlib and Altair.
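As one possible direction (this is not Lux's current logic, just a sketch), the default bin count could come from the Freedman-Diaconis rule, capped so charts stay readable:

```python
import numpy as np


def fd_bin_count(values: np.ndarray, max_bins: int = 50) -> int:
    """Freedman-Diaconis bin count, clipped to [1, max_bins]."""
    values = values[~np.isnan(values)]
    iqr = np.subtract(*np.percentile(values, [75, 25]))
    if iqr == 0:  # degenerate spread; fall back to a fixed default
        return 10
    width = 2 * iqr / len(values) ** (1 / 3)
    bins = int(np.ceil((values.max() - values.min()) / width))
    return max(1, min(bins, max_bins))
```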
0easy
Title: Please support callback_on_step_end for the following pipelines Body: **Is your feature request related to a problem? Please describe.**
The missing callback_on_step_end in these pipelines takes away the capability to show progress in a UI.

**Describe the solution you'd like.**
Please support callback_on_step_end

**Describe alternatives you've considered.**
N.A.

**Additional context.**
1. AuraFlowPipeline
   TypeError: AuraFlowPipeline.__call__() got an unexpected keyword argument 'callback_on_step_end'
2. LuminaText2ImgPipeline
0easy
Title: Change docstring style to google-style Body: Right now the docstrings use NumPy style. Please change to Google style (https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html), because I would like to integrate this doc with mkdocs-material, and the only reasonable package to do this in an automated way (mkdocstrings) works only with Google-style docstrings.
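For reference, the requested target format looks like this (toy example):

```python
def scale(values: list[float], factor: float) -> list[float]:
    """Scale every value by a constant factor.

    Args:
        values: Numbers to scale.
        factor: Multiplier applied to each value.

    Returns:
        A new list with each element multiplied by ``factor``.
    """
    return [v * factor for v in values]
```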
0easy
Title: [Bug] fix DeepSeek V2/V3 awq Body: ### Checklist - [ ] 1. I have searched related issues but cannot get the expected help. - [ ] 2. The bug has not been fixed in the latest version. - [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback. - [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed. - [ ] 5. Please use English, otherwise it will be closed. ### Describe the bug I tried to integrate the awq dequant from sgl-kernel and found that both the main version and the integrated version have issues with the awq of DeepSeek V2 Coder and DeepSeek V3, which need to be fixed. ``` casperhansen/deepseek-coder-v2-instruct-awq cognitivecomputations/DeepSeek-V3-AWQ ``` ### Reproduction N/A ### Environment N/A
0easy
Title: [BUG-REPORT] Issue converting object to string Body: **Description** Trying to convert an "object" column to string fails. Example code: ```python import vaex import numpy as np if __name__ == "__main__": arr = np.array([123, "test", None], dtype=object) df = vaex.from_arrays(test=arr) df['test'] = df['test'].astype('str') print(df.head()) ``` Exception: ```python TypeError: to_string(): incompatible function arguments. The following argument types are supported: 1. (arg0: numpy.ndarray[float32]) -> vaex.superstrings.StringList64 2. (arg0: numpy.ndarray[float64]) -> vaex.superstrings.StringList64 3. (arg0: numpy.ndarray[int64]) -> vaex.superstrings.StringList64 4. (arg0: numpy.ndarray[int32]) -> vaex.superstrings.StringList64 5. (arg0: numpy.ndarray[int16]) -> vaex.superstrings.StringList64 6. (arg0: numpy.ndarray[int8]) -> vaex.superstrings.StringList64 7. (arg0: numpy.ndarray[uint64]) -> vaex.superstrings.StringList64 8. (arg0: numpy.ndarray[uint32]) -> vaex.superstrings.StringList64 9. (arg0: numpy.ndarray[uint16]) -> vaex.superstrings.StringList64 10. (arg0: numpy.ndarray[uint8]) -> vaex.superstrings.StringList64 11. (arg0: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 12. (arg0: numpy.ndarray[float32], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 13. (arg0: numpy.ndarray[float64], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 14. (arg0: numpy.ndarray[int64], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 15. (arg0: numpy.ndarray[int32], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 16. (arg0: numpy.ndarray[int16], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 17. (arg0: numpy.ndarray[int8], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 18. (arg0: numpy.ndarray[uint64], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 19. (arg0: numpy.ndarray[uint32], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 20. (arg0: numpy.ndarray[uint16], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 21. (arg0: numpy.ndarray[uint8], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 22. (arg0: numpy.ndarray[bool], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64 Invoked with: array([123], dtype=object) Process finished with exit code 0 ``` **Software information** - Vaex version (`import vaex; vaex.__version__)`: ``` {'vaex': '4.5.0', 'vaex-core': '4.6.0a5', 'vaex-viz': '0.5.0', 'vaex-hdf5': '0.10.0', 'vaex-server': '0.6.1', 'vaex-astro': '0.9.0', 'vaex-jupyter': '0.6.0', 'vaex-ml': '0.14.0', 'vaex-arrow': '0.4.2'} ``` - Vaex was installed via: pip - OS: Ubuntu 20.04 **Additional information** This makes it impossible to perform an export to HDF5 or other operations that do not support the object type. This is especially blocking when loading CSV files. Should be relevant to #1568.
0easy
Title: Improve docs re suffixed responders Body: - [x] Add an example to the `suffix` kwarg docstring for `add_route()` - [x] Add a section about suffixed responders to the [routing](https://falcon.readthedocs.io/en/stable/api/routing.html) page
0easy
Title: Bus factor metric API Body: The canonical definition is here: https://chaoss.community/?p=3944
0easy
Title: Feature: Add ability to specify on_assign, on_revoke, and on_lost callbacks for a Confluent subscriber Body: **Is your feature request related to a problem? Please describe.**
Yes, I want to know when my confluent Consumer gets topic partitions assigned and removed. Currently, I reach through FastStream into confluent_kafka.Consumer.assignment() every time my k8s liveness probe runs, but it's noisy and, most notably, not *right* when it happens. I may even, at times, want to do something with the information beyond logging. Potentially clear some cached state, cancel some running threads/processes, etc...

**Describe the solution you'd like**
I want to specify at the subscriber registration level the callbacks that I want called, and for FastStream to pass them into the confluent_kafka.Consumer.subscribe() call inside AsyncConfluentConsumer.

**Feature code example**

```python
from faststream import FastStream
...

broker = KafkaBroker(...)

@broker.subscriber(
    "my-topic",
    on_assign=lambda consumer, partitions: ...,
    on_revoke=lambda consumer, partitions: ...,
)
def my_handler(body: str):
    print(body)
```

**Describe alternatives you've considered**
I monkey patch AsyncConfluentConsumer at import time in the FastStream library.

```python
import faststream.confluent.broker.broker
from faststream.confluent.broker.broker import AsyncConfluentConsumer

from observing.observing import logger


class PatchedAsyncConfluentConsumer(AsyncConfluentConsumer):
    """A patched version of the AsyncConfluentConsumer class."""

    def __init__(self, *topics, **kwargs):
        super().__init__(*topics, **kwargs)
        self.topic_partitions = set()

    def on_revoke(self, consumer, partitions):
        """Conditionally pauses the consumer when partitions are revoked."""
        self.topic_partitions -= set(partitions)
        logger.info(
            "Consumer rebalance event: partitions revoked.",
            topic_partitions=dict(
                n_revoked=len(partitions),
                revoked=[
                    dict(topic=tp.topic, partition=tp.partition) for tp in partitions
                ],
                n_current=len(self.topic_partitions),
                current=[
                    dict(topic=tp.topic, partition=tp.partition)
                    for tp in self.topic_partitions
                ],
            ),
            memberid=self.consumer.memberid(),
            topics=self.topics,
            config=dict(
                group_id=self.config.get("group.id"),
                group_instance_id=self.config.get("group.instance.id"),
            ),
        )

    def on_assign(self, consumer, partitions):
        """Conditionally resumes the consumer when partitions are assigned."""
        self.topic_partitions |= set(partitions)
        logger.info(
            "Consumer rebalance event: partitions assigned.",
            topic_partitions=dict(
                n_assigned=len(partitions),
                assigned=[
                    dict(topic=tp.topic, partition=tp.partition) for tp in partitions
                ],
                n_current=len(self.topic_partitions),
                current=[
                    dict(topic=tp.topic, partition=tp.partition)
                    for tp in self.topic_partitions
                ],
            ),
            memberid=self.consumer.memberid(),
            topics=self.topics,
            config=dict(
                group_id=self.config.get("group.id"),
                group_instance_id=self.config.get("group.instance.id"),
            ),
        )

    async def start(self) -> None:
        """Starts the Kafka consumer and subscribes to the specified topics."""
        self.consumer.subscribe(
            self.topics, on_revoke=self.on_revoke, on_assign=self.on_assign
        )


def patch_async_confluent_consumer():
    logger.info("Patching AsyncConfluentConsumer.")
    faststream.confluent.broker.broker.AsyncConfluentConsumer = (
        PatchedAsyncConfluentConsumer
    )
```

Obviously, this is ideal for no one.

**Additional context**
0easy
Title: Selecting Match phrase case to typed abbreviation also selects its opposite, Ignore case of typed abbreviation Body: ## Classification: Bug ## Reproducibility: Always AutoKey version: 0.95.10 Both If the problem is known to be present in more than one version, please list all of those. Installed via: debs from GitHub Linux Distribution: kubuntu 18.04 and others ## Summary Selecting Match phrase case to typed abbreviation also selects its opposite, Ignore case of typed abbreviation in the GUI ## Steps to Reproduce (if applicable) Define a phrase and select Match phrase case to typed abbreviation ## Expected Results Just that one option should be selected ## Actual Results Ignore case of typed abbreviation also becomes automatically selected - which should be mutually exclusive with the selected option ## Notes In 0.95.10, if you define a phrase and select Match phrase case to typed abbreviation, it automatically also selects Ignore case of typed abbreviation which makes no sense to me. This only works one way. Selecting Ignore case of typed abbreviation does not auto-select Match phrase case to typed abbreviation. I recreated this on both front ends. I find it most curious that this bug appears in both front ends. I thought most of that code was disjoint. I did not check which option actually takes effect, but I believe it honors the first option. If it didn't, we would probably have seen numerous error reports starting shortly after 0.95.10 was released (assuming that's where the bug was introduced - which has not been investigated.)
0easy
Title: Process: Kill process if Robot's timeout occurs when waiting for process to end Body: Issue #5345 reported that Robot's timeouts weren't able to stop `Run Process` or `Wait For Process` keywords. That was fixed so that these keywords can be stopped, but processes that the keywords were waiting for were left running. Leaving processes running in the background is likely not a good idea, especially because they have often hung in this case. This issue proposes killing the processes instead.

Killing processes if Robot's timeout occurs requires handling the timeout in the library code. That is actually surprisingly easy: catch `robot.errors.TimeoutError` and re-raise it once the process has been killed. There could be other libraries that want to do such cleanup as well, and documenting how to do that in the User Guide is probably a good idea. I'll submit a separate issue about that.

Notice that killing processes as proposed above doesn't fully prevent processes from being left running. That can still happen if you use `Start Process` and Robot's timeout occurs before `Wait For Process` is called. We could enhance the library by adding some kind of auto-closing functionality to it, but I don't consider that too high priority because the library already has `Terminate All Processes` that can be used in a test or suite teardown.
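A minimal sketch of the pattern described above (the function is illustrative, not the library's actual internals):

```python
from robot.errors import TimeoutError as RobotTimeoutError


def wait_for_process(process):
    """Wait for a subprocess.Popen to end; kill it if Robot's timeout fires."""
    try:
        return process.wait()
    except RobotTimeoutError:
        # Robot's test/keyword timeout interrupted the wait: kill the
        # (likely hung) process first, then let the timeout propagate.
        process.kill()
        process.wait()
        raise
```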
0easy
Title: Marketplace - agent page - update font of description header Body: ### Describe your issue. <img width="598" alt="Screenshot 2024-12-17 at 18 53 43" src="https://github.com/user-attachments/assets/32fe7be0-ef3e-400b-a750-372381a8d177" /> Please update font to the "p-ui-medium" style in the typography sheet Update font to the following: font-family: Geist; font-size: 16px; font-weight: 500; line-height: 24px; text-align: left; text-underline-position: from-font; text-decoration-skip-ink: none; **Update color to:** background: var(--neutral-800, #262626);
0easy
Title: [Reservations] Only waits for reservations Body: Reservations can be much cheaper than on-demand instances. Once a reservation is purchased for a future period of time, a user wants `sky launch` to wait only for the reservation to be ready before launching the job, without risking getting an on-demand cluster at a much higher price.

It could be done by:
1. Allowing a new value for the `prioritize_reservations` field in `~/.sky/config.yaml`: `reservation_only`
2. Allowing `prioritize_reservations` to be specified in the SkyPilot yaml, i.e. the experimental section.
0easy
Title: ImportError: cannot import name 'CLOSED' from 'websockets.connection' Body: ### Is there an existing issue for this? - [X] I have searched the existing issues ### Describe the bug We're using Sanic `21.12.2` at rasa and notice this bug whenever rasa tries to spin a sanic server, ``` File "/Users/zi/Work/Rasa/venv-rasa-oss-3-10-2/lib/python3.10/site-packages/sanic/mixins/startup.py", line 57, in <module> from sanic.server.protocols.websocket_protocol import WebSocketProtocol File "/Users/zi/Work/Rasa/venv-rasa-oss-3-10-2/lib/python3.10/site-packages/sanic/server/protocols/websocket_protocol.py", line 3, in <module> from websockets.connection import CLOSED, CLOSING, OPEN ImportError: cannot import name 'CLOSED' from 'websockets.connection' (/Users/zi/Work/Rasa/venv-rasa-oss-3-10-2/lib/python3.10/site-packages/websockets/connection.py) ``` It seems like https://github.com/sanic-org/sanic/pull/2609 addresses it but these changes are not available to the branch for version 21 `21.12LTS`. ### Code snippet _No response_ ### Expected Behavior I would have expected to not see any error when starting the sanic server. I get expected behaviour when using sanic version `22.12.0` and `23.3.0` ### How do you run Sanic? Sanic CLI ### Operating System MacOS ### Sanic Version Sanic 21.12.2; Routing 0.7.2 ### Additional context _No response_
0easy
Title: Update configs to use `dataclasses` Body: We have some specific configs for some algorithms; it would be nice to update them from using `dicts`/`TypedDict` to `dataclasses`. The idea here is to do it in a way that does not break things, so we should have an interface (to/from) between `dict` and the `dataclasses`. Example of what we can explore for these methods
```python
>>> from dataclasses import dataclass, asdict
>>> @dataclass
... class A:
...     b: int
... 
>>> asdict(A(1))
{'b': 1}
>>> A(**asdict(A(1)))
A(b=1)
```

_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/2092#discussion_r1049564760_

List of some configs to be replaced:
- [ ] kornia.feature.adalam.core.AdalamConfig
- [ ] kornia.contrib.face_detection.FaceDetector.config #2851
- [ ] kornia.feature.keynet.KeyNet_conf - #2254
- [ ] [kornia.feature.loftr.loftr.default_cfg](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/loftr/loftr.py#L25)
- [ ] kornia.feature.loftr.loftr_module.fine_preprocess.FinePreprocess.config
- [ ] kornia.feature.loftr.loftr_module.transformer.LocalFeatureTransformer.config
- [ ] kornia.feature.loftr.utils.coarse_matching.CoarseMatching.config
- [ ] kornia.feature.loftr.utils.supervision config
- [ ] [kornia.feature.loftr.backbone.resnet_fpn config](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/loftr/backbone/resnet_fpn.py#L50)
- [ ] kornia.feature.matching._get_default_fginn_params
- [x] [kornia.feature.sold2.backbones.SOLD2Net.cfg](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/sold2/backbones.py#L377) - #2880
- [x] [kornia.feature.sold2.sold2.default_cfg](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/sold2/sold2.py#L18)
- [x] [kornia.feature.sold2.sold2_detector.default_cfg](https://github.com/kornia/kornia/blob/2387a2d165a977f1646332d6cbe6b915a4806e37/kornia/feature/sold2/sold2_detector.py#L17)
- [ ] #2901
- [ ] #2908
0easy
Title: Warn when chat.postMessage is called without `text` argument Body: It's a best practice to always provide a `text` argument when posting a message, even though the platform doesn't technically require it when `blocks` are provided. The `text` argument is used in places where `blocks` cannot be rendered such as: system push notifications, assistive technology such as screen readers, etc. In order to help apps adhere to this best practice and give users a more accessible experience using Slack, we should add a warning when the `chat.postMessage` (or possibly related methods like `chat.update`) are called without a `text` argument. This warning will be proactive. One day, the platform may also emit a warning in the response metadata. This SDK should already be set up to log response metadata warnings. When this happens, we should remove the proactive warning. ### Category (place an `x` in each of the `[ ]`) - [x] **slack_sdk.web.WebClient** (Web API client) - [ ] **slack_sdk.webhook.WebhookClient** (Incoming Webhook, response_url sender) - [ ] **slack_sdk.models** (UI component builders) - [ ] **slack_sdk.oauth** (OAuth Flow Utilities) - [ ] **slack_sdk.rtm.RTMClient** (RTM client) - [ ] **slack_sdk.signature** (Request Signature Verifier) ### Requirements Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
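A rough sketch of where such a check could live (the function name and message are placeholders, not the final implementation):

```python
import warnings


def warn_if_text_is_missing(api_method: str, kwargs: dict) -> None:
    """Emit a best-practice warning for message-posting calls without `text`."""
    if api_method in ("chat.postMessage", "chat.update") and not kwargs.get("text"):
        warnings.warn(
            f"The `text` argument is missing from a {api_method} request. "
            "Providing it improves push notifications and screen-reader output.",
            UserWarning,
        )
```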
0easy
Title: Add ResultMixin implementations for Dask native types Body: **Is your feature request related to a problem? Please describe.** We should implement useful implementations of: ```python class ResultMixin(object): """Base class housing the static function. Why a static function? That's because certain frameworks can only pickle a static function, not an entire object. """ @staticmethod @abc.abstractmethod def build_result(**outputs: typing.Dict[str, typing.Any]) -> typing.Any: """This function builds the result given the computed values.""" pass ``` for use with Dask. E.g. returning a Dask native array, dataframe, bag, etc. Currently the default is to return a *pandas dataframe*. See the `build_result` function in `DaskGraphAdapter` for a reference point on how it could be used. **Describe the solution you'd like** These should probably be placed in the `h_dask.py` module for now. Otherwise open to naming. Alternatively, we could include more options in `DaskGraphAdapter`. Open to thinking what way is the most user friendly solution going forward. **Additional context** The addition of these ResultMixins should enable a user who is using Dask, to not have to implement their own version, instead they can use the ones that come with Hamilton.
0easy
Title: CI: Add property-based testing Body: Let's adopt Quickcheck-style property-based testing throughout the library. # Details There are lots of opportunities in cleanlab to do property-based testing, because it has lots of functions where it's easy to write down a relational property about outputs that should always hold. As one simple example: we might have one function to compute the confident joint and another function to compute just the diagonal of the confident joint; we can write down the property `∀ pyx, valid_prob_matrix pyx → diagonal (compute_confident_joint pyx) = compute_confident_joint_diagonal pyx`. Systematically adopting property-based testing could help us catch more bugs. Some [existing tests](https://github.com/cleanlab/cleanlab/blob/master/tests/test_latent_algebra.py#L47) in cleanlab are essentially property-based tests, but they're only being evaluated on a single hard-coded input. Upgrading these to property-based testing should be easy after writing the appropriate generators. I'd suggest using the [Hypothesis](https://hypothesis.readthedocs.io/en/latest/) library for this purpose. It even has some [support for NumPy testing](https://hypothesis.readthedocs.io/en/latest/numpy.html).
0easy
Title: Fix the NPM_FILE_PATTERNS setting to work on windows Body: Instead of hardcoding the "/" to the entries `NPM_FILE_PATTERNS` you need to use `os.path.join`, as per: https://github.com/kevin1024/django-npm/issues/15 For example: ``` NPM_FILE_PATTERNS = { "a17t": [os.path.join("dist", "a17t.css"), os.path.join("dist", "tailwind.css")], "apexcharts": [os.path.join("dist", "apexcharts.min.js")], "litepicker": [os.path.join("dist", "js", "main.js")], "turbolinks": [os.path.join("dist", "turbolinks.js")], "stimulus": [os.path.join("dist", "stimulus.umd.js")], "inter-ui": [os.path.join("Inter (web)", "*")], "@fortawesome": [os.path.join("fontawesome-free", "js", "all.min.js")], } ``` TIA
0easy
Title: Explain how to force add raw data when small & necessary Body: Thanks to @epogrebnyak for reporting.
0easy
Title: leaderboard is showing negative values for f1 metric Body: when i run fit on tabular binary classification dataset i get the following leader board ![image](https://user-images.githubusercontent.com/101978729/198283721-88e00d9e-ecc7-4c3a-8e8d-5aa0bfb0dded.png) when i run model.report() it looks better , where the metrics are positive: ![image](https://user-images.githubusercontent.com/101978729/198284579-4333b77d-d55c-490d-99a7-70406ec61337.png)
0easy
Title: Improve user method of seeing pipelines generated Body: Currently, the easiest way for a user to see the pipelines included in the ensemble is through `estimator.show_models()` which just returns a `str` which needs to be manually parsed and looked through. There could definitely be a nicer format to view any such pipeline and provide easy access.
0easy
Title: Implement `DataTree.persist` Body: > we're still missing an implementation for `DataTree.persist`, which I skipped mostly because I wasn't sure how to test that. _Originally posted by @TomNicholas in https://github.com/pydata/xarray/issues/9670#issuecomment-2435984448_
0easy
Title: Stress test repeated construction and fitting from same process Body: From issue #1302, it appears autosklearn is a bit unstable when run many times in the same script, i.e. in a for loop. ```python for i in range(400): automodel = AutoSklearn(full_resources) automodel.fit(x, y) ``` We currently have no test for this and it would be good to see if we can reproduce the same `connection refused` error.
0easy
Title: Create page Themes Body: Create page themes for Quokka, http://quokkaproject.org/themes/ like http://opthemes.com/
0easy
Title: Show length error when Configs provided are too long Body: We should show an error when Configs provided are too long. This is overall part of the whole process of making the API more strict though.
0easy
Title: More convenient pagination support in SCIM / Audit Logs API clients Body: If it's possible I'd love for the SCIMClient's searches to be able to be paginated like the WebClient does with the SlackResponse. It looks like it could be done with the body returned. ### Category (place an `x` in each of the `[ ]`) - [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client) - [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender) - [ ] **slack_sdk.models** (UI component builders) - [ ] **slack_sdk.oauth** (OAuth Flow Utilities) - [ ] **slack_sdk.socket_mode** (Socket Mode client) - [ ] **slack_sdk.audit_logs** (Audit Logs API client) - [x] **slack_sdk.scim** (SCIM API client) - [ ] **slack_sdk.rtm** (RTM client) - [ ] **slack_sdk.signature** (Request Signature Verifier) ### Requirements Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
0easy
Title: Web Hook Rules check http headers in case sensitive manner Body: ## SUMMARY The case used for the header name in trigger.headers[<headername>] in a web-hook rule is treated in a case sensitive manner. HTTP headers are case insensitive so the case of the name in the headers should not e relevant. ### STACKSTORM VERSION 3.2.0 ##### OS, environment, install method Seen on one-line install and HA ## Steps to reproduce the problem See https://github.com/StackStorm/st2/issues/4995 for initial case. 1. Configure webhookrule with trigger.headers['X-GitHub-Event'] 2. Send in header via curl of X-GitHub-Event to webhook 3. Rule doesn't match 4. Change rule to be trigger.headers['X-Github-Event'] - rule matches ## Expected Results As http headers are case insensitive then it should not matter what case is used in the rule. Therefore no matter what case header is or case of rule then they should match. ## Actual Results Only matched when rule defined as X-Github-Event
0easy
Title: disabled prop for dcc.Interval is not used. Body: We have a `disabled` prop in the prop type definition of the Interval component, but it is not actually used by the code to disable the interval. Either remove it, since we have the max_intervals=0 logic to stop the interval, or make it actually stop the interval. https://community.plot.ly/t/interval-component-cannot-be-disabled-via-callback/14455 Proposed solution:
- [ ] replace the props argument in setInterval with `this.props`.
- [ ] remove the disabled check from the interval.
- [ ] check for the disabled prop in `componentWillReceiveProps`; stop the loop if it becomes true.
- [ ] restart the loop in `componentWillReceiveProps` when disabled becomes false again, if the loop was stopped.
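Until the prop is wired up, the `max_intervals=0` logic mentioned above can pause the interval from a callback; a small sketch assuming a recent Dash layout/callback API:

```python
from dash import Dash, dcc, html
from dash.dependencies import Input, Output

app = Dash(__name__)
app.layout = html.Div([
    dcc.Checklist(id="pause", options=[{"label": "Pause", "value": "pause"}], value=[]),
    dcc.Interval(id="interval", interval=1000),
    html.Div(id="out"),
])

@app.callback(Output("interval", "max_intervals"), Input("pause", "value"))
def toggle(value):
    # max_intervals=0 stops the timer; -1 lets it fire indefinitely.
    return 0 if "pause" in value else -1

@app.callback(Output("out", "children"), Input("interval", "n_intervals"))
def tick(n):
    return f"ticks: {n}"

if __name__ == "__main__":
    app.run_server(debug=True)
```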
0easy
Title: Fedora 41 compatibility — imghdr removed with python 3.13. Body: ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland? Xorg ### Has this issue already been reported? - [X] I have searched through the existing issues. ### Is this a question rather than an issue? - [X] This is not a question. ### What type of issue is this? None ### Choose one or more terms that describe this issue: - [ ] autokey triggers - [X] autokey-gtk - [X] autokey-qt - [ ] beta - [ ] bug - [ ] critical - [ ] development - [ ] documentation - [X] enhancement - [X] installation/configuration - [ ] phrase expansion - [ ] scripting - [X] technical debt - [ ] user interface ### Other terms that describe this issue if not provided above: fedora python fc41 imghdr ### Which Linux distribution did you use? Moving to Fedora 41 introduced a fresh error for me: ModuleNotFoundError: No module named 'imghdr' Python 3.13 no longer includes imghdr, which, at least in my context, prevented autokey-gtk / autokey-qt from starting. I suspect any autokey install on a distribution that's moving towards Python 3.13 will want to implement this (with some research, there may be better ways; I'm a novice). For my own setup this worked: I've swapped out the imghdr-reliant code for imagesize (plus python-magic for the type check) — specifically within: /usr/lib/python3.13/site-packages/autokey/scripting/highlevel.py, lines 66-78. I switched out: ``` def get_png_dim(filepath: str) -> int: """ Usage: C{get_png_dim(filepath:str) -> (int)} Finds the dimension of a PNG. @param filepath: file path of the PNG. @returns: (width, height). @raise Exception: Raised if the file is not a png """ if not imghdr.what(filepath) == 'png': raise Exception("not PNG") head = open(filepath, 'rb').read(24) return struct.unpack('!II', head[16:24]) ``` for:
```
import os

import imagesize
import magic  # from the python-magic package

def get_png_dim(filepath: str) -> tuple[int, int]:
    """
    Usage: get_png_dim(filepath:str) -> (width: int, height: int)

    Finds the dimension of a PNG.
    @param filepath: file path of the PNG.
    @returns: tuple of (width, height).
    @raise Exception: Raised if the file is not a png
    """
    # Check if file exists
    if not os.path.isfile(filepath):
        raise Exception("File does not exist")

    # Check if file is PNG using python-magic
    file_type = magic.from_file(filepath, mime=True)
    if file_type != 'image/png':
        raise Exception("not PNG")

    # Get dimensions using imagesize
    try:
        width, height = imagesize.get(filepath)
        return width, height
    except Exception:
        raise Exception("Could not determine PNG dimensions")
```
At the top, replaced 'import imghdr' with 'import os', 'import magic', and 'import imagesize', then ran 'pip install imagesize python-magic'. Afterwards, autokey should start up like normal.
Including the original error just in case it helps folks find this:
```
Traceback (most recent call last):
  File "/usr/bin/autokey-gtk", line 33, in <module>
    sys.exit(load_entry_point('autokey==0.96.0', 'console_scripts', 'autokey-gtk')())
             ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/bin/autokey-gtk", line 25, in importlib_load_entry_point
    return next(matches).load()
           ~~~~~~~~~~~~~~~~~~^^
  File "/usr/lib64/python3.13/importlib/metadata/__init__.py", line 179, in load
    module = import_module(match.group('module'))
  File "/usr/lib64/python3.13/importlib/__init__.py", line 88, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 1022, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "/usr/lib/python3.13/site-packages/autokey/gtkui/__main__.py", line 4, in <module>
    from autokey.gtkapp import Application
  File "/usr/lib/python3.13/site-packages/autokey/gtkapp.py", line 39, in <module>
    from autokey import service, monitor
  File "/usr/lib/python3.13/site-packages/autokey/service.py", line 36, in <module>
    import autokey.scripting
  File "/usr/lib/python3.13/site-packages/autokey/scripting/__init__.py", line 24, in <module>
    from . import highlevel
  File "/usr/lib/python3.13/site-packages/autokey/scripting/highlevel.py", line 9, in <module>
    import imghdr
ModuleNotFoundError: No module named 'imghdr'
```
### Which AutoKey GUI did you use? Both ### Which AutoKey version did you use? _No response_ ### How did you install AutoKey? _No response_ ### Can you briefly describe the issue? AutoKey on Fedora 41 won't start due to the missing imghdr Python module. ### Can the issue be reproduced? Always ### What are the steps to reproduce the issue? _No response_ ### What should have happened? _No response_ ### What actually happened? _No response_ ### Do you have screenshots? _No response_ ### Can you provide the output of the AutoKey command? _No response_ ### Anything else? _No response_
0easy
Title: Support for setting partitioned cookies Body: ### Is your feature request related to a problem? I need to use partitioned cookies to set cookies in contexts where third-party cookies are otherwise restricted. ### Describe the solution you'd like A recent addition to the Set-Cookie header is the ability to mark cookies as partitioned (see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#partitioned, https://developer.mozilla.org/en-US/docs/Web/Privacy/Privacy_sandbox/Partitioned_cookies). `StreamResponse.set_cookie` should support setting partitioned cookies, e.g. using a `partitioned=True` keyword argument. ### Describe alternatives you've considered I can create a 'Set-Cookie' header manually, but would have to figure out edge cases with encoding and escaping special characters myself. This would be inconvenient and probably error-prone. ### Related component Server ### Additional context _No response_ ### Code of Conduct - [X] I agree to follow the aio-libs Code of Conduct
0easy
Title: [BUG] Error for pandas memoization is confusing Body: **Describe the bug** A user asked questions about the following error: ``` Failed memoization speedup attempt due to Pandas internal hash function failing. Continuing without memoization speedups ``` We can give a more constructive response like `... This is fine, but for speedups around skipping re-uploads of previously seen tables, try identifying which columns have types that Pandas cannot hash, and convert them to hashable types like strings.`
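The improved message could even point at a snippet like this for finding and converting the offending columns; a minimal sketch using pandas' public hashing helper (the function name is ours, not Graphistry's API):

```python
import pandas as pd

def stringify_unhashable_columns(df: pd.DataFrame) -> pd.DataFrame:
    # Try pandas' own hasher per column; convert the columns it rejects
    # (e.g. lists or dicts in object cells) to strings.
    out = df.copy()
    for col in out.columns:
        try:
            pd.util.hash_pandas_object(out[col])
        except TypeError:
            out[col] = out[col].astype(str)
    return out
```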
0easy
Title: e_forward() not returning results without IDs named 'src' and 'dest' Body: **Describe the bug** Customer reported a bug with GFQL's e_forward() not returning the correct results unless the edge df IDs are named 'src' and 'dest'. **To Reproduce**
```
import graphistry
import pandas as pd
from graphistry import (
    # graph operators
    n, e_undirected, e_forward, e_reverse, e,
    # attribute predicates
    is_in, ge, startswith, contains, match as match_re
)

graphistry.register(...)
edges_df = pd.read_csv('blueprint_edges.csv')
nodes_df = pd.read_csv('blueprint_nodes.csv')
edges_df['src'] = edges_df['parent_elid']
edges_df['dest'] = edges_df['elid']

# create two different graphs with the same cols, just named differently:
g = graphistry.edges(edges_df, 'parent_elid', 'elid').nodes(nodes_df, 'elid')
g2 = graphistry.edges(edges_df, 'src', 'dest').nodes(nodes_df, 'elid')

# Test:
# expected behavior: be able to use e_forward to get edges
# observed behavior: get no edges back unless using e() or e_undirected()
def test_eforward(g):
    guc = g.chain([
        n({"elid": "905e3174aa"}),
        e_forward(),
        n()
    ])
    print('nodes:\n', guc._nodes, '\n')
    print('edges:\n', guc._edges)

test_eforward(g)
test_eforward(g2)
# notice that the first call with 'g' does not return any edges, whereas the call with 'g2' does
```
[blueprint_nodes.zip](https://github.com/user-attachments/files/17929257/blueprint_nodes.zip) **Expected behavior** be able to use e_forward to get edges **Actual behavior** get no edges back unless using e() or e_undirected(), or changing the IDs to `src` and `dest` **Screenshot** showing different counts with the same column contents for IDs, but different names: ![image](https://github.com/user-attachments/assets/97cc51d2-355d-4a3b-9806-20ef30cf6a4e) **Graphistry GPU server environment** Hub v2.41.10 **PyGraphistry API client environment** - Where run `Jupyter Lab local` - Version `0.34.17` - Python Version `Python 3.8.5`
0easy
Title: [UI][XXS] Make the logo in the sidebar have rounded edges Body: **Description:** Update the logo in the sidebar to have rounded edges for a smoother appearance. The logo is defined in the following file and line: [Sidebar.jsx](https://github.com/StructuredLabs/preswald/blob/4a04b97229185e6bc22117e0c947196f60b64c6e/frontend/src/components/Sidebar.jsx#L9) **Acceptance Criteria:** 1. The logo in the sidebar should have rounded edges. 2. Verify the update does not introduce any UI/UX regressions.
0easy
Title: Improve the builder "add block" placement algorithm Body: We need a way to place newly added blocks so that, even when there is no free space at the target spot, adding a block never scrolls the screen or changes the zoom level; one possible search is sketched below.
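One possible shape for that search, sketched with assumed names and grid units: spiral outward from the requested spot and take the first free cell, leaving the camera untouched:

```python
from itertools import count

def find_free_position(origin, occupied, step=1):
    # Walk outward in growing square rings around the requested origin and
    # return the first cell that doesn't collide with an existing block.
    # The camera/zoom is never touched; only the block position moves.
    ox, oy = origin
    if (ox, oy) not in occupied:
        return ox, oy
    for radius in count(1):
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                if max(abs(dx), abs(dy)) == radius:  # ring cells only
                    candidate = (ox + dx * step, oy + dy * step)
                    if candidate not in occupied:
                        return candidate
```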
0easy
Title: Add flag to only use already cached blocks Body: Allow users to specify the caching dir for the model's blocks. **BONUS:** add a flag to only use already cached blocks, excluding any blocks that are not already on the server.
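A hypothetical CLI sketch of the two options; the flag names are assumptions, not the project's existing interface:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--cache-dir", default=None,
                    help="Directory where model blocks are cached.")
parser.add_argument("--cached-blocks-only", action="store_true",
                    help="Serve only blocks already present in the cache; "
                         "never download blocks that are missing.")
args = parser.parse_args()
```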
0easy
Title: Set `timeout-minutes` Body: ### Summary It's a good practice to always set `timeout-minutes` for github actions workflows to fail fast when a hang occurs: ```diff diff --git a/.github/workflows/maintainer-approval.yml b/.github/workflows/maintainer-approval.yml index 083cff6ce..6bd5816f5 100644 --- a/.github/workflows/maintainer-approval.yml +++ b/.github/workflows/maintainer-approval.yml @@ -6,6 +6,7 @@ on: jobs: check: runs-on: ubuntu-latest + timeout-minutes: 5 permissions: pull-requests: read steps: diff --git a/.github/workflows/team-review.yml b/.github/workflows/team-review.yml index 571b6cfd9..def1a9675 100644 --- a/.github/workflows/team-review.yml +++ b/.github/workflows/team-review.yml @@ -8,6 +8,7 @@ jobs: review: runs-on: ubuntu-latest if: ${{ github.event.requested_reviewer.login == 'mlflow-automation'}} + timeout-minutes: 5 permissions: pull-requests: write steps: ``` ### Notes - Make sure to open a PR from a **non-master** branch. - Sign off the commit using the `-s` flag when making a commit: ```sh git commit -s -m "..." # ^^ make sure to use this ``` - Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
0easy
Title: Incorrect AUC value in CatBoost chart with sample_weight Body: I trained a dataset with sample weights using 3 algos: LightGBM, Xgboost, and CatBoost. I found that the learning curve chart for CatBoost doesn't take the sample weights into account, but the score in the table does. Maybe you forgot to pass sample_weight to the CatBoost charts? I also see the problem in the ROC curve chart (but the behavior is the same across all models). Also, could this affect the training result, e.g. terminating at the wrong place? I saw the model train for many iterations. ![image](https://user-images.githubusercontent.com/15215732/167322423-1b70896b-890f-493e-ae7c-55bd1c7790c7.png) ![image](https://user-images.githubusercontent.com/15215732/167322603-542f692e-c088-4a4a-bd7b-f36720fb9d38.png)
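For reference, the weighted score the chart should agree with; scikit-learn's metric takes the weights directly (variable names stand in for the validation data above):

```python
from sklearn.metrics import roc_auc_score

# y_true, y_score, and weights are placeholders for the validation split
# and its sample weights from the report above.
weighted_auc = roc_auc_score(y_true, y_score, sample_weight=weights)
```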
0easy
Title: Collation support Body: Collation is still missing from ODMantic; I'm pretty sure implementing this would help a lot of users! ### Discussed in https://github.com/art049/odmantic/discussions/157 <div type='discussions-op-text'> <sup>Originally posted by **tylovejoy** July 13, 2021</sup> I use `find().collation(Collation(locale="en_US", numericOrdering=True))` with another library. Is there any support for this in ODMantic?</div>
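For reference, this is how collation is passed at the PyMongo level, i.e. the behavior ODMantic would need to surface; the collection and query here are illustrative:

```python
from pymongo import MongoClient
from pymongo.collation import Collation

collection = MongoClient().mydb.items
# Sort/compare strings with numeric ordering, so "item2" < "item10".
cursor = collection.find({"name": {"$exists": True}}).collation(
    Collation(locale="en_US", numericOrdering=True)
)
```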
0easy
Title: docs incorrect for 'how to customize templates' section Body: Hi there! There is an error in the docs, specifically in the [how to customize templates section](https://django-oscar.readthedocs.io/en/2.1.0/howto/how_to_customise_templates.html). According to issue #1378, the _base.html_ mentioned in the examples should be at '_templates/oscar/base.html_' instead of '_templates/base.html_'. I'm new to django oscar, and I spent a couple of hours before I realized what was wrong, so I think fixing this would be very helpful to new devs :) Regards!
0easy
Title: Add `Toggle 2D/3D` to the View menu Body: ## 🧰 Task The View menu should include the ability to toggle 2D/3D mode. Addresses part of https://github.com/napari/napari/issues/7611, because this will then be available in the command palette.
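The viewer already exposes the underlying state, so the menu action would essentially flip `ndisplay`; a minimal sketch of the action body (treat as illustrative, not the final implementation):

```python
def toggle_ndisplay(viewer) -> None:
    # napari renders in 2D when ndisplay == 2 and in 3D when ndisplay == 3.
    viewer.dims.ndisplay = 3 if viewer.dims.ndisplay == 2 else 2
```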
0easy
Title: one hot encoder of frequent categories should capture top categories with similar number of observations Body: At the moment the OHE will create binary variables for the top k categories with the most observations. However, if two categories have a similar number of observations, only one of them will be encoded and the other ignored. We probably want to encode both; a tie-aware selection is sketched below.
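A sketch of a tie-aware selection: keep every category whose count matches or exceeds the count of the k-th most frequent one (illustrative, not the library's code):

```python
import pandas as pd

def top_categories_with_ties(series: pd.Series, k: int) -> list:
    counts = series.value_counts()
    if len(counts) <= k:
        return counts.index.tolist()
    # Include every category tied with the k-th most frequent one,
    # so equally common categories are never arbitrarily dropped.
    cutoff = counts.iloc[k - 1]
    return counts[counts >= cutoff].index.tolist()
```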
0easy