Title: print_exception: rewrite deprecated code
Body: We have this line:
https://github.com/xonsh/xonsh/blob/55b341d47753967c70a4dcf9ff30690877f66048/xonsh/tools.py#L1056-L1058
But https://docs.python.org/3/library/sys.html says:
> These three variables are deprecated;
> sys.last_type
> sys.last_value
> sys.last_traceback
> Use [sys.last_exc](https://docs.python.org/3/library/sys.html#sys.last_exc) instead. They hold the legacy representation of sys.last_exc, as returned from [exc_info()](https://docs.python.org/3/library/sys.html#sys.exc_info) above.
We also have a related issue with this code: https://github.com/xonsh/xonsh/issues/5408
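A minimal migration sketch (an assumption, not the actual xonsh patch) that prefers `sys.last_exc` where it exists and falls back to the legacy triple on older Pythons:
```python
import sys

# Sketch: sys.last_exc was added in Python 3.12; fall back to the
# deprecated triple on older interpreters.
if hasattr(sys, "last_exc"):
    exc = sys.last_exc
    exc_info = (type(exc), exc, exc.__traceback__)
else:  # Python < 3.12
    exc_info = (sys.last_type, sys.last_value, sys.last_traceback)
```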
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Populate FAQ
Body: Right now it does not have much | 0easy
|
Title: Refactor autonames test to include the fixture right inside the test, not in a separate fixture
Body: Related: https://github.com/wemake-services/django-test-migrations/pull/34#pullrequestreview-365115366 | 0easy
|
Title: for deep learning, sort feature_responses by magnitude of effect
Body: | 0easy
|
Title: [nlp_data] Add CC-100
Body: ## Description
Add the CC-100 corpus that can be used for pretraining to `nlp_data`.
http://data.statmt.org/cc-100/
| 0easy
|
Title: Handlers for resuming only unmodified/idling/sleeping resources
Body: > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> An issue by [nolar](https://github.com/nolar) at _2019-11-18 20:48:46+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/241
>
## Background
As discussed in #223 comments (https://github.com/nolar/kopf/issues/223#issuecomment-554996297 and below), one type of handler is missing: when an operator restarts, it should process all the objects that existed before, but were neither updated nor deleted.
Originally, this was a technical solution for `@kopf.on.resume()` handlers in #96. But then it was intentionally changed in #105 & [0.16](https://github.com/nolar/kopf/releases/tag/0.16) to make the resuming handlers suitable for task/thread spawning, i.e. executed no matter what the object's state was on operator startup (created/updated/deleted). It was later fixed in #230 & [0.23rc1](https://github.com/nolar/kopf/releases/tag/0.23rc1) to be actually executed as intended.
## Goal
However, the use-case of "unmodified only" handlers is missing now, and cannot be simulated with any combination of the existing handlers.
Find a good name for it. `@kopf.on.notice()`? `@kopf.on.recall()`? `@kopf.on.existence()`? Anything else?
Add such a handler for "unmodified only" cases on the operator startup.
It must be triggered **only** if CREATE/UPDATE/DELETE cause reasons are **not** applicable. For all these causes, their relevant handlers will be executed, plus on-resume mixed-in handlers.
## Related
A use-case described for these handlers (https://github.com/nolar/kopf/issues/223#issuecomment-555124183) is basically a reconciliation, and may be related to #150 and #19.
In this case, however, the resource should be processed on the operator restarts — but should remain separated from the creation/update/deletion handlers for clarity, unlike `@kopf.on.resume()`, which is mixed in with all of them.
## Implementation hints
This should be easy. First, see `kopf.reactor.causation.Reason.RESUME`; it probably must be renamed to something else so it is not confused with the resuming handlers. Second, add a decorator in `kopf.on` with `reason=WHATEVERITISNAMED, initial=True`. This should be enough. Try it in action. Add a few tests (look for `on.resume` and `Reason.RESUME` tests). Add the docs.
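A rough usage sketch of such a decorator (`on.notice` is just one of the candidate names above, purely illustrative):
```python
import kopf

# Illustrative only: the decorator name and the reason are not decided yet.
@kopf.on.notice('zalando.org', 'v1', 'kopfexamples')
def notice_fn(body, **kwargs):
    # Called on operator startup only for objects that were neither
    # created, updated, nor deleted while the operator was down.
    print(f"Noticed pre-existing object: {body['metadata']['name']}")
```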
---
> <a href="https://github.com/pshchelo"><img align="left" height="30" src="https://avatars2.githubusercontent.com/u/1408702?v=4"></a> Commented by [pshchelo](https://github.com/pshchelo) at _2019-11-19 15:02:25+00:00_
>
first, thanks for your help with our use-case :+1:
as for naming, I'd suggest `on.exists`, but whatever floats your boat :-) | 0easy
|
Title: Upgrade to v2.0 of the Contributor Covenant code of conduct
Body: An updated version is available here: https://www.contributor-covenant.org/version/2/0/code_of_conduct/code_of_conduct.md
We currently use v1.4, and there are some nice changes and clarifications in v2.0. | 0easy
|
Title: Parallelize and speed up /index custom index commands
Body: Currently, we are using the gpt-index package to power the functionality behind /index commands. gpt-index is not async or parallel by default in its operations or network communications. We need to ensure that all other logic that we have is parallel and/or async such that we can have optimal performance when multiple people are using index commands at the same time. **Currently, /index commands are not async at all and are single-threaded**. | 0easy
|
Title: Burstiness metric API
Body: The canonical definition is here: https://chaoss.community/?p=3447 | 0easy
|
Title: Potential typo bug in pywinauto top-level file.
Body: At line 91 in `pywinauto/__init__.py` we have this code, which looks like a typo (`ElementNotFoundError = findwindows.ElementNotFoundError` is duplicated as-is). Wasn't it supposed to be `WindowNotFoundError` instead?
```
WindowAmbiguousError = findwindows.WindowAmbiguousError
ElementNotFoundError = findwindows.ElementNotFoundError
if UIA_support:
ElementNotFoundError = findwindows.ElementNotFoundError
ElementAmbiguousError = findwindows.ElementAmbiguousError
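# Suggested fix (assumption): the duplicated line under `if UIA_support:`
# was probably meant to be:
#     WindowNotFoundError = findwindows.WindowNotFoundError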
``` | 0easy
|
Title: 🐛fix some error messages
Body: ## Bug Report
To display some error messages, we are doing a condition on the error message that we get from the backend. It is error prone because the backend error message are internationalized.
## What to do
Find another way to display clean error message (code status ?).
## Code location
- [ ] Fix on AddMembers.tsx
https://github.com/numerique-gouv/impress/blob/60120852f502c40da961e07110a7cdafc5c1b3e0/src/frontend/apps/impress/src/features/docs/members/members-add/components/AddMembers.tsx#L88-L104
- [x] Fix on AIButton.tsx
https://github.com/numerique-gouv/impress/blob/60120852f502c40da961e07110a7cdafc5c1b3e0/src/frontend/apps/impress/src/features/docs/doc-editor/components/AIButton.tsx#L365-L374 | 0easy
|
Title: [BUG] 'Settings' object has no attribute 'LANGUAGE_COOKIE_HTTPONLY' with django-cms 3.10 and django 2.2
Body: <!--
Please fill in each section below, otherwise, your issue will be closed.
This info allows django CMS maintainers to diagnose (and fix!) your issue
as quickly as possible.
-->
## Description
Django-CMS 3.10 expects a LANGUAGE_COOKIE_HTTPONLY setting. [This setting is not part of Django in Django 2.2](https://docs.djangoproject.com/en/2.2/ref/settings/) but was [added in Django 3.0](https://docs.djangoproject.com/en/3.0/ref/settings/#language-cookie-httponly).
## Steps to reproduce
1. Install Django-CMS 3.10 with Django 2.2 LTS.
## Expected behaviour
Django-CMS should have a fallback value if LANGUAGE_COOKIE_HTTPONLY is missing.
## Actual behaviour
Django-CMS 3.10 expects the LANGUAGE_COOKIE_HTTPONLY setting, which is not part of Django 2.2, leading to an Internal Server Error.
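A minimal fallback sketch (an assumption, not the actual CMS patch), using Django 3.0's documented default of `False`:
```python
from django.conf import settings

# Sketch: getattr() with the Django 3.0 default keeps Django 2.2 working.
httponly = getattr(settings, "LANGUAGE_COOKIE_HTTPONLY", False)
```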
## Screenshots
<!--If applicable, add screenshots to help explain your problem.
-->
## Additional information (CMS/Python/Django versions)
Django-CMS 3.10
Django 2.2 LTS
Python 3.8.x
```
Traceback:
  File "/usr/local/envs/are.ucdavis.edu-3.8/lib64/python3.8/site-packages/django/core/handlers/exception.py" in inner
    34. response = get_response(request)
  File "/usr/local/envs/are.ucdavis.edu-3.8/lib64/python3.8/site-packages/django/utils/deprecation.py" in __call__
    96. response = self.process_response(request, response)
  File "/usr/local/envs/are.ucdavis.edu-3.8/lib64/python3.8/site-packages/cms/middleware/language.py" in process_response
    26. httponly=settings.LANGUAGE_COOKIE_HTTPONLY,
  File "/usr/local/envs/are.ucdavis.edu-3.8/lib64/python3.8/site-packages/django/conf/__init__.py" in __getattr__
    80. val = getattr(self._wrapped, name)
Exception Type: AttributeError at /
Exception Value: 'Settings' object has no attribute 'LANGUAGE_COOKIE_HTTPONLY'
Request information:
USER: AnonymousUser
GET: No GET data
POST: No POST data
FILES: No FILES data
```
<!--
Add any other context about the problem such as environment,
CMS/Python/Django versions, logs etc. here.
-->
## Do you want to help fix this issue?
<!--
The django CMS project is managed and kept alive by its open source community and is backed by the [django CMS Association](https://www.django-cms.org/en/about-us/). We therefore welcome any help and are grateful if people contribute to the project. Please use 'x' to check the items below.
-->
* [ ] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [X] No, I only want to report the issue.
| 0easy
|
Title: Dashboard product list is unusable if an image is missing
Body: ### Issue Summary
Thanks for introducing the thumbnailing abstraction. It's something we've always wanted to do! Unfortunately, I think I found a small regression.
On one of my Oscar staging sites, not all images that are in the database are actually on disk. This leads to display errors in the frontend. More importantly, the product list in the dashboard becomes totally unusable, because an exception is thrown.

The exception is thrown if `thumb.height` or `thumb.width` are accessed. And indeed, `product_row_image.html` (the dashboard template) accesses those, and the storefront `gallery.html` doesn't. That's why the storefront only has display errors, but the backend throws an exception.
I wanted to work around this by checking `{% if thumb %}`, but that check always succeeds. My workaround is to remove the height and width section from the template. That template is the only place where `thumb.height` and `width` are accessed.
IIRC, Oscar used to handle this more gracefully. `AbstractProduct.get_missing_image` and the `MissingProductImage` class were introduced back in the days to pass a special hardcoded image through the thumbnailer. I think they're still present, but not used. That way, display errors were avoided.
There are a few ways to fix this. I'm happy to assist, but have not worked on the new thumbnail abstraction, so would like some guidance on what the right fix is. The missing image approach was nice when it worked, but getting the file into the media folder was tricky. So I'm not convinced it's the right approach. What I do know is that accessing a missing image should never keep the site from working.
### Steps to Reproduce
1. Make note of the product PK of the first product with an image in the dashboard product list.
2. Use the Django shell to get the primary key of the `ProductImage` for that product.
3. In the database, change `catalogue_productimage.original` to a non-existent path.
4. Access that product in the frontend, and then in the dashboard product list.
### Technical details
* Python version: 3.8.5
* Django version: 2.2.21
* Oscar version: 2.1.1 (`image_tags.py` and `product_row_image.html` are unchanged on master, so I presume the issue still exists)
| 0easy
|
Title: cli should suggest the appropriate command if there's a typo
Body: e.g.:
```
ploomber exemples
```
Print:
```
'ploomber exemples' is not a valid command. did you mean 'ploomber examples'?
``` | 0easy
|
Title: Add CodeClimate
Body: E.g. If we look at https://github.com/trailofbits/protofuzz we can see the test coverage at the top and a link to code climate.
To fix this we more or less copy the codeclimate.yml and relevant parts of the top of the README.
(So the [easy] issues are good for new people who want to start contributing to look at.) | 0easy
|
Title: [ISSUE] Bot responds to system message and other bots' message
Body: **Describe the bug**
In `/index talk` and `/internet chat`, the bot responds to system messages and other bots' messages. This makes it impossible to change the thread's name and generates irrelevant responses to other bots' messages.
**Expected behavior**
The bot should not respond to system messages or other bots' messages.
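A minimal guard sketch (assuming a discord.py-style `on_message` handler; `handle_chat` is a placeholder for the existing chat logic):
```python
import discord

async def on_message(message: discord.Message):
    # Ignore bots (including ourselves) and anything that is not a normal
    # user message or a reply, e.g. system messages for thread renames.
    if message.author.bot or message.type not in (
        discord.MessageType.default,
        discord.MessageType.reply,
    ):
        return
    await handle_chat(message)  # placeholder for the existing chat logic
```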
**Screenshots**

| 0easy
|
Title: Naked `.. code-block:: bash` in some places throughout docs
Body: ## Issue
From [**Overriding configuration from the command line**](https://tox.wiki/en/latest/config.html#overriding-configuration-from-the-command-line):

Apparent source: #3111
This page appears to be generated from [`config.rst`](https://github.com/tox-dev/tox/blob/main/docs/config.rst#overriding-configuration-from-the-command-line). I'm not quite sure what the problem is here, since *some* code blocks work, but others (perhaps those with only a single line) don't. It's interesting to note that GH's renderer suffers from the same problem.
So this _**may**_ be a bug in Sphinx, but if a work-around is available (or maybe selecting an alternative Sphinx version fixes this), it is probably worth exploring for the sake of a polished experience for doc consumers. | 0easy
|
Title: `OrdinalEncoder` could output -1 for unseen categories
Body: The OrdinalEncoder has an errors argument which can either raise an error or output NaNs when encountering new categories. For this particular class, it'd make sense to output -1 when a new category is encountered instead of generating NaNs.
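An illustrative sketch of the desired behaviour, independent of the encoder's internals (the helper is hypothetical):
```python
import pandas as pd

# Categories unseen during fit become -1 instead of NaN.
def encode_with_unseen_as_minus_one(series: pd.Series, mapping: dict) -> pd.Series:
    return series.map(mapping).fillna(-1).astype(int)

codes = encode_with_unseen_as_minus_one(
    pd.Series(["a", "b", "zzz"]),  # "zzz" was not seen during fit
    {"a": 0, "b": 1},
)
print(codes.tolist())  # [0, 1, -1]
```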
| 0easy
|
Title: add installed pip package versions to get_system_info
Body: ### Is your feature request related to a problem? Please describe.
This is to generate better debug info; some people have weird issues because of terrible pip package management.
### Describe the solution you'd like
In interpreter infos show
Package name (poetry.lock version, installed version)
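A minimal sketch for the installed side (the poetry.lock comparison is left out; `importlib.metadata` is standard library since Python 3.8):
```python
from importlib.metadata import distributions

# Sketch: collect installed package versions for the debug report.
def installed_packages() -> dict:
    return {dist.metadata["Name"]: dist.version for dist in distributions()}

for name, version in sorted(installed_packages().items()):
    print(f"{name}=={version}")
```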
### Describe alternatives you've considered
_No response_
### Additional context
This is a good first issue, I'm available to help in the discord or here. Keep in mind I'm in European timezone.
| 0easy
|
Title: Documentation link is malformed
Body: The link to github in the Introduction section of the documentation is malformed.
| 0easy
|
Title: Only The Last Output Shows in "Agent Outputs" Panel
Body: The "Agent Outputs" panel currently only shows the final output for Agents that output several times during a single run. <br><br>This panel should instead show all the outputs of that run. | 0easy
|
Title: Clarification on enabling the comment-collection feature
Body: The comment-collection feature is gone in the new version 5.3. Has it been removed?
The old version had this feature when configured. After rolling back to the old version and setting "storage_format": "xlsx", it still warns that the storege_format parameter is not set. | 0easy
|
Title: [BUG][-] Error: cannot access local variable 'ASSEMBLY_AI_API_KEY' where it is not associated with a value
Body: **Describe the bug**
I am repeatedly getting the following error:
```[-] Error: cannot access local variable 'ASSEMBLY_AI_API_KEY' where it is not associated with a value```
I do have my ASSEMBLY_AI_API_KEY ENV variable set | 0easy
|
Title: Problem with param ml_task="regression"
Body: If I use this param, it raises the issue below for all of the models; if it is removed, the models work fine.
'<' not supported between instances of 'numpy.ndarray' and 'str'
```
Traceback (most recent call last):
  File "C:\Users\ZHENGJ\AppData\Local\Programs\Python\Python39\lib\site-packages\supervised\base_automl.py", line 1195, in _fit
    trained = self.train_model(params)
  File "C:\Users\ZHENGJ\AppData\Local\Programs\Python\Python39\lib\site-packages\supervised\base_automl.py", line 404, in train_model
    self.keep_model(mf, model_subpath)
  File "C:\Users\ZHENGJ\AppData\Local\Programs\Python\Python39\lib\site-packages\supervised\base_automl.py", line 317, in keep_model
    self.select_and_save_best()
  File "C:\Users\ZHENGJ\AppData\Local\Programs\Python\Python39\lib\site-packages\supervised\base_automl.py", line 1315, in select_and_save_best
    self._best_model = min(
TypeError: '<' not supported between instances of 'numpy.ndarray' and 'str'
``` | 0easy
|
Title: Inconsistent validation with complex number
Body: ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
A field expecting a list of floats is valid for a numpy array of complex numbers, but a field expecting a float is not valid for a complex number.
I would expect the behavior to be the same, i.e. the complex number should be a valid float.
The output of the example code below is (forget about the `ComplexWarning`):
```console
/x/python3.11/site-packages/pydantic/main.py:211: ComplexWarning: Casting complex values to real discards the imaginary part
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
Traceback (most recent call last):
File "/x/scratch_10.py", line 10, in <module>
Model(x=1j, y=array([1.]))
File "/x/python3.11/site-packages/pydantic/main.py", line 211, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for Model
x
Input should be a valid number [type=float_type, input_value=1j, input_type=complex]
For further information visit https://errors.pydantic.dev/2.9/v/float_type
```
### Example Code
```Python
from numpy import array
from pydantic import BaseModel
class Model(BaseModel):
x: float
y: list[float]
Model(x=1., y=array([1.]))
Model(x=1j, y=array([1.]))
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.0
pydantic-core version: 2.23.2
pydantic-core build: profile=release pgo=false
install path: /x/python3.11/site-packages/pydantic
python version: 3.11.10 (main, Sep 9 2024, 00:00:00) [GCC 14.2.1 20240801 (Red Hat 14.2.1-1)]
platform: Linux-6.10.9-200.fc40.x86_64-x86_64-with-glibc2.39
related packages: typing_extensions-4.12.2 typing_extensions-4.12.2
commit: unknown
```
| 0easy
|
Title: [BFCL] Get rid of legacy naming convention for LLM generated files
Body: For the BFCL when we generate the LLM output, we currently call the files `gorilla_openfunctions_v1_*`. We should get rid of this additional prefix which is vestigial. | 0easy
|
Title: UIA: Make set_value method available for EditWrapper or for any element with ValuePattern
Body: This was found in issue #590. Currently we have to use `.iface_value.SetValue("some value")`. | 0easy
|
Title: Invalid sections are not represented properly in parsing model
Body: Consider the following test suite:
```
*** Test cases ***
Test
Log 1
*** invalid section ***
Something
```
This will produce following AST:
```
File(
source='/Users/jth/Code/robotframework/tmp/foo.robot',
languages=(),
lineno=1,
col_offset=0,
end_lineno=5,
end_col_offset=23,
errors=(),
sections=[
TestCaseSection(
lineno=1,
col_offset=0,
end_lineno=5,
end_col_offset=23,
errors=(),
header=SectionHeader(lineno=1, col_offset=0, end_lineno=1, end_col_offset=18, errors=(), type='TESTCASE HEADER', tokens=(Token(TESTCASE_HEADER, '*** Test Cases ***', 1, 0),)),
body=[
TestCase(
lineno=2,
col_offset=0,
end_lineno=3,
end_col_offset=11,
errors=(),
header=TestCaseName(lineno=2, col_offset=0, end_lineno=2, end_col_offset=4, errors=(), type='TESTCASE NAME', tokens=(Token(TESTCASE_NAME, 'Test', 2, 0),)),
body=[KeywordCall(lineno=3, col_offset=3, end_lineno=3, end_col_offset=11, errors=(), type='KEYWORD', tokens=(Token(KEYWORD, 'Log', 3, 3), Token(ARGUMENT, '1', 3, 10)))],
),
Error(lineno=5, col_offset=0, end_lineno=5, end_col_offset=23, errors=("Unrecognized section header '*** invalid section ***'. Valid sections: 'Settings', 'Variables', 'Test Cases', 'Tasks', 'Keywords' and 'Comments'.",), type='ERROR', tokens=(Token(ERROR, '*** invalid section ***', 5, 0, "Unrecognized section header '*** invalid section ***'. Valid sections: 'Settings', 'Variables', 'Test Cases', 'Tasks', 'Keywords' and 'Comments'."),)),
],
),
],
)
```
In the AST, the erroneous header is put inside the TestCase body, which is a bit weird from the AST perspective.
It would be better to create a separate AST node, InvalidSection, which would contain the data of the invalid section. | 0easy
|
Title: Drop `cv2` from dev dependencies
Body: ## 🚀 Feature
The cv2 package is just used in one place at tests. So, we can figure out how to test the case without it
https://github.com/kornia/kornia/blob/216aa9d2f10f9300bffe952b1583f03b7f833be5/test/io/test_io_image.py#L4
## Motivation
reduce the number of dependencies at kornia | 0easy
|
Title: tox-docker configuration parsing broken since tox 4.0.13
Body: ## Issue
tox-docker plugin configuration parsing of `ports` is broken since tox 4.0.13.
`ports` configuration option is documented as:
> A multi-line list of port mapping specifications, as `HOST_PORT:CONTAINER_PORT/PROTO`, ...
With 4.0.12, the host port is properly set on the started container and forwarded to the container port.
But with tox 4.0.13, host port of the started container is random.
Given the short changelog for that release, I suspect it to be caused by https://github.com/tox-dev/tox/pull/2744
Same issue occurs with 4.0.15, 4.0.19, and 4.2.2.
## Environment
Provide at least:
- OS: Ubuntu 22.10
- `pip list` of the host Python where `tox` is installed:
```console
# poetry run pip list
Package Version Editable project location
------------------ ----------- -------------------------------------------------------------------------------------
attrs 21.4.0
black 22.3.0
boto3 1.23.6
botocore 1.26.6
cachetools 5.2.0
certifi 2022.5.18.1
cfgv 3.3.1
chardet 5.1.0
charset-normalizer 2.0.12
click 8.1.3
colorama 0.4.6
distlib 0.3.6
docker 5.0.3
filelock 3.9.0
flake8 4.0.1
freezegun 1.2.1
identify 2.5.1
idna 3.3
iniconfig 1.1.1
jmespath 1.0.0
mccabe 0.6.1
mock 4.0.3
mypy-extensions 0.4.3
nodeenv 1.6.0
packaging 22.0
pathspec 0.9.0
pip 22.3.1
platformdirs 2.6.2
pluggy 1.0.0
pre-commit 2.19.0
py 1.11.0
pycodestyle 2.8.0
pyflakes 2.4.0
pyparsing 3.0.9
pyproject_api 1.4.0
pytest 6.2.5
pytest-mock 3.7.0
python-dateutil 2.8.2
PyYAML 6.0
requests 2.27.1
s3transfer 0.5.2
setuptools 65.6.3
six 1.16.0
toml 0.10.2
tomli 2.0.1
tox 4.2.2
tox-docker 4.0.0a2
urllib3 1.26.9
virtualenv 20.17.1
websocket-client 1.3.2
wheel 0.38.4
```
## Output of running tox
Provide the output of `tox -rvv`:
```console
tox -rvv
/home/patrick/workspaces/myproject/.venv/lib/python3.10/site-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.9) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
py39: 298 W remove tox env folder /home/patrick/workspaces/myproject/.tox/py39 [tox/tox_env/api.py:321]
py39: 308 I find interpreter for spec PythonSpec(major=3, minor=9) [virtualenv/discovery/builtin.py:56]
py39: 308 D discover exe for PythonInfo(spec=CPython3.10.6.final.0-64, exe=/home/patrick/workspaces/myproject/.venv/bin/python, platform=linux, version='3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]', encoding_fs_io=utf-8-utf-8) in /usr [virtualenv/discovery/py_info.py:437]
py39: 308 D filesystem is case-sensitive [virtualenv/info.py:24]
py39: 310 D got python info of /usr/bin/python3.10 from /home/patrick/.local/share/virtualenv/py_info/1/8a94588eda9d64d9e9a351ab8144e55b1fabf5113b54e67dd26a8c27df0381b3.json [virtualenv/app_data/via_disk_folder.py:129]
py39: 310 I proposed PythonInfo(spec=CPython3.10.6.final.0-64, system=/usr/bin/python3.10, exe=/home/patrick/workspaces/myproject/.venv/bin/python, platform=linux, version='3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
py39: 310 D discover PATH[0]=/home/patrick/workspaces/myproject/.venv/bin [virtualenv/discovery/builtin.py:108]
py39: 311 D got python info of /home/patrick/workspaces/myproject/.venv/bin/python3 from /home/patrick/.local/share/virtualenv/py_info/1/f40e8b45bd8922e09aa5a9356903620c1975d00691ae8ec9747c29426f7f443d.json [virtualenv/app_data/via_disk_folder.py:129]
py39: 311 D discover exe from cache /usr - exact False: PythonInfo({'architecture': 64, 'base_exec_prefix': '/usr', 'base_prefix': '/usr', 'distutils_install': {}, 'exec_prefix': '/usr', 'executable': '/home/patrick/workspaces/myproject/.venv/bin/python', 'file_system_encoding': 'utf-8', 'has_venv': True, 'implementation': 'CPython', 'max_size': 9223372036854775807, 'original_executable': '/usr/bin/python3.10', 'os': 'posix', 'path': ['/home/patrick/.local/pipx/venvs/tox/lib/python3.10/site-packages/virtualenv/discovery', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/home/patrick/.local/lib/python3.10/site-packages', '/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages'], 'platform': 'linux', 'prefix': '/usr', 'real_prefix': None, 'stdout_encoding': 'utf-8', 'sysconfig': {'makefile_filename': '/usr/lib/python3.10/config-3.10-x86_64-linux-gnu/Makefile'}, 'sysconfig_paths': {'data': '{base}', 'include': '{installed_base}/include/python{py_version_short}{abiflags}', 'platlib': '{platbase}/{platlibdir}/python{py_version_short}/site-packages', 'platstdlib': '{platbase}/{platlibdir}/python{py_version_short}', 'purelib': '{base}/lib/python{py_version_short}/site-packages', 'scripts': '{base}/bin', 'stdlib': '{installed_base}/{platlibdir}/python{py_version_short}'}, 'sysconfig_scheme': 'posix_prefix', 'sysconfig_vars': {'PYTHONFRAMEWORK': '', 'abiflags': '', 'base': '/usr', 'installed_base': '/usr', 'platbase': '/usr', 'platlibdir': 'lib', 'py_version_short': '3.10'}, 'system_executable': '/usr/bin/python3.10', 'system_stdlib': '/usr/lib/python3.10', 'system_stdlib_platform': '/usr/lib/python3.10', 'version': '3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]', 'version_info': VersionInfo(major=3, minor=10, micro=6, releaselevel='final', serial=0), 'version_nodot': '310'}) [virtualenv/discovery/py_info.py:435]
py39: 311 D discover exe from cache /usr - exact False: PythonInfo({'architecture': 64, 'base_exec_prefix': '/usr', 'base_prefix': '/usr', 'distutils_install': {}, 'exec_prefix': '/usr', 'executable': '/home/patrick/workspaces/myproject/.venv/bin/python3', 'file_system_encoding': 'utf-8', 'has_venv': True, 'implementation': 'CPython', 'max_size': 9223372036854775807, 'original_executable': '/usr/bin/python3.10', 'os': 'posix', 'path': ['/home/patrick/.local/pipx/venvs/tox/lib/python3.10/site-packages/virtualenv/discovery', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/home/patrick/.local/lib/python3.10/site-packages', '/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages'], 'platform': 'linux', 'prefix': '/usr', 'real_prefix': None, 'stdout_encoding': 'utf-8', 'sysconfig': {'makefile_filename': '/usr/lib/python3.10/config-3.10-x86_64-linux-gnu/Makefile'}, 'sysconfig_paths': {'data': '{base}', 'include': '{installed_base}/include/python{py_version_short}{abiflags}', 'platlib': '{platbase}/{platlibdir}/python{py_version_short}/site-packages', 'platstdlib': '{platbase}/{platlibdir}/python{py_version_short}', 'purelib': '{base}/lib/python{py_version_short}/site-packages', 'scripts': '{base}/bin', 'stdlib': '{installed_base}/{platlibdir}/python{py_version_short}'}, 'sysconfig_scheme': 'posix_prefix', 'sysconfig_vars': {'PYTHONFRAMEWORK': '', 'abiflags': '', 'base': '/usr', 'installed_base': '/usr', 'platbase': '/usr', 'platlibdir': 'lib', 'py_version_short': '3.10'}, 'system_executable': '/usr/bin/python3.10', 'system_stdlib': '/usr/lib/python3.10', 'system_stdlib_platform': '/usr/lib/python3.10', 'version': '3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]', 'version_info': VersionInfo(major=3, minor=10, micro=6, releaselevel='final', serial=0), 'version_nodot': '310'}) [virtualenv/discovery/py_info.py:435]
```
Way too verbose to post completely.
## Minimal example
If possible, provide a minimal reproducer for the issue:
With this tox.ini:
```ini
[docker:dynamoDB]
image = amazon/dynamodb-local:latest
ports = 8000:8000/tcp
[testenv]
docker =
dynamoDB
commands =
pytest tests/ {posargs}
```
With tox 4.0.12:
```console
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3c1a6403b59 amazon/dynamodb-local:latest "java -jar DynamoDBL…" 2 seconds ago Up 1 second 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp dynamoDB-tox-364494
```
With tox 4.0.13 and higher:
```console
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4de548f795c3 amazon/dynamodb-local:latest "java -jar DynamoDBL…" 2 seconds ago Up 2 seconds 0.0.0.0:32790->8000/tcp, :::32790->8000/tcp dynamoDB-tox-363635
``` | 0easy
|
Title: tox does not work on my Windows environment due to a character-encoding mismatch
Body: ## Issue
tox does not work because it tries to read with the wrong encoding (it reads as "UTF-8" but the actual encoding is "sjis"):
```
py39: commands succeeded
lint: commands succeeded
ERROR: strictlint: undefined
Traceback (most recent call last):
File "C:\Users\username\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\username\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\repopath\.venv\Scripts\tox.exe\__main__.py", line 7, in <module>
File "C:\repopath\.venv\lib\site-packages\tox\session\__init__.py", line 44, in cmdline
main(args)
File "C:\repopath\.venv\lib\site-packages\tox\session\__init__.py", line 69, in main
exit_code = session.runcommand()
File "C:\repopath\.venv\lib\site-packages\tox\session\__init__.py", line 197, in runcommand
return self.subcommand_test()
File "C:\repopath\.venv\lib\site-packages\tox\session\__init__.py", line 225, in subcommand_test
run_sequential(self.config, self.venv_dict)
File "C:\repopath\.venv\lib\site-packages\tox\session\commands\run\sequential.py", line 9, in run_sequential
if venv.setupenv():
File "C:\repopath\.venv\lib\site-packages\tox\venv.py", line 649, in setupenv
status = self.update(action=action)
File "C:\repopath\.venv\lib\site-packages\tox\venv.py", line 282, in update
self.hook.tox_testenv_install_deps(action=action, venv=self)
File "C:\repopath\.venv\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "C:\repopath\.venv\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "C:\repopath\.venv\lib\site-packages\pluggy\_callers.py", line 60, in _multicall
return outcome.get_result()
File "C:\repopath\.venv\lib\site-packages\pluggy\_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "C:\repopath\.venv\lib\site-packages\pluggy\_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "C:\repopath\.venv\lib\site-packages\tox\venv.py", line 803, in tox_testenv_install_deps
venv._install(deps, action=action)
File "C:\repopath\.venv\lib\site-packages\tox\venv.py", line 495, in _install
self.run_install_command(packages=packages, options=options, action=action)
File "C:\repopath\.venv\lib\site-packages\tox\venv.py", line 437, in run_install_command
self._pcall(
File "C:\repopath\.venv\lib\site-packages\tox\venv.py", line 618, in _pcall
return action.popen(
File "C:\repopath\.venv\lib\site-packages\tox\action.py", line 132, in popen
lines = out_path.read_text("UTF-8").split("\n")
File "C:\repopath\.venv\lib\site-packages\py\_path\common.py", line 171, in read_text
return f.read()
File "C:\Users\username\AppData\Local\Programs\Python\Python39\lib\codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x83 in position 10466: invalid start byte
```
It worked correctly anyway after changing the encoding literal in the above stack trace
```python
lines = out_path.read_text("UTF-8").split("\n")
```
to sjis:
```python
lines = out_path.read_text("sjis").split("\n")
```
Is there any way to specify the encoding literal, or any way to make the files UTF-8 (which files?)?
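A possible direction (a sketch only, not the actual tox fix): decode with the platform's preferred encoding and tolerate undecodable bytes:
```python
import locale

out_path = "tox_install_output.log"  # placeholder for the action's log file

# Fall back to the platform encoding instead of hard-coding UTF-8,
# and never crash on undecodable bytes.
encoding = locale.getpreferredencoding(False)  # e.g. "cp932" on Japanese Windows
with open(out_path, encoding=encoding, errors="replace") as f:
    lines = f.read().split("\n")
```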
## Environment
Provide at least:
- OS: Windows 10
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
```console
```
</details>
## Minimal example
<!-- If possible, provide a minimal reproducer for the issue. -->
```console
```
| 0easy
|
Title: `--removekeywords passed` doesn't remove test setup and teardown
Body: I'm using the following command: `rebot --removekeywords passed --output removed.xml output.xml`.
**Robot Framework** and **Rebot** version is `6.1`.
- It works for `Suite Setup` and `Suite Teardown`, but it doesn't work for `Test Setup` and `Test Teardown`.

| 0easy
|
Title: mouse.wait_for_click should have a return code like the keyboard counterpart
Body: ## Classification:
Feature (New)
## Reproducibility:
Always
## Version
AutoKey version: 0.95.10
Used GUI (Gtk, Qt, or both): Gtk (Gnome)
Installed via: (PPA, pip3, …). Honestly can't remember.
Linux Distribution: Ubuntu 20.04
## Summary
`mouse.wait_for_click` does not have a return code, so there is no way to know whether a click occurred or the wait timed out.
## Steps to Reproduce (if applicable)
- grab return value from `mouse.wait_for_click()`
- log the return value, it is always `None`
## Expected Results
- similar to `keyboard.wait_...` it should have a return code.
## Actual Results
- always returns `None` | 0easy
|
Title: Loudly deprecate `[Return]` setting
Body: We added the `RETURN` statement to replace the `[Return]` setting in RF 5.0 (#4078). The reason was that the setting has various limitations explained in the aforementioned issue. At the moment the old setting still works and there's no visible deprecation warning. I believe a more loud deprecation should be added now for the following reasons:
- It is confusing for users that there is both `RETURN` and `[Return]`. Properly deprecating and eventually removing the latter simplifies the syntax.
- Syntax like this requires special handling internally in Robot. Being able to remove it simplifies the code.
- It has been already over 1.5 years since RF 5.0 was released. Users have had time to start taking `RETURN` into use.
- Although using `[Return]` will cause a deprecation warning, it will still continue to work during the whole RF 7.x lifetime. The earliest it will be removed is RF 8.0.
Notice that the [Robotidy](https://robotidy.readthedocs.io/en/stable/index.html) tool can convert `[Return]` to `RETURN` automatically. | 0easy
|
Title: Notify the user to activate MANAGE_EXTERNAL_STORAGE
Body: **Describe the enhancement you'd like**
It looks like you can only activate MANAGE_EXTERNAL_STORAGE in the settings. Notify the user to do that, if he wants to delete backed up images.
**Describe why this will benefit the LibrePhotos**
This will lead to less confusion on why the app behaves the way it does.
**Additional context**
Follow-up issue from #736 | 0easy
|
Title: Marketplace - Reduce the size of the top menu bar, change the font size & the height of the bar to 64px
Body:
### Describe your issue.
Reduce the size of the top menu bar, change the font size & the height of the bar to 64px.
The menu items should be using h4 text style
font: Poppins
font size: 20
line height: 28
Please use the styling that's on this design file: [https://www.figma.com/design/Ll8EOTAVIlNlbfOCqa1fG9/Agent-Store-V2?node-id=3364-2513&t=ZOP5k3mBOZjGK0Wv-1]
| 0easy
|
Title: Change sudo apt to sudo apt-get in the source code
Body: ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Which Linux distribution did you use?
_No response_
### Which AutoKey GUI did you use?
Both
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
The source code uses `apt` in a few places where it should use `apt-get` to ensure the use of stable code and to prevent the "apt does not have a stable cli interface" warning.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
_No response_
### What should have happened?
All references to **sudo apt** should be changed to **sudo apt-get** in these files:
* [/debian/build.sh](https://github.com/autokey/autokey/blob/develop/debian/build.sh)
* [github/workflows/python-test.yml](https://github.com/autokey/autokey/blob/develop/.github/workflows/python-test.yml)
* [.github/workflows/build.yml](https://github.com/autokey/autokey/blob/develop/.github/workflows/build.yml)
* [INSTALL](https://github.com/autokey/autokey/blob/develop/INSTALL)
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | 0easy
|
Title: client_session: <aiohttp.client.ClientSession object at 0x7f3307f050d0>
Body: Hello, when I run clairvoyance on some targets like this (https://ctm-cssit-ps3-dmz.us.dell.com/) I get an error. How can I fix it?
The error:
```console
root@aliwjpi:# clairvoyance "https://ctm-cssit-ps3-dmz.us.dell.com/graphql" -o schema.json -c 1
2023-04-21 05:12:24 INFO | Starting blind introspection on https://ctm-cssit-ps3-dmz.us.dell.com/graphql...
2023-04-21 05:12:24 INFO | Iteration 1
Traceback (most recent call last):
  File "/usr/local/bin/clairvoyance", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.8/dist-packages/clairvoyance/cli.py", line 142, in cli
    asyncio.run(
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/dist-packages/clairvoyance/cli.py", line 89, in blind_introspection
    schema = await oracle.clairvoyance(
  File "/usr/local/lib/python3.8/dist-packages/clairvoyance/oracle.py", line 568, in clairvoyance
    typename = await probe_typename(input_document)
  File "/usr/local/lib/python3.8/dist-packages/clairvoyance/oracle.py", line 487, in probe_typename
    return (match.group('typename').replace('[', '').replace(']', '').replace('!', ''))
AttributeError: 'NoneType' object has no attribute 'group'
2023-04-21 05:12:25 ERROR | Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f651f7840d0>
```
| 0easy
|
Title: Logging unexecuted keyword has unnecessary overhead if keywords are not found
Body: In an initial effort to make my testing repo compatible with both SeleniumLibrary and Browser, I added some conditional logic to run specific keywords based on the library chosen at test execution. More specifically, I used a global variable ${AUTOMATION}, declared in an argument file, which determines the library used here.
Basic example:
```
IF '${AUTOMATION}' == 'SeleniumLibrary'
SeleniumLibrary.Keyword_1
SeleniumLibrary.Keyword_2
SeleniumLibrary.Keyword_3
ELSE IF '${AUTOMATION}' == 'Browser'
Browser.Keyword_1
Browser.Keyword_2
Browser.Keyword_3
END
```
Initial results with a converted test showed degraded performance. If SeleniumLibrary is used, the Browser keywords are not run, but RF appears to hang on the unmet 'ELSE IF' condition for several seconds in this example:

After some discussion in Slack, I believe it's been confirmed that the root cause is due to the Browser library not being imported, which is potentially causing some overhead on the RF side. When Browser library is not imported, I get the slowness, but if I import Browser, that slowness is eliminated and I get a much more expected result:

Reproducible in both RF 6.1.1 and RF 7.0.
Python 3.10.12
Ubuntu 22.04.3
| 0easy
|
Title: Input: add support for disabled Attribute
Body: As the title suggests, it would be really nice to add the attribute disabled for input. Maybe also to other components (button etc) | 0easy
|
Title: [new] `are_arrays_equal(array1, array2)`
Body: Convert arrays to JSON strings before comparing them, as indicated here:
https://stackoverflow.com/questions/68398759/check-if-two-arrays-are-exactly-the-same-in-bigquery-merge-statement | 0easy
|
Title: `nest_joins` only works if `join_prefix` is set
Body: **Describe the bug or question**
Attributes from a joined model are not nested if no `join_prefix` is set
**To Reproduce**
Consider two simple models, `Hero` and `Ability`
```python
class Ability(Base, UUIDMixin, TimestampMixin, SoftDeleteMixin):
__tablename__ = "abilities"
name: Mapped[str] = mapped_column(nullable=False)
strength: Mapped[int] = mapped_column(nullable=False)
heroes: Mapped[list["Hero"]] = relationship(back_populates="ability")
class Hero(Base, UUIDMixin, TimestampMixin, SoftDeleteMixin):
__tablename__ = "heroes"
name: Mapped[str] = mapped_column(nullable=False)
ability_id: Mapped[int] = mapped_column(ForeignKey("abilities.id"))
ability: Mapped["Ability"] = relationship(back_populates="heroes")
```
Then, this code does not work as expected:
```python
heroes = await crud_hero.get_multi_joined(db, join_model=Ability, nest_joins=True)
```
Returns this (unrelated columns are removed):
```python
{
"data": [
{
"name": "Diana",
"ability_id": UUID("6e52176e-8a92-4a8d-b0b3-1fcd55acc666"),
"id": UUID("8212bccb-ce20-489a-a675-45772ad60eb8"),
"name_1": "Superstrength",
"strength": 10,
"id_1": UUID("6e52176e-8a92-4a8d-b0b3-1fcd55acc666"),
},
],
"total_count": 2,
}
```
When adding a prefix, it (kinda) works:
```python
heroes = await crud_hero.get_multi_joined(db, join_model=Ability, join_prefix="ability_", nest_joins=True)
```
Result is looking better
```python
{
"data": [
{
"name": "Diana",
"ability": {
"id": UUID("6e52176e-8a92-4a8d-b0b3-1fcd55acc666"),
"name": "Superstrength",
"strength": 10,
"id_1": UUID("6e52176e-8a92-4a8d-b0b3-1fcd55acc666"),
},
"id": UUID("8212bccb-ce20-489a-a675-45772ad60eb8"),
},
],
"total_count": 2,
}
```
| 0easy
|
Title: Improve installation instructions / debugging UX
Body: * Improve readme
* Add warning if lux-widget extension is not installed.
* Check if extension is enabled.
* Add a `lux.debug_info` function that returns relevant package versions. | 0easy
|
Title: pyproject.toml uses non-public __legacy__ setuptools backend
Body: * gevent version: git (as of beb98ce4742181caaa6cec9cedecaaa2a04f910b)
* Python version: n/a
* Operating System: Gentoo Linux
### Description:
Per https://github.com/pypa/setuptools/issues/1689:
> `setuptools.build_meta:__legacy__` was intended _only_ as a default and was not intended to be specified as the build backend.
Please fix the build system to work correctly with the `setuptools.build_meta`. The explicit use of legacy backend is no longer supported on Gentoo and prevents us from packaging the new gevent version. | 0easy
|
Title: `WoEEncoder` should return a list with all the variables that have 0 in the denominator of the WoE
Body: At the moment, the transformer fails when it encounters one variable with 0 in the denominator of the WoE formula. We would like it to assess and raise an error with all the variables that show this behaviour. | 0easy
|
Title: MACD Wilders
Body: Howdy. I notice you have a MACD indicator here... are there instructions somewhere on how to change it to use Wilder's smoothing? I think by default it is simple or exponential. I was wondering if there is a Wilder's version that could be used, or if we can modify macd.py to use Wilder's smoothing. Thanks! | 0easy
|
Title: Test Gunicorn Worker
Body: The idea here is to create a setup to test the `gunicorn` worker classes that we have.
I don't know the best way to achieve this, as the [`gunicorn` test suite](https://github.com/benoitc/gunicorn/tree/master/tests) didn't give me much insight on how we can test this integration without `unittest.mock`. Suggestions are welcome.
If you are interested on working on this, go ahead. Some clarifications:
1. Avoid integration kind of setup. I want to keep it simple.
2. Avoid `unittest.mock` - if inevitable, explain the reason.
3. Add test coverage up to 100% on the `workers.py`.
## Notes
When doing this, please change those lines: https://github.com/encode/uvicorn/blob/eec7d22ecc0c8926feb365ac7708918e32725f4f/setup.cfg#L39-L43
To something more like: https://github.com/encode/starlette/blob/048643adc21e75b668567fc6bcdd3650b89044ea/setup.cfg#L41-L42
| 0easy
|
Title: Issue when commuting moments
Body: **Description of the issue**
Cirq gives the wrong result when determining whether moments commute if the individual operators within the moments do not commute, but the moment as a whole still commutes. For example, take a pair of single qubit Z gates and a two qubit RXX gate. While [Z, RXX] !=0 (for either single qubit Z), we do in fact have [Z * Z, RXX] = 0. However, when using cirq we have
```
import cirq
qubits = cirq.LineQubit.range(2)
moment_1 = cirq.Moment([cirq.Z(qubits[0]), cirq.Z(qubits[1])])
moment_2 = cirq.Moment([cirq.XXPowGate(exponent=1 / 2)(*qubits)])
print(cirq.commutes(moment_1, moment_2))
```
Gives `False`.
Looking through the source code, this seems to be because the `commutes()` function compares pairs of operators from each moment, but doesn't consider the case where the moment as a whole commutes even when the individual operators do not.
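A quick numpy check of the claim (a standalone sketch, not cirq code): since Z⊗Z commutes with the X⊗X generator, it commutes with any power of the XX gate.
```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
ZZ = np.kron(Z, Z)
XX = np.kron(X, X)

# [Z⊗Z, X⊗X] = 0, hence Z⊗Z also commutes with exp(i*t*X⊗X), i.e. XX**t.
print(np.allclose(ZZ @ XX, XX @ ZZ))  # True
```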
**How to reproduce the issue**
See above
**Cirq version**
You can get the cirq version by printing `cirq.__version__`. From the command line:
1.5.0
| 0easy
|
Title: Feature: Allow broker creation in on_startup hook
Body: For some reason, I need to delay the creation of an app's broker in an `on_startup` hook. This is mainly because my application has a pretty heavy warmup bootstrap that loads a lot of data into memory so I need to delay this bootstrap (which actually includes broker creation) after the parent process forked into worker subprocesses, otherwise the parent process has an unnecessary high memory footprint (that only worker processes need to have).
E.g.
```python
from faststream import FastStream
from faststream.kafka import KafkaBroker
app = FastStream()
@app.on_startup
def bootstrap():
app.set_broker(KafkaBroker(["localhost:9092"]))
# some warmup of the application pre-loading data into memory
...
```
For now, this isn't working because of this assert https://github.com/airtai/faststream/blob/9b7c33e8765485cf28f850cb485afcd838c12079/faststream/app.py#L40 which occurs at the very beginning of the workers' run, before `on_startup` hooks are executed.
The example above is overly simplified and one could argue that broker could have been created outside the `bootstrap()` function, but, as per example at https://faststream.airt.ai/latest/getting-started/lifespan/hooks/#lifespan, it could be legitimate to create it based on environment setting specifying not only the broker host but also the kind of broker to use (nats, kafka, ...).
I currently solve the problem by providing an *empty* `KafkaBroker` at app instantiation (`app = FastStream(broker=KafkaBroker())`) before overriding it using `app.set_broker()` in the `bootstrap()` function.
However, as per the base `Application` constructor at https://github.com/airtai/faststream/blob/9b7c33e8765485cf28f850cb485afcd838c12079/faststream/_internal/application.py#L48-L70 which allows application instantiation without a defined broker, and this comment in `Application.set_broker()` https://github.com/airtai/faststream/blob/9b7c33e8765485cf28f850cb485afcd838c12079/faststream/_internal/application.py#L117-L120 which claims to be used for creation/init in `on_startup`, I would have expected my use case to be legitimate.
As a consequence, should this assert be removed, moved after startup hooks execution (at https://github.com/airtai/faststream/blob/9b7c33e8765485cf28f850cb485afcd838c12079/faststream/_internal/application.py#L168-L175) or just replaced by a warning when broker is `None` at https://github.com/airtai/faststream/blob/9b7c33e8765485cf28f850cb485afcd838c12079/faststream/_internal/application.py#L176-L177 | 0easy
|
Title: Confusing AttributeError raised when elements can't be found
Body: Splinter raises some hard to diagnose errors like: `AttributeError: 'ElementList' object has no attribute 'click'`
This is caused by the following code:
```
element_list = browser.find_by_id("#non-existing")
element_list.click()
```
This happens because the `ElementDoesNotExist` error is masked in `ElementList.__getattr__()`:
```
def __getattr__(self, name):
try:
return getattr(self.first, name)
except (ElementDoesNotExist, AttributeError):
raise AttributeError(
u"'{0}' object has no attribute '{1}'".format(
self.__class__.__name__, name
)
)
```
Simply removing `except ElementDoesNotExist` would fix this issue | 0easy
|
Title: Bug: Long texts length is not matching output area in Jupyter
Body:
## Description
Currently, using long texts with animations in Jupyter results in the output wrapping to the next line. A fix would be to give it a fixed layout width, or to get the width of the display area while creating frames.
### System settings
- Operating System: Mac OS
- Terminal in use: NA
- Python version: 2.7.14
- Halo version: HEAD
- `pip freeze` output: NA
### Error
Output wraps in display area for Jupyter notebook when long texts are used.
### Expected behaviour
Output should take width of display area before creating frames.
## Steps to recreate

## People to notify
@JungWinter @ManrajGrover
| 0easy
|
Title: [k8s] Local K8s cluster doesn't work with GPU models containing numbers only.
Body: I followed the troubleshooting guides to check GPU support:
https://docs.skypilot.co/en/latest/reference/kubernetes/kubernetes-troubleshooting.html#checking-gpu-support
- Step B0 - Is your cluster GPU-enabled? ✅
```
llc@LLC:~$ kubectl describe nodes
Name: ai-dev
...
Capacity:
...
nvidia.com/gpu: 1
...
```
- Step B1 - Can you run a GPU pod? ✅
```
llc@LLC:~$ kubectl logs skygputest
Sun Jan 26 07:00:53 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.05 Driver Version: 560.35.05 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 Off | Off |
| 30% 32C P8 13W / 450W | 2MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
```
- Step B2 - Are your nodes labeled correctly? ✅
```
llc@LLC:~$ kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, labels: .metadata.labels}'
...
skypilot.co/accelerator=4090
...
```
- Step B3 - Can SkyPilot see your GPUs? ✅
```
llc@LLC:~$ sky show-gpus --cloud k8s
Kubernetes GPUs (context: default)
GPU REQUESTABLE_QTY_PER_NODE TOTAL_GPUS TOTAL_FREE_GPUS
4090 1 1 1
Kubernetes per node accelerator availability
NODE_NAME GPU_NAME TOTAL_GPUS FREE_GPUS
ai-dev 4090 1 1
```
- Step B4 - Try launching a dummy GPU task ❌
```
llc@LLC:~$ sky launch -y -c mygpucluster --cloud k8s --gpus 4090:1 -- "nvidia-smi"
Task from command: nvidia-smi
No resource satisfying Kubernetes({'4090': 1}) on Kubernetes.
sky.exceptions.ResourcesUnavailableError: Kubernetes cluster does not contain any instances satisfying the request: 1x Kubernetes({'4090': 1}).
To fix: relax or change the resource requirements.
Hint: sky show-gpus to list available accelerators.
sky check to check the enabled clouds.
``` | 0easy
|
Title: Improvements to PosTagVisualizer
Body: In #768 we rebooted the part-of-speech tag visualizer to make it more practical for NLP machine learning tasks. Here are a few ideas for follow-up features that could be added:
- [x] optional ordering of the plot by tag count (similar to a frequency distribution). _Note: this should NOT be the default behavior since it could prevent users from being able to compare across plots or across corpora_
- [x] optional stacked barchart for each category/class if `y` is supplied
- [ ] option for NER tags with entity recognition (or maybe this should be another visualizer with a shared base class?)
- [x] optional parsing of raw text (only if nltk or spacy is installed) | 0easy
|
Title: Strict mode (explicit syntax)
Body: As we know from [Python-mode vs Subprocess-mode](https://xon.sh/tutorial.html#python-mode-vs-subprocess-mode):
> Take the case of `ls -l`. This is valid Python code, though it could have also been written as `ls - l` or `ls-l`. So how does xonsh know that `ls -l` is meant to be run in subprocess-mode?
> ...
> The determination between Python- and subprocess-modes is always done in the safest possible way. If anything goes wrong, it will favor Python-mode. The determination between the two modes is done well ahead of any execution. You do not need to worry about partially executed commands - that is impossible.
> If you would like to explicitly run a subprocess command, you can always use the formal xonsh subprocess syntax that we will see in the following sections. For example: `![ls -l]`.
I would like to suggest adding a `XONSH_STRICT_MODE` (or `XONSH_EXPLICIT_SYNTAX`) variable to switch the parser into a mode where any command is interpreted as Python if there is no explicit subprocess syntax. In this mode `ls -l` will always be Python's `ls` variable minus the `l` variable, and only `![ls -l]` will run a subprocess command.
This will help avoid mistakes when writing scripts, and gives an additional argument to users who have concerns around this.
Expected behavior:
```python
echo 1
# 1
$XONSH_STRICT_MODE = True
echo 1
# "NameError: name 'echo' is not defined" OR "SyntaxError: invalid syntax"
![echo 1]
# 1
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: [BUG] dataset link is broken
Body: I tried to run the example [here](https://unit8co.github.io/darts/examples/14-transfer-learning.html) but the dataset link is broken. Could you fix the link? Thank you!
~~~~
!curl -L https://github.com/unit8co/amld2022-forecasting-and-metalearning/blob/main/data/m3_dataset.xls\?raw\=true -o m3_dataset.xls
!curl -L https://github.com/unit8co/amld2022-forecasting-and-metalearning/blob/main/data/passengers.pkl\?raw\=true -o passengers.pkl
!curl -L https://github.com/unit8co/amld2022-forecasting-and-metalearning/blob/main/data/m4_monthly_scaled.pkl\?raw\=true -o m4_monthly_scaled.pkl
~~~~ | 0easy
|
Title: improve error message when pipeline.yaml does not exist
Body: If a user runs any ploomber command and passes `-e pipeline.yaml` in a directory that doesn't have such a file, the error isn't very clear:
```sh
ploomber status -e pipeline.yaml
```
```pytb
Traceback (most recent call last):
File "/Users/Edu/dev/ploomber/src/ploomber/cli/io.py", line 34, in wrapper
fn(**kwargs)
File "/Users/Edu/dev/ploomber/src/ploomber/cli/status.py", line 15, in main
dag, args = parser.load_from_entry_point_arg()
File "/Users/Edu/dev/ploomber/src/ploomber/cli/parsers.py", line 213, in load_from_entry_point_arg
entry_point = EntryPoint(self.parse_entry_point_value())
File "/Users/Edu/dev/ploomber/src/ploomber/entrypoint.py", line 19, in __init__
self.type = find_entry_point_type(value)
File "/Users/Edu/dev/ploomber/src/ploomber/entrypoint.py", line 65, in find_entry_point_type
raise ValueError(
ValueError: Could not determine the entry point type from value: 'pipeline.yaml'. Expected an existing file with extension .yaml or .yml, existing directory, glob-like pattern (i.e., *.py) or dotted path (i.e., module.sub_module.factory_function). Verify your input.
```
It'd be better to check if the argument "looks like" a path to a yaml file, and if so, say that the path doesn't exist. If it doesn't look like a path to a yaml, print the default error message.
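A rough sketch of such a check (the helper names and the message are hypothetical, not the actual ploomber API):
```python
from pathlib import Path

def looks_like_yaml_path(value: str) -> bool:
    # Heuristic: anything ending in .yaml/.yml is meant to be a spec file
    return value.endswith(('.yaml', '.yml'))

def validate_entry_point(value: str) -> None:
    if looks_like_yaml_path(value) and not Path(value).exists():
        raise ValueError(f'Expected {value!r} to be an existing YAML spec '
                         'file, but the path does not exist.')
```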
| 0easy
|
Title: WHILE and TRY content are not removed with `--removekeywords all`
Body: Older FOR and IF structures are handled properly, but enhancing `--removekeywords` was apparently forgotten when WHILE and TRY were introduced. Other control structures cannot have body in data, but it's possible that e.g. listeners log something that ends up inside them and thus they have a body in the result model. Need to think do we need to care about that and clear them as well. | 0easy
|
Title: `Log Variables` should not consume iterables
Body: Found in Robot Framework 6.1.1 and 7.0rc2
Python 3.11
Given this example:
```
Cycle Test
${TEST_LIST} Create List beginning 1 2 3 4 end
${CYCLE_TEST} Evaluate itertools.cycle($TEST_LIST)
Set Test Variable ${CYCLE_TEST}
Log Variables
```
When executing this case
Then Robot Framework will pause for a while and eventually throw a Memory Error.
This might be an uncommon situation, but it would be nice to handle iterator objects gracefully.
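For example, a guard along these lines (a sketch, not the actual Robot Framework internals) could log a placeholder instead of iterating:
```python
from collections.abc import Iterator

def format_for_log(value):
    # Iterating an Iterator (e.g. itertools.cycle) consumes it and may
    # never terminate, so report only its type instead of its items.
    if isinstance(value, Iterator):
        return f'<iterator {type(value).__name__}, not consumed>'
    return repr(value)
```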
My current workaround is to avoid assigning the itertools.cycle() to a RF variable and handle everything within a library. | 0easy
|
Title: Add metatags for into the pages docs
Body: ## 📚 Documentation
Following the instructions from https://docs.readthedocs.io/en/stable/guides/technical-docs-seo-guide.html#use-meta-tags
we should add at least a description for the top page of each kornia module. This will help SEO.
- [ ] Augmentations
- [ ] Color
- [ ] Features
- [ ] Filters
- [ ] Contrib
- [ ] Geometry
- [ ] IO
- [ ] morphology
- [ ] enhance
- [ ] sensors
- [ ] nerf
- [ ] x
etc | 0easy
|
Title: Correct price reporting for GPT-4
Body: Nothing accounts for GPT-4 pricing right now, need to add GPT-4 pricing. | 0easy
|
Title: Some link formats cannot be parsed
Body: A link that cannot be parsed:
`https://www.xiaohongshu.com/explore/66bf0663000000001e019e8c`
Without the `xsec_token` parameter, it cannot be parsed.

A link that can be parsed:
`https://www.xiaohongshu.com/explore/66bf0663000000001e019e8c?xsec_token=ABzM15091n9KRCCt45q0u13KXhRP3eAQH3Afd263wArTo=`
It carries the `xsec_token` parameter and can be parsed.

| 0easy
|
Title: Pipeline can't be run with env file having .yml suffix
Body: ### Description
Pipeline can't be run with `env.yml` file. It seems it won't get read at all since variables stated in the `env.yml` are unknown to the pipeline run. Renaming file to `env.yaml` solved the issue for me.
### Replication
Files for replication of minimal example are attached [here](https://github.com/ploomber/projects/files/8303640/test.zip).
### Task
It would be nice to not be dependent on the suffix. Adjust the code, so it treats `.yaml` and `.yml` suffixes of `env` file equally. | 0easy
|
Title: Remove support for dupefilters without a fingerprinter
Body: Deprecated in 2.7.0. | 0easy
|
Title: Improve docs regarding error reporting under asgi; add FAQ item
Body: Hello, I feel like i am missing something because falcon/uvicorn doesn't show any error traceback when there's a server error. Why is that? Just to note, i've used Flask for almost five years and am now migrating to falcon, so i have never used before uvicorn/falcon.
```python
import falcon
import falcon.asgi
class ThingsResource:
async def on_get(self, req, resp):
raise ValueError('foo')
app = falcon.asgi.App()
things = ThingsResource()
app.add_route('/things', things)
```
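For context, a catch-all error handler can surface the traceback; here is a minimal sketch (assuming Falcon 3.x's documented `add_error_handler` API; the logger name is arbitrary):
```python
import logging

import falcon
import falcon.asgi

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger('app.errors')

async def log_unhandled(req, resp, ex, params):
    # Log the full traceback, then respond with a plain 500.
    logger.error('Unhandled error on %s %s', req.method, req.path, exc_info=ex)
    raise falcon.HTTPInternalServerError()

app = falcon.asgi.App()
app.add_error_handler(Exception, log_unhandled)
```
Running the bare app without such a handler: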
```
uvicorn main:app --reload --log-level debug
```
```
INFO: Will watch for changes in these directories: ['falcon-server']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [6088] using StatReload
INFO: Started server process [15700]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:59567 - "GET /things HTTP/1.1" 500 Internal Server Error # No traceback.
``` | 0easy
|
Title: DAGMapping should ignore non-notebook tasks
Body: DAGMapping currently creates a map of all tasks but it should only do so for notebook/script tasks. e.g., ignore tasks whose source is a function or a SQL script:
Essentially we want to move the `pairs` iteration logic inside the `DAGMapping` constructor:
https://github.com/ploomber/ploomber/blob/c87c60db954f72309b1e421a5399f9f6a426e5fe/src/ploomber/jupyter/manager.py#L185
then add a condition to such iteration logic to ignore tasks if they're not of the `ploomber.tasks.NotebookRunner` class | 0easy
|
Title: [DOC] API Documentation for Chemistry functions is not standardized
Body: # Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
Currently, the docs do not have a standard docstring format for functions. Some docstrings contain only `Parameters`. Some have `Parameters` and `Method chaining usage`. Some have a combination of `Parameters`, `Returns`, `Functional usage example`, and `Method chaining example`, but not every one of these pieces.
I would like to propose a change, such that the docs contain a **standardized** docstring suite. All functions should contain (at a minimum) the following:
- `Parameters`
- `Returns`
- `Functional usage example`
- `Method chaining example`
**NOTE**: This can be done for all functions within the `janitor` directory. For ease of review, this will focus on the `chemistry.py` file and move to other files/functions as time permits.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/chemistry.html)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/chemistry.py) | 0easy
|
Title: Historical versions fixing `tensorflow-gpu`
Body: **Current State:** With the recent change to `tensorflow-gpu` in version 2.12, there are now issues with installing prior versions of the `dataprofiler` package with `pip`.
**Desired State**: ability to resolve prior versions of the Data Profiler such that the `tensorflow-gpu` requirement in `requirements-ml.txt` is set to `tensorflow-gpu<=2.11.0`. | 0easy
|
Title: [Examples] Move pip installs to uv for faster setup
Body: Move our examples that has long setup time to use `uv` to reduce cold start time. | 0easy
|
Title: [new] `remove_value(arr, value)`
Body: returns an array with all values except value.
Get inspired from https://stackoverflow.com/questions/68580397/bigquery-udf-remove-from-array-how-do-i-anonymously-reference-an-element-in-a-s | 0easy
|
Title: fix pygraphviz installation issue
Body: See https://github.com/ploomber/ploomber/issues/538
recent versions of pygraphviz only work with python 3.8 and higher, however, they didn't pin the dependency and raise an error instead. we should find what's the most recent version that works with python 3.7 and lower, and pin it. But do not pin it if running >=3.8 (there's a way to specify these rules in setup.py) | 0easy
|
Title: Add support for Ordinal Classification/Regression tasks
Body: This is fairly straightforward extension of the current cleanlab functionality. Contributors welcomed! | 0easy
|
Title: bug: The output from the UpdateTable command does not include the SSEDescription
Body: ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The output from the UpdateTable command does not include the SSEDescription for a table that was created with SSE enabled.
### Expected Behavior
The output from the UpdateTable command should include the SSEDescription when the table has it enabled.
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```bash
# launch localstack
# create a table with KMS
awslocal dynamodb create-table --table-name Example --attribute-definitions AttributeName=key,AttributeType=S --key-schema AttributeName=key,KeyType=HASH --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 --sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=some-kms
# describe the table to see that it is KMS enabled
awslocal dynamodb describe-table --table-name Example
# update the table to PAY_PER_REQUEST
awslocal dynamodb update-table --table-name Example --billing-mode PAY_PER_REQUEST
# ^ the output from the update-table command does not include the SSEDescription
# sanity check to see the SSEDescription
awslocal dynamodb describe-table --table-name Example
```
### Environment
```markdown
LocalStack version: 3.8.2.dev70
LocalStack build date: 2024-10-31
LocalStack build git hash: 241bfbb8d
```
### Anything else?
_No response_ | 0easy
|
Title: Synchronous spinner
Body: Support synchronous spinners which switch frames upon explicit request from a working thread like.
```python
with Halo(spinner='dots') as spinner:
for f in files_to_load():
spinner.next(text="Loading %s" %f) # or .step(), or .tick(), or…
do_load_file(f)
```
This can (as far as I understand) be achieved with `.frame()`, but it isn't as concise as the above. Possibly the spinner should not render a new frame more often than the configured interval. | 0easy
|
Title: Marketplace - creator page - change font of creator's user name
Body: ### Describe your issue.
Please change font to the "lead" style in the typography sheet linked: https://www.figma.com/design/Ll8EOTAVIlNlbfOCqa1fG9/Agent-Store-V2?node-id=2759-9596&t=2JI1c3X9fIXeTTbE-1
Currently this section is using PP Neue Montreal TT instead of Geist
<img width="499" alt="Screenshot 2024-12-16 at 20 35 19" src="https://github.com/user-attachments/assets/f0918018-15e0-4df7-a512-d911a1511496" />
### Upload Activity Log Content
_No response_
### Upload Error Log Content
_No response_ | 0easy
|
Title: Enhancement: Be able to create new entity cloning existing one
Body: **Is your feature request related to a problem? Please describe.**
We need to create new records, just like existing ones but 3 or 4 fields need to be changed. It is very tedious to create a new entity and type everything (copy/paste). Especially in non-normalized legacy tables.
**Describe the solution you'd like**
I would like to be able to create a new entity/record, based on an existing one. I would like to just edit the fields that need to be changed.
**Describe alternatives you've considered**
Maybe one more icon, just like the "eye" and the "pencil", which then takes me to a CREATE screen, pre-populated with the cloned values?
Maybe select 1 with the checkbox and choose an action at the top, similar to delete? This approach is tacky because the user could have chosen N records, and this particular action only applies to 1 record. Maybe take the first one selected?
Maybe the VIEW screen could have a CLONE button? It would open a new screen where I would be editing a new entity, populated with the same values. Just the ID field would be different, of course.
**Additional context**
After having typed all options above, I think the most elegant is to have a CLONE button in the VIEW page. Perhaps between EDIT and DELETE.
| 0easy
|
Title: Examples for documentation
Body: We need some new examples, they are in the [projects](https://github.com/ploomber/projects) but I'm linking them here for exposure:
1. https://github.com/ploomber/projects/issues/13
2. https://github.com/ploomber/projects/issues/12
3. https://github.com/ploomber/projects/issues/11
4. https://github.com/ploomber/projects/issues/15
For details: https://github.com/ploomber/projects/blob/master/CONTRIBUTING.md
*Note: these issues require some familiarity with Ploomber* | 0easy
|
Title: Add type information to the visitor API
Body: Visitor methods currently look like
```python
class SuiteVisitor:
def visit_suite(self, suite):
...
```
but we should change them to
```python
from robot.model import TestSuite
class SuiteVisitor:
def visit_suite(self, suite: TestSuite):
...
```
The main benefit is getting automatic completion by IDEs.
Because visitors can be used both with `robot.running` and `robot.result` models, the documentation should mention that concrete visitor implementations may want to import more detailed types:
```python
from robot.api import SuiteVisitor
from robot.result import TestCase
class TestTimePrinter(SuiteVisitor):
    def visit_test(self, test: TestCase):
print(f'{test.name}: {test.elapsedtime}')
```
This ought to be so easy that can be easily done in RF 6.1. Adding needed imports to the top of the module is likely to create a circular import, but we can avoid that by using `if TYPE_CHECKING`. | 0easy
|
Title: Add logging tests
Body: This problem (https://github.com/wemake-services/wemake-django-template/commit/53ae125a0e2eed88f7b36a5edd07591ba0401b24) identified that our logging configuration was and is not tested at all.
We need to add tests for it:
- We need to test regular logging format
- We need to test exception logging format (see the sketch below)
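A rough sketch of both tests (using pytest's `caplog` fixture; the `django` logger name is an assumption, not necessarily our real configuration):
```python
import logging

def test_regular_logging_format(caplog):
    with caplog.at_level(logging.INFO, logger='django'):
        logging.getLogger('django').info('checking format')
    assert caplog.messages == ['checking format']

def test_exception_logging_format(caplog):
    logger = logging.getLogger('django')
    try:
        raise ValueError('boom')
    except ValueError:
        logger.exception('request failed')
    assert caplog.records[-1].exc_info is not None
```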
Please, feel free to send PRs :) | 0easy
|
Title: Docstring of `color.hsv_value` is incomplete
Body: ## Description
`color.hsv_value` is incomplete:
https://github.com/scikit-image/scikit-image/blob/09577eddf2ce0376b25b96291599af313adb1c2d/skimage/color/adapt_rgb.py#L48-L59
`args` and `kwargs` should be documented as well as the return value. It could also be made clearer that `image` expects an RGB(A) image. | 0easy
|
Title: alma doesnt apply distribution_offset
Body: when i set offcet still the column name is 0.85
```python
length=30
distribution_offset=0.5
MyStrategy = ta.Strategy(
name="test",
ta=[
{"kind": "alma", "length": length, "distribution_offset":distribution_offset},
]
)
df.ta.study(MyStrategy)
```

| 0easy
|
Title: `decomposition.umap_reconstruction.UMAPOutlierDetection`
Body: | 0easy
|
Title: Improve property-based test for near-duplicate sets
Body: Property-based tests for near-duplicate sets are randomly failing in CI, when some health-checks don't pass for generated data.
# Stack trace
<!-- If applicable, please include a full stack trace here. If you need to omit
the bottom of the stack trace (e.g. it includes stack frames from your private
code), that is okay. Try to include all cleanlab stack frames. -->
Every so often, CI randomly fails a test with this error:
```
FAILED tests/datalab/issue_manager/test_duplicate.py::TestNearDuplicateSets::test_near_duplicate_sets_empty_if_no_issue_next - hypothesis.errors.FailedHealthCheck: Examples routinely exceeded the max allowable size. (20 examples overran while generating 8 valid ones). Generating examples this large will usually lead to bad results. You could try setting max_size parameters on your collections and turning max_leaves down on recursive() calls.
See https://hypothesis.readthedocs.io/en/latest/healthchecks.html for more information about this. If you want to disable just this health check, add HealthCheck.data_too_large to the suppress_health_check settings for this test.
```
The way the issue manager is constructed in this test rarely passes the health check. It's failing on unrelated PRs, slowing development down.
A temporary fix was to ignore the health check (suppressing the HealthCheck.data_too_large flag). That's not advisable in the long term, so investigating how to improve the data generation will be a great help!
# Task
Improve the way Hypothesis generates the data for the affected test.
## Update
In https://github.com/cleanlab/cleanlab/pull/902/commits/0f36966ef4246836224afe92a5ab00d91f2d2b5c, the health-check in question has been suppressed. So when working on this issue, remember to remove the `HealthCheck.data_too_large` from `suppress_health_check` and make sure we can scale to more examples without issues.
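One direction is to cap the generated collection sizes explicitly; a minimal sketch (the shapes here are illustrative, not the test's real strategy):
```python
import numpy as np
from hypothesis import given, settings, strategies as st
from hypothesis.extra.numpy import arrays

# Cap both dimensions so examples stay well below the health-check limit.
small_embeddings = arrays(
    dtype=np.float64,
    shape=st.tuples(st.integers(5, 30), st.integers(2, 8)),
    elements=st.floats(-1.0, 1.0, allow_nan=False),
)

@given(embeddings=small_embeddings)
@settings(max_examples=25)
def test_near_duplicate_sets_empty_if_no_issue_next(embeddings):
    ...  # build the issue manager from `embeddings` as the real test does
```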
| 0easy
|
Title: Update Jupyter notebook examples
Body: With `v0.3.0`, the Jupyter notebook examples are now outdated. It should be trivial to change the examples to the `v0.3.0` format. | 0easy
|
Title: add RandomizedSearchCV params for regressors
Body: | 0easy
|
Title: Back navigation does not work properly in HTML outputs (log, report, Libdoc)
Body: We are just Upgrading from 3.x to 6.0.2.
Getting back to report.html from log.html requires 3 clicks on the Back button. This is quite irritating.
Steps to reproduce:
1) from testreport.html, open a testcase. This takes to log.html, url ends like "log.html#s1-s2-s29-t6".
2) click Back. Still in log.html. url ends like "log.html#"
3) click Back. Still in log.html. url ends like "log.html#s1-s2-s29-t6"
4) click Back. Only now are we back in report.html.
| 0easy
|
Title: SQLAlchemy + Flask Tutorial
Body: Following the tutorial provided at https://docs.graphene-python.org/projects/sqlalchemy/en/latest/tutorial/
Using Python 2.7, I ran into an error:
AssertionError: Found different types with the same name in the schema: EmployeeConnection, EmployeeConnection.
Upon investigation, I found the following resource which suggested renaming:
https://github.com/graphql-python/graphene-sqlalchemy/issues/153
Renaming with a similar pattern of EmployeeNode and DepartmentNode fixed the issue. I have attached the new schema.py as a text file here for reference.
[schema.txt](https://github.com/graphql-python/graphene/files/2870189/schema.txt)
Does the tutorial need to be updated to fix this issue?
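For reference, the renaming that fixed it for me looks roughly like this (a sketch; module and model names follow the tutorial):
```python
from graphene import relay
from graphene_sqlalchemy import SQLAlchemyObjectType

from models import Department, Employee  # models module from the tutorial

class DepartmentNode(SQLAlchemyObjectType):
    class Meta:
        model = Department
        interfaces = (relay.Node,)

class EmployeeNode(SQLAlchemyObjectType):
    class Meta:
        model = Employee
        interfaces = (relay.Node,)
```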
| 0easy
|
Title: Add Beit segmentation model
Body: # Add Beit to SMP
BEiT-3 is a general-purpose multimodal foundation model developed by Microsoft that excels in various vision and vision-language tasks, including semantic segmentation. It employs a unified architecture with Multiway Transformers, enabling both deep fusion and modality-specific encoding. Pretrained using a masked "language" modeling approach on images ("Imglish"), texts, and image-text pairs, BEiT-3 effectively models images as another language. This design allows it to achieve state-of-the-art performance across a wide range of tasks, such as object detection, image classification, and semantic segmentation.
- Achieves top 1 results on ADE20K-val
Papers with Code:
https://paperswithcode.com/paper/image-as-a-foreign-language-beit-pretraining
Paper:
https://arxiv.org/abs/2208.10442
HF reference implementation:
https://huggingface.co/docs/transformers/model_doc/beit
https://github.com/huggingface/transformers/blob/v4.47.1/src/transformers/models/beit/modeling_beit.py
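As a quick sanity check, the HF reference implementation loads like this (a sketch; the checkpoint name is the ADE20K-finetuned one published on the Hub):
```python
from transformers import BeitForSemanticSegmentation

# ADE20K-finetuned BEiT checkpoint from the HF Hub
model = BeitForSemanticSegmentation.from_pretrained(
    'microsoft/beit-base-finetuned-ade-640-640'
)
```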
## Comments
As an example pls see the latest model additions:
- https://github.com/qubvel-org/segmentation_models.pytorch/pull/944
- https://github.com/qubvel-org/segmentation_models.pytorch/pull/926 | 0easy
|
Title: outdated install guide needs to be removed
Body: This can be found along the docs and it should point to the quick-start guide. We need to update those. https://docs.ploomber.io/en/latest/get-started/install.html | 0easy
|
Title: Autokey gentoo layman
Body: ## Classification: Crash
(Pick one of: Bug, Crash/Hang/Data Loss, Performance, UI/Usability, Feature (New), Enhancement)
## Reproducibility: Always
(Pick one of: Always, Sometimes, Rarely, Unable, I Didn't Try)
## Version
AutoKey version: desktop automation utility for Linux and X11
Used GUI (Gtk, Qt, or both):
If the problem is known to be present in more than one version, please list all of those.
Installed via: (PPA, pip3, …). gentoo layman
Linux Distribution:
## Summary
Summary of the problem: Overlay "y2kbadbug" does not exist
## Steps to Reproduce (if applicable)
- I do this : layman -a y2kbadbug
- I do that : layman -o 'https://github.com/autokey/autokey.git/master/raw/luziferius' -f -a y2kbadbug
## Expected Results
- This should happen. Fetching remote list... * Fetch Ok * Adding overlay...
## Actual Results
- Instead, this happens. :( Exception: Overlay "y2kbadbug" does not exist.
* CLI: Errors occurred processing action add
* Exception: Overlay "y2kbadbug" does not exist.
If helpful, submit screenshots of the issue to help debug.\
Debugging output, obtained by launching autokey via `autokey-gtk --verbose` (or `autokey-qt --verbose`, if you use the Qt interface) is also useful.\
Please upload the log somewhere accessible or put the output into a code block (enclose in triple backticks).
```
Example code block. Replace this with your log content.
```
## Notes
Describe any debugging steps you've taken yourself.
If you've found a workaround, please provide it here.
| 0easy
|
Title: Create Standard Deviation for Linear Regression
Body: I needed to have standard deviation for linear regression and there was not function for it. so i coded it and if you like you can edit it and add to next version.
```python
import pandas_ta as ta

def stdevlinreg(close, length: int):
    # Standard deviation of prices around the linear regression line.
    slope = ta.linreg(close=close, length=length, slope=True)
    tsf = ta.linreg(close=close, length=length, tsf=True)  # regression end value
    variance = 0
    for i in range(1, length):
        # squared residual of the bar `i` steps back vs. the regression line
        variance += (close.shift(i) - (tsf - i * slope)) ** 2
    return (variance / (length - 1)) ** 0.5
```
so we can now use it as:
`df['stdlinreg'] = stdevlinreg(df['close'], length=15)
`
and if add to 'linreg' it can be like:
`df['stdlinreg'] = ta.linreg(df['close'], length=15, stdev=True)` | 0easy
|
Title: Add ability to configure git branch
Body: | 0easy
|
Title: The Pro version's distribution feature doesn't work; is this project no longer maintained?
Body: 1. For bug reports, please describe the minimal reproduction steps.
2. For general questions: 99% of the answers are in the help docs, please read https://kmfaka.baklib-free.com/ carefully.
3. For new feature/concept submissions: please describe them in text or annotate screenshots.
| 0easy
|
Title: Don't run check-release on release
Body: ## Description
This is what our `check-release` workflow looks like:
```
name: Check Release
on:
push:
branches: ["*"]
pull_request:
branches: ["*"]
release:
types: [published]
schedule:
- cron: "0 0 * * *"
```
The problem is that running this on release always fails, probably because the branch was updated too recently for this workflow to include the release commit. See: https://github.com/jupyterlab/jupyter-ai/actions/workflows/check-release.yml?query=is%3Afailure
Running this on release is unnecessary anyways since a release always results in a commit being pushed, which triggers the `on: push` hook.
| 0easy
|
Title: ImageClassifier no longer has final_fit (AttributeError: 'ImageClassifier' object has no attribute 'final_fit')
Body:
### Bug Description
---------------------------------------------------------------------------
AttributeError: 'ImageClassifier' object has no attribute 'final_fit'
---------------------------------------------------------------------------
I see examples such as:
```python
clf = ImageClassifier(verbose=True)
clf.fit(x_train, y_train, time_limit=12 * 60 * 60)
clf.final_fit(x_train, y_train, x_test, y_test, retrain=True)
y = clf.evaluate(x_test, y_test)
print(y)
```
But when I run something similar I get the error:
```pytb
AttributeError                            Traceback (most recent call last)
<ipython-input-8-4e011cd6ccac> in <module>
----> 1 clf.final_fit(trainX, trainY, testX, testY, retrain=True)
      2 keras_model = clf.export_model()
AttributeError: 'ImageClassifier' object has no attribute 'final_fit'
```
As suggested here: https://github.com/keras-team/autokeras/issues/186
I have tried adding the activation function and retraining on the data... but to do so I still end up choosing many hyperparameters, such as the learning rate and optimizer...
I thought that was sorta the point of Auto ML?
What am I missing?
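For reference, the AutoKeras 1.x examples I've found use this flow instead (a sketch assuming AutoKeras >= 1.0, where `final_fit` no longer exists):
```python
import autokeras as ak

# x_train/y_train/x_test/y_test as in the snippet above
clf = ak.ImageClassifier(max_trials=10)
clf.fit(x_train, y_train, epochs=10)  # search and final training in one call
print(clf.evaluate(x_test, y_test))
model = clf.export_model()            # best pipeline as a Keras model
```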
| 0easy
|
Title: Check for shape consistency between model and guide
Body: In SVI latent random variables sampled from the guide completely mask those sampled in the model during inference. However, nothing prevents us from specifying different shapes for such a sample site in model and guide respectively. This makes it easy to introduce confusing bugs when code in the model expects a certain shape that is different than what guide provides. This can easily happen if the model needs an adaptation but the user forgets to reflects that in the guide.
The problem is not so much that this results in an error, but where that error occurs: in the code following the sampling of the RV in the model (when the actual mistake might have been sampling a wrong shape in the guide). Furthermore, the model function itself is sane, i.e., if the user searches for the error there, they will not find anything, leading to much confusion.
Long story short, I think it would be beneficial to check that shapes of sample sites referring to the same random variable in guide and model are consistent and at least issue a warning (if not outright raise an error) if this is not the case, as that would make it much easier to spot this type of error. Thoughts? | 0easy
|
Title: [Feature] Add examples for token in token out for LLM
Body: ### Checklist
- [x] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 2. Please use English, otherwise it will be closed.
### Motivation
Refer to this file:
`test/srt/test_skip_tokenizer_init.py`
write an example here:
`examples/runtime/engine`
And link it here:
<img width="1086" alt="Image" src="https://github.com/user-attachments/assets/25af5dc4-d9bf-46f3-822f-79d9993afa1b" />
### Related resources
_No response_ | 0easy
|
Title: Moment.resolve_parameter not resolving all the expressions
Body: **Description of the issue**
**How to reproduce the issue**
```python
import numpy as np
import sympy

import cirq

q0 = cirq.LineQubit(0)
c = cirq.Circuit(cirq.Rz(rads=sympy.Mul(sympy.Rational(2, 3), sympy.pi)).on(q0))
cirq.resolve_parameters(c, {'pi': np.pi})
```
results in:
```
1: ───Rz(2*pi/3)───
```
As per discussion with @NoureldinYosri, `Moment.resolve_parameter` has a bug that can be bypassed by:
```python
def resolve_parameters(circuit, resolver):
return cirq.Circuit.from_moments(*[[cirq.resolve_parameters(op, resolver) for op in m] for m in circuit])
```
```
resolve_parameters(c, {'pi': np.pi})
```
results in the expected:
```
1: ───Rz(0.667π)───
```
**Cirq version**
'1.5.0.dev'
| 0easy
|
Title: `CountFrequencyEncoder` could have a parameter to group categories with few observations
Body: Useful to handle rare categories in highly cardinal variables.
If a category is present in less than a certain fraction of observations, it should be replaced by a designated value. Check page 91 of Alice Zheng's book and the CountEncoder from category_encoders.
| 0easy
|
Title: Marketplace - Fix margin underneath "Top agents" increase it to 37px
Body:
### Describe your issue.
Fix this margin, increase it to 37px

| 0easy
|
Title: Considering any variable that has at least one upper case letter as env var
Body: ## Problem
If we define a var containing an upper case letter, like:
```yaml
vars:
apiKey: abc123
```
and we try to use it later as `${apiKey}` it will raise the error:
```
ERROR:scanapi.evaluators.string_evaluator:'apiKey' environment variable not set or badly c
```
## Possible Solution
One option is to check whether there is any lower-case letter in the variable name here:
https://github.com/camilamaia/scanapi/blob/master/scanapi/evaluators/string_evaluator.py#L48
```python
if any(letter.islower() for letter in variable_name):
continue
``` | 0easy