text: string (lengths 20 – 57.3k)
labels: class label (4 classes)
Title: Named-only arguments are not trace logged with user keywords Body: Hi, in robotframework==6.0.2, if arguments are given as shown in the screenshot, the values after the list are not displayed in traces, but the values are processed and everything works. This is the line ``` Combobox should have options ${EMPTY} Range 1 Range 2 convert_unit=ovca ``` ![thumb-Clipboard - May 9, 2024 3_35 PM](https://github.com/robotframework/robotframework/assets/169369145/1238f437-0b3d-426f-abe6-1ad327adeb9d)
0easy
Title: Is there an arm64 Docker image? Body: Is there an arm64 Docker image?
0easy
Title: [Feature request] Add apply_to_images to CenterCrop Body:
0easy
Title: [Feature Request] A way to reevaluate all cells Body: Add a command that will rerun all cells. This would be especially useful after loading a save file, to quickly restore the kernel state to the way it was before. - `:MoltenReevaluateAll` stolen issue from: https://github.com/dccsillag/magma-nvim/issues/29
0easy
Title: Replace word `additional` with word `more` inside docs Body: Vale suggests replacing `additional` with `more` to reduce complexity ![Screenshot 2024-09-19 at 17 24 05](https://github.com/user-attachments/assets/7600068a-4853-480e-842d-b44d022cf1a8) TODO: implement the change where suitable
0easy
Title: Option to include all tags and attrs in LinkExtractor with specified exclusions Body: ## Summary Add an option to the LinkExtractor class to consider all tags and attributes (e.g. if you pass `None` then consider all tags/attributes), and `deny_tags` and `deny_attrs` arguments or similar so you can additionally consider all tags and attributes with the exception of those explicitly passed. ## Motivation It allows adopting a strategy of extracting all links by default and then specifically excluding the tags and attributes you don't want considered. Currently, it seems the user has to figure out all the specific tags and attributes where their desired links appear and explicitly pass them to `tags` and `attrs` to have them considered. ## Describe alternatives you've considered For including all tags, you could use the Selector class instead of LinkExtractor and select all e.g. `href` attributes regardless of which tag they appear in, e.g. `response.xpath('//@href')`. Using Selector results in losing the various convenient arguments in LinkExtractor and requires manually processing the links with regexes etc. instead, and it requires manually converting relative links into absolute links when you want to use regexes that match the entire URL, whereas LinkExtractor already handles that automatically.
0easy
Title: openai.error.InvalidRequestError: An API version is required for the Azure API type. Body: ### Describe the bug Although the azure api version is set, the option is not linked to litellm. ### Reproduce 1. Set the config: `interpreter --config` ```yaml model: "azure/[my_deployment_name]" api_base: "https://my-azure-endpoint.openai.azure.com/" api_key: "my_azure_api_key" api_version: "2023-07-01-preview" ``` 2. Run the interpreter: `interpreter --debug_mode` 3. Execution is interrupted due to an error ```sh openai.error.InvalidRequestError: An API version is required for the Azure API type. ``` ### Expected behavior I expected the interpreter to run without any errors. ### Screenshots _No response_ ### Open Interpreter version 0.1.10 ### Python version 3.11.5 ### Operating System name and version Windows10 ### Additional context Adding the following at line 79 of `llm\setup_openai_coding_llm.py` resolves the issue: ```diff # Optional inputs if interpreter.api_base: params["api_base"] = interpreter.api_base if interpreter.api_key: params["api_key"] = interpreter.api_key + if interpreter.api_version: + params["api_version"] = interpreter.api_version if interpreter.max_tokens: params["max_tokens"] = interpreter.max_tokens if interpreter.temperature: params["temperature"] = interpreter.temperature ```
0easy
Title: Document SQL runtime parameters Body: We recently added a new feature to allow runtime SQL parameters. This lets users read products from upstream tasks, compute some parameters, and use these parameters to render the SQL script. However, this feature hasn't been documented. For context, see #466 For a code example, see: https://github.com/ploomber/ploomber/blob/84d836fa5a5f2368fe96db4a9181b5b2d6687d2b/tests/tasks/test_tasks_sql.py#L189
0easy
Title: Deprecate MSSQL enable_identity_insert by default, gate it behind a flag Body: Discussed in #11613 Add a boolean flag in the mssql dialect; the default will be set to `warn`, meaning that a warning will be raised when `_enable_identity_insert` is detected as set to true. Users can opt into the current behaviour or disable it
0easy
Title: Virtual table names are incorrectly escaped Body: **Describe the bug** When virtual tables are added/created, they are defined as: ``` table_name = escape(uri) self._cursor.execute( f'CREATE VIRTUAL TABLE "{table_name}" USING {adapter.__name__}({formatted_args})', ) ``` see: https://github.com/betodealmeida/shillelagh/blob/e379e8ac0b3dc2f45217be5084b01934c8a489d3/src/shillelagh/backends/apsw/db.py#L294 In particular, the table name is put in double quotes. The `escape` method applied beforehand replaces _single_ quotes rather than _double_ quotes: ``` def escape(value: str) -> str: """Escape single quotes.""" return value.replace("'", "''") ``` see: https://github.com/betodealmeida/shillelagh/blob/e379e8ac0b3dc2f45217be5084b01934c8a489d3/src/shillelagh/lib.py#L224 **Expected behavior** As the table name is quoted in double quotes, double quotes should be escaped in the table name. There are multiple ways to fix this: 1. Switch `escape` to replace double quotes. Likely not desired as the method seems to be used in other places. 2. Quote the table name in single quotes when creating. Is this breaking anything existing? 3. Ad-hoc escape the double quotes in the table name (likely the best option if the impact of 2 can't be assessed). Happy to send a pull request.
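A minimal sketch of option 3, reusing the names from the snippets above (the call-site wiring is an assumption, not the actual shillelagh code):

```python
def escape_identifier(value: str) -> str:
    """Escape double quotes for use inside a double-quoted SQL identifier."""
    return value.replace('"', '""')

# hypothetical call site, mirroring the CREATE VIRTUAL TABLE snippet above:
# table_name = escape_identifier(uri)
# self._cursor.execute(
#     f'CREATE VIRTUAL TABLE "{table_name}" USING {adapter.__name__}({formatted_args})',
# )
```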
0easy
Title: [BUG] render=False does not propagate url_params Body: **Describe the bug** With render=False, url_params values such as play=0 do not seem to get set. This prevents core workflows like specifying a custom layout for embedded web apps. **To Reproduce** See https://graphistry-community.slack.com/archives/C014ESCDDU0/p1677651935706109?thread_ts=1677651935.706109&cid=C014ESCDDU0
0easy
Title: Fix style issues mentioned in static analysis Body: ## Feature request ### Description of the feature <!-- A clear and concise description of what the new feature is. --> To increase the quality of the project we are using static analysis to find out style issues in the project. A detailed list of the issues can be found [here](https://deepsource.io/gh/scanapi/scanapi/issues/?category=style) 💡 The Issue requires multiple PRs so more than one person can contribute to the issue.
0easy
Title: Remove deprecated PythonItemExporter.binary Body: The binary mode of `PythonItemExporter` was deprecated in Scrapy 1.1.0 but we've missed it in previous deprecation removal rounds.
0easy
Title: refactor: move pydantic.BaseModel usages to dataclasses Body: We should refactor all `pydantic.BaseModel` occurrences like [this](https://github.com/airtai/faststream/blob/main/faststream/rabbit/shared/schemas.py) to regular `dataclasses.dataclass`. First of all, we have some exceptions with various dependency library pairs due to this code. Pydantic still has an unstable API and we should minimize its usage to the necessary only. Also, I have a plan to make `pydantic` optional in the future (FastDepends already supports it).
0easy
Title: Add project description for PyPI Body: Currently PyPI's description of Notebooker is threadbare. It can be updated by fiddling with pyproject.toml. See more info here - https://packaging.python.org/en/latest/tutorials/packaging-projects/#creating-pyproject-toml
0easy
Title: DeprecationWarning in webdriver.switch_to_window Body: I am using code such as the following, to switch the 'current' window: ``` browser.windows.current = browser.windows[1] ``` This works fine, but I see the following warning in the terminal: ``` $VIRTUAL_ENV/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py:530: DeprecationWarning: use driver.switch_to.window instead warnings.warn("use driver.switch_to.window instead", DeprecationWarning) ``` Using Selenium 2.52.0 (installed from pypi)
0easy
Title: The `reload_delay` setting is invalid when using `WatchFiles`? Body: ### Discussed in https://github.com/encode/uvicorn/discussions/1831 <div type='discussions-op-text'> <sup>Originally posted by **shoucandanghehe** January 4, 2023</sup> This is my demo code👇 <details><summary>code</summary> ``` import sys import logging import uvicorn from fastapi import FastAPI app = FastAPI() logging.basicConfig( level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', stream=sys.stdout, ) logger = logging.getLogger('uvicorn') LOG_CONFIG = { 'version': 1, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'formatter': 'generic', } }, 'formatters': { 'generic': { 'format': '%(asctime)s [%(process)d] [%(levelname)s] %(message)s', 'datefmt': '[%Y-%m-%d %H:%M:%S]', 'class': 'logging.Formatter', } }, 'loggers': { 'uvicorn': { 'handlers': ['console'], 'level': 'INFO', 'propagate': False, }, } } @app.get('/') def read_root(): return {'Hello': 'World'} if __name__ == '__main__': uvicorn.run( '__main__:app', host='0.0.0.0', port=8000, reload=True, reload_delay=120.0, log_config=LOG_CONFIG ) ``` </details> If I understand correctly, `uvicorn` should reload after 120s of file changes, but actually `uvicorn` seems to start reloading right away. (I can't tell if it's immediately or the default 0.25s) Here are some logs👇 <details><summary>logs</summary> ``` [2023-01-05 05:03:08] [24864] [INFO] Will watch for changes in these directories: ['D:\\code\\test'] [2023-01-05 05:03:08] [24864] [INFO] Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) [2023-01-05 05:03:08] [24864] [INFO] Started reloader process [24864] using WatchFiles [2023-01-05 05:03:09] [11648] [INFO] Started server process [11648] [2023-01-05 05:03:09] [11648] [INFO] Waiting for application startup. [2023-01-05 05:03:09] [11648] [INFO] Application startup complete. [2023-01-05 05:03:19] [24864] [WARNING] WatchFiles detected changes in 'reload_test.py'. Reloading... [2023-01-05 05:03:19] [34588] [INFO] Started server process [34588] [2023-01-05 05:03:19] [34588] [INFO] Waiting for application startup. [2023-01-05 05:03:19] [34588] [INFO] Application startup complete. ``` </details></div>
0easy
Title: Difference in Flux scheduler configuration max_shift Body: ### Describe the bug Could you please check if the value of 1.16 here... https://github.com/huggingface/diffusers/blob/658e24e86c4c52ee14244ab7a7113f5bf353186e/src/diffusers/pipelines/flux/pipeline_flux.py#L78 ...is intentional or maybe a typo? `max_shift` is 1.15 both in the model configuration... https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/scheduler/scheduler_config.json ...and in the original inference code by BFL: https://github.com/black-forest-labs/flux/blob/d06f82803f5727a91b0cf93fcbb09d920761fba1/src/flux/sampling.py#L214 ### Reproduction - ### Logs ```shell ``` ### System Info - ### Who can help? @yiyixuxu @DN6
0easy
Title: iTerm2 shell integration with xonsh Body: iTerm2 has a shell integration feature - https://iterm2.com/documentation-shell-integration.html It would be cool to have it in xonsh. ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
0easy
Title: Fix dag status width formatting issues Body: "ploomber status" outputs a hard-to-read table when the width of the terminal isn't large enough: ``` name Last run Outdated? Product Doc (short) Location ------ ------------ ------------- ------------- ------------- ------------- raw Has not been Source code MetaProduct({ /Users/Edu/de run 'data': File( v/projects-pl 'output/data. oomber/spec- csv'), 'nb': api- File('output/ python/raw.py raw.ipynb')}) clean Has not been Source code & MetaProduct({ /Users/Edu/de run Upstream 'data': File( v/projects-pl 'output/clean oomber/spec- .csv'), 'nb': api-python/cl File('output/ ean.py clean.ipynb') }) plot Has not been Source code & File('output/ /Users/Edu/de run Upstream plot.ipynb') v/projects-pl oomber/spec- api-python/pl ot.py ``` The product column is barely readable; we need to apply some formatting to it. The easiest way is to use an existing Python formatter (like black or yapf; maybe even detect if either is already installed to prevent adding another dependency to ploomber).
0easy
Title: Expose ASGI scope on Request object Body: **Is your feature request related to a problem? Please describe.** I'm trying to write an application framework that works across ASGI compatible server implementations and to do this I'd like to make the ASGI scope available to users. Sanic does not make the scope publicly available though. **Describe the solution you'd like** I'd like the `sanic.request.Request` object to provide a `scope` attribute which is the current `ASGIScope` **Additional context** Presently I'm working around this by accessing `request.app._asgi_app.transport.scope`
0easy
Title: Proper `HEAD` support for static routes Body: Implement proper `HEAD` support for static file serving. The response should follow the same logic as the default `GET` behaviour (like now), but not open any file streams (because we do not need any as the response to a `HEAD` request cannot have any body anyway). As a bonus, I think we need to block unsupported methods such as `POST` and `PUT`, and render the correct `Allow` header in response to `OPTIONS`.
0easy
Title: Breaks when trying to read an ODIM HDF5 file with missing tilts. Body: We had a problem at the BOM with some ODIM HDF5 files. Some scans can be missing some data types. For example, in one of the files, tilt 10 does not contain reflectivity. The file itself is valid, but the volume scan does not contain all the data it normally would. This can happen for lots of reasons, but the most common one is that the communications line to the radar was running slowly so some of the moments were dropped. It would be nice if the reading of ODIM HDF5 files was robust enough to still cope with files like this. In general, it is not guaranteed that every tilt will always contain the same set of data types. One solution could be to change l.323 of pyart/aux_io/odim_h5.py to: ```python try: sweep_data = _get_odim_h5_sweep_data(fid[dset][h_field_key]) except Exception: sweep_data = np.zeros((rays_in_sweep, nbins)) sweep_data = np.ma.masked_where(sweep_data == 0, sweep_data) ```
0easy
Title: Need to update setup.py description Body: In our setup.py file, we say we are using markdown: ``` - description="Convert trained traditional machine learning models into tensor computations", - long_description=long_description, - long_description_content_type="text/markdown", ``` Pypi was not loving this. ``` Uploading hummingbird_ml-0.4.11-py2.py3-none-any.whl 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 174.8/174.8 kB • 00:00 • 1.1 MB/s INFO Response from https://upload.pypi.org/legacy/: 400 The description failed to render in the default format of reStructuredText. See https://pypi.org/help/#description-content-type for more information. <html> <head> <title>400 The description failed to render in the default format of reStructuredText. See https://pypi.org/help/#description-content-type for more information.</title> </head> <body> <h1>400 The description failed to render in the default format of reStructuredText. See https://pypi.org/help/#description-content-type for more information.</h1> The server could not comply with the request since it is either malformed or otherwise incorrect.<br/><br/> The description failed to render in the default format of reStructuredText. See https://pypi.org/help/#description-content-type for more information. </body> </html> ERROR HTTPError: 400 Bad Request from https://upload.pypi.org/legacy/ The description failed to render in the default format of reStructuredText. See https://pypi.org/help/#description-content-type for more information. ``` For now just deleted these 3 lines, but we should change the format away from markdown. See [docs](https://pypi.org/help/#description-content-type)
0easy
Title: Add support to Python 3.9+ Body: It does not work. No errors, no warnings, no tests running. ``` pytest --picked Changed test files... 0. [] Changed test folders... 0. [] Test session starts (platform: darwin, Python 3.9.10, pytest 6.2.5, pytest-sugar 0.9.4) django: settings: dmp.settings (from ini) rootdir: /Users/hipertracker/dev/app, configfile: pytest.ini plugins: Faker-11.3.0, picked-0.4.6, xdist-2.5.0, forked-1.4.0, html-3.1.1, django-3.10.0, sugar-0.9.4, metadata-1.11.0, notifier-1.0.4, testmon-1.2.2 collecting ... Results (0.01s): ``` pip freeze > requirements.lock.txt ``` ... pytest==6.2.5 pytest-django==3.10.0 pytest-forked==1.4.0 pytest-html==3.1.1 pytest-metadata==1.11.0 pytest-notifier==1.0.4 pytest-picked==0.4.6 pytest-sugar==0.9.4 pytest-testmon==1.2.2 pytest-watch==4.2.0 pytest-xdist==2.5.0 python-dateutil==2.8.2 ``` ``` $ python -V Python 3.9.10 ```
0easy
Title: Airbyte provider 415 issue Body: ### Apache Airflow Provider(s) airbyte ### Versions of Apache Airflow Providers airbyte provider version == 5.0.0 ### Apache Airflow version 2.10.5 ### Operating System linux-arm64 ### Deployment Official Apache Airflow Helm Chart ### Deployment details We're using the official helm chart with the Airflow `2.10.5` image in an `EKS 1.31.0` environment (arm based) with the Airbyte operator version `5.0.0` installed. Airbyte OSS is also deployed on the same cluster **with auth disabled**. ### What happened When trying to access Airbyte using the Airbyte sync operator with the default connection set as follows: `airbyte://http://<svc name>.<namespace>.svc.cluster.local/api/public/` (by following [this manual](https://airflow.apache.org/docs/apache-airflow-providers-airbyte/stable/connections.html)) we get error 415 (unsupported media type), which basically means there is an issue with the headers being sent to Airbyte. The Airbyte connection requires a client key and secret, as stated in the manual I attached above, but Airflow is set to `no auth`. Could this be the issue? ### What you think should happen instead The Airbyte operator should send the right headers and sync the connector successfully. ### How to reproduce 1. Install the official helm chart with the Airflow `2.10.5` image in an `EKS 1.31.0` environment (arm based) with the Airbyte operator version `5.0.0` installed. 2. Deploy Airbyte OSS version `1.2.0` and set it to "no auth" 3. Create a basic connector of any kind in Airbyte and copy its ID. 4. Create a default connection for Airbyte in the Airflow environment by setting the appropriate env var. 5. Write a sync DAG for Airbyte. 6. Trigger the DAG and watch the results. ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
0easy
Title: Docs update & extension Body: - [x] last update was for version 0.9.1, take latest changes into account - [x] review and extend general instructions for command-line interface - [x] check if all required info on `htmldate` and `courlan` is provided (available options etc.) - [x] add example on how to handle cookies, inspiration: - https://github.com/urllib3/urllib3/issues/2140 - https://github.com/urllib3/urllib3/pull/2464 - https://github.com/urllib3/urllib3/pull/2474 - [x] installation: say what the optional modules in `trafilatura[all]` are about - [x] explain how to use the Internet Archive as a fallback - [x] mention potential bug #89 _(possibly more to come)_
0easy
Title: [Suggestion] Fix for the non-standard timezone format in GPX data exported by KEEP Body: # Problem description The timestamps in GPX data exported by KEEP are UTC times, but they carry a fractional milliseconds part, which causes Strava to misidentify the timezone when the GPX is imported, scrambling the times. ## Data sample ```xml <trkpt lat="32.1877403071313" lon="118.70919897992724"> <time>2022-10-25T23:18:10.175000Z</time> </trkpt> ``` # Fix ## Test Strip the fractional part after the decimal point: ```shell echo -e "<time>2022-09-19T12:08:37.731000Z</time>" | sed -n 's/\.[[:digit:]]\{3\}0\{3\}Z/Z/ p' ``` ## Batch fix script ```shell #!/bin/bash ls *.gpx | xargs sed -i 's/\.[[:digit:]]\{3\}0\{3\}Z/Z/g' ``` # Related problem GPX times exported by the Xingzhe (行者) app are also non-standard: Xingzhe writes times in UTC format, but the actual values are UTC+8 (Beijing time), which causes timezone errors when importing into Strava, so the time format needs converting. Strava accepts timezone offsets in the form `+08:00`. For example, a GPX exported by Xingzhe contains: ```xml <time>2022-10-17T07:14:49Z</time> ``` This is UTC format, but the time is actually Beijing time. The correct form declares the timezone explicitly: ```xml <time>2022-10-17T07:14:49+08:00</time> ``` ## Xingzhe GPX timezone fix ```shell #!/bin/bash ls *.gpx | xargs sed -i 's/Z</+08:00</g' ``` # References According to the GPX spec, if a time value in a GPX file does not specify a timezone, it should by default be interpreted as UTC. For example, the following four time strings all denote the same instant: 2015-05-19T20:31:37 : no timezone specified, interpreted as UTC (the default); 2015-05-19T20:31:37Z : explicitly UTC; 2015-05-20T04:31:37+0800 : timezone specified, i.e. Beijing time; 2015-05-20T04:31:37+08:00 : same as above. By this logic, a time in a GPX file denotes a single well-defined instant, independent of the local timezone. The GPX time format follows the [ISO-8601 standard](https://baike.baidu.com/item/ISO%208601/3910715): hours, minutes and seconds are each written with two digits; UTC times are suffixed with an uppercase Z, while other timezones append the actual offset. For example, 2:30:05 pm UTC is written 14:30:05Z or 143005Z; the corresponding Beijing time is 22:30:05+08:00 or 223005+0800, which can be shortened to 223005+08.
0easy
Title: typo in swirl default mode Body: ### Description: function [swirl](https://github.com/scikit-image/scikit-image/blob/main/skimage/transform/_warps.py#L520): `def swirl(...mode='reflect',...):` ``` """Perform a swirl transformation. ... mode : {'constant', 'edge', 'symmetric', 'reflect', 'wrap'}, optional Points outside the boundaries of the input are filled according to the given mode, with 'constant' used as the default. Modes match the behaviour of `numpy.pad`. ``` The declaration and the documentation do not match: `mode='reflect'` in the signature vs **'constant' used as the default** in the docstring ### Way to reproduce: _No response_ ### Version information: _No response_
0easy
Title: Simple bug on downloading data using example Backtesting with vectorbt Body: Just tried to go step by step with the example https://github.com/twopirllc/pandas-ta/blob/main/examples/VectorBT_Backtest_with_Pandas_TA.ipynb It didn't work; received an error: UnboundLocalError: local variable 'df' referenced before assignment
0easy
Title: Specify path to save .gv files Body: ## Current & Expected behavior I wish there was a way of specifying the path for saving the DAG (.gv). I had two sets of DAGs and the file gets overwritten as test_output/execute.gv. ## Library & System Information python version= 3.7.11, hamilton library version =1.1.1, linux= yes, Ubuntu 20 installed via WSL 1
0easy
Title: Remove `selenium` dependency Body: Hey, Driverless isn't using Selenium (anymore?), but it's still a dependency: > Traceback (most recent call last): > File "C:\Users\...\...\main.py", line 3, in <module> > from selenium_driverless import webdriver > File "C:\Users\...\...\.venv\Lib\site-packages\selenium_driverless\webdriver.py", line 43, in <module> > from selenium_driverless.scripts.switch_to import SwitchTo > File "C:\Users\...\...\.venv\Lib\site-packages\selenium_driverless\scripts\switch_to.py", line 29, in <module> > from selenium_driverless.types.target import TargetInfo, Target > File "C:\Users\...\...\.venv\Lib\site-packages\selenium_driverless\types\target.py", line 26, in <module> > from selenium_driverless.utils.utils import safe_wrap_fut > File "C:\Users\....\....\.venv\Lib\site-packages\selenium_driverless\utils\utils.py", line 10, in <module> > import selenium > ModuleNotFoundError: No module named 'selenium' Would be nice to be able to get rid of Selenium :)
0easy
Title: provide available configurations (what the user can write in the yaml file) in the docs Body: ### Description This issue is about providing, in the docs, the configurations that the user can use in the yaml file. This includes a description of what each key/value pair can do. At the moment I just wrote these in the Readme file, but it would be great to have an overview of them in the docs. I think this would be a great issue for newcomers
0easy
Title: Hide stdout when running `py.test` Body: Can py.test capture stdout when running tests by default? Currently you get to see a lot of output when running the tests instead of just Fail/Pass.
0easy
Title: checkout.ThankYouView always displays the first order if user is superuser Body: ### Issue Summary The code that allows superusers to force an order thank-you page for testing is ignoring real orders if the user is a superuser. ### Steps to Reproduce 1. Set user as superuser 2. Insert at least 2 orders You'll always end up with the first order, even if we are inserting a new one. ### Technical details * Python version: 3.8.6 * Django version: 3.2.7 * Oscar version: 3.1
0easy
Title: Hexapod object doesn't check whether the angles are within range as set in `settings.py` Body: Should it? The IK solver checks it, and the widget slider constrains the angles as well.
0easy
Title: support (or don't support) freeform parameters with `--extra-vars` Body: ### Summary if you use a string for your `--extra-vars` that doesn't start with `{` or `[` and doesn't contain `=`, this is considered by the `parse_kv` splitter as a "free-form parameter", and gets thrown into a variable `_raw_params`, which is meant to be used by some modules to support a more terse syntax ([source code](https://github.com/ansible/ansible/blob/4a710587ddd043ee729d85ab987c85193f9885c7/lib/ansible/parsing/splitter.py#L88-L91)). If you use multiple such `--extra-vars`, the global `_raw_params` variable stores only the last one. ```console $ ansible localhost -e foobar -m debug -a 'var=foobar' localhost | SUCCESS => { "foobar": "VARIABLE IS NOT DEFINED!" } ``` ```console $ ansible localhost -e foobar -m debug -a 'var=_raw_params' localhost | SUCCESS => { "_raw_params": "foobar" } ``` ```console $ ansible localhost -e foobar -e barfoo -m debug -a 'var=_raw_params' localhost | SUCCESS => { "_raw_params": "barfoo" } ``` I think it would be a nice feature if free-form `--extra-vars` would set the specified variable to `true`. Currently the only way to set a variable to `true` from the command line involves typing out a whole JSON dictionary in a shell argument, avoiding quotation issues. I think free-form `--extra-vars` should at least be an error if it won't be supported. Storing a CLI arg as a global `_raw_params` and clobbering CLI args over each other isn't very good behavior. ### Issue Type Feature Idea ### Component Name splitter ### Additional Information At my workplace we do something sort of like `-e run_special_tasks=true`, and then reference that variable using `when: run_special_tasks | default(false) | bool`. This has issues: * `to_bool` assumes literally anything is false except for a short list of truthy strings and `1`. This means that `2` is false, `"lorem ipsum"` is false, lists are false, dictionaries are false. [source code](https://github.com/ansible/ansible/blob/4a710587ddd043ee729d85ab987c85193f9885c7/lib/ansible/plugins/filter/core.py#L83) * we have to remember to never reference the value of this variable without type casting it first. We can also create a second variable that contains the boolean value of the first, but then we have to remember never to use the first. How can this be enforced? Custom extensions to `ansible-lint`? Instead, if this was supported, we could do `-e run_special_tasks`, and reference that variable as `when: run_special_tasks | default(false)`. I think this would be intuitive and elegant. ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
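For illustration, a minimal sketch of the proposed parsing rule (`parse_extra_var` is a hypothetical name, not the actual ansible splitter; YAML handling is elided):

```python
import json

def parse_extra_var(raw: str) -> dict:
    """Hypothetical: a free-form --extra-vars token becomes a boolean flag."""
    if raw.startswith(("{", "[")):
        return json.loads(raw)            # structured JSON payload
    if "=" in raw:
        key, _, value = raw.partition("=")
        return {key: value}
    return {raw: True}                    # proposed: bare token => variable set to true

assert parse_extra_var("run_special_tasks") == {"run_special_tasks": True}
assert parse_extra_var("foo=bar") == {"foo": "bar"}
```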
0easy
Title: Enhance BDD support (GIVEN/WHEN/THEN) for French language Body: Hi, According to the Gherkin syntax, there are many ways to translate the BDD keywords into French. Cf: https://cucumber.io/docs/gherkin/languages/ As an example, GIVEN => 16 possibilities ![image](https://github.com/robotframework/robotframework/assets/996732/b8ed00e2-166c-47d4-bba2-571086682b91) Currently, only one seems implemented in Robot Framework: 'Étant donné'. This has impacts: - the accented capital letter is not easy to type - this sometimes requires modifying the name of the step so that the general meaning of the sentence stays correct. To resolve this, would it be possible to implement all the possibilities? Or at least "Etant donné", "Etant donné que", "Étant donné que". Regards
0easy
Title: BUG: `.convert_dtypes(dtype_backend="pyarrow")` strips timezone from tz-aware pyarrow timestamp Series Body: ### Pandas version checks - [X] I have checked that this issue has not already been reported. - [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python >>> import pandas as pd >>> s = pd.Series(pd.to_datetime(range(5), utc=True, unit="h"), dtype="timestamp[ns, tz=UTC][pyarrow]") >>> s 0 1970-01-01 00:00:00+00:00 1 1970-01-01 01:00:00+00:00 2 1970-01-01 02:00:00+00:00 3 1970-01-01 03:00:00+00:00 4 1970-01-01 04:00:00+00:00 dtype: timestamp[ns, tz=UTC][pyarrow] >>> s.convert_dtypes(dtype_backend="pyarrow") 0 1970-01-01 00:00:00 1 1970-01-01 01:00:00 2 1970-01-01 02:00:00 3 1970-01-01 03:00:00 4 1970-01-01 04:00:00 dtype: timestamp[ns][pyarrow] ``` ### Issue Description Calling `.convert_dtypes(dtype_backend="pyarrow")` on a Series that is already a timezone aware pyarrow timestamp dtype strips the timezone information. Testing on older versions, this seems to be a regression introduced sometime between versions 2.0.3 and 2.1.0rc0 ### Expected Behavior No change should be made to the dtype ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 3f7bc81ae6839803ecc0da073fe83e9194759550 python : 3.12.2 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.19045 machine : AMD64 processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : English_United States.1252 pandas : 3.0.0.dev0+1654.g3f7bc81ae6 numpy : 2.1.3 dateutil : 2.9.0.post0 pip : 24.3.1 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None psycopg2 : None pymysql : None pyarrow : 18.0.0 pyreadstat : None pytest : None python-calamine : None pytz : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2024.2 qtpy : None pyqt5 : None </details>
0easy
Title: long-description caption for action Body: - Add a “long-description” field that explains recommendation in detail for each action - Support dropdown for ordering options (for each action) - For example, filters can be sorted to be based on similar filter attributes for organization - option to display statistics or scores side-by-side with vis
0easy
Title: Can the body text of Xiaohongshu notes be downloaded as a txt file into the same folder as the note? Body: The software is great, thank you! (already donated). Also, I'd like to know whether downloading the body text of Xiaohongshu notes is supported; I tried it and it doesn't seem to work yet.
0easy
Title: [DOC-FIX] Document supported formats for `artifact_uri` in `mlflow.artifacts.download_artifacts` Body: ### Willingness to contribute No. I cannot contribute a documentation fix at this time. ### URL(s) with the issue https://www.mlflow.org/docs/latest/python_api/mlflow.artifacts.html#mlflow.artifacts.download_artifacts ### Description of proposal (what needs changing) The documentation for <code>mlflow.artifacts.download_artifacts</code> mentions that the <code>artifact_uri</code> parameter can take on various forms, such as <code>runs:/500cf58bee2b40a4a82861cc31a617b1/my_model.pkl</code>, <code>models:/my_model/Production</code>, or <code>s3://my_bucket/my/file.txt</code>. However, it would be helpful to provide a more explicit documentation of the supported formats for <code>artifact_uri</code> using regex expressions. Additionally, if the following forms are also valid for artifact_uri, they should be explicitly mentioned in the documentation: ``` https://<host>:<port>/mlartifacts http://<host>/mlartifacts mlflow-artifacts://<host>/mlartifacts mlflow-artifacts://<host>:<port>/mlartifacts mlflow-artifacts:/mlartifacts models:/<name>/<version>/path/to/model models:/<name>@alias/path/to/model ``` ### Benefits Providing a clear and explicit documentation of the supported formats for artifact_uri would help users understand what types of URIs are accepted by the function, reducing errors and improving overall usability. ### Reference https://mlflow.org/docs/latest/tracking/server.html#using-the-tracking-server-for-proxied-artifact-access https://github.com/mlflow/mlflow/blob/facda38aebcb458119ac2d9b9636f3b35b4105d2/mlflow/store/artifact/models_artifact_repo.py#L81-L84
0easy
Title: Pandas TA MACD not in line with TA Lib Body: pandas_ta is not in line with talib, probably because of a wrong calculation of the EMA9 for the signal line (a consequence of the bug in the EMA calculation; see above). ![image](https://user-images.githubusercontent.com/22365509/126035795-6e224cc4-ca71-4e9c-810e-6134892486f1.png)
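A minimal cross-check sketch, assuming both TA-Lib and pandas-ta are installed (the `MACDs_12_26_9` column name follows pandas-ta's default naming and should be verified against your version):

```python
import numpy as np
import pandas as pd
import talib
import pandas_ta as ta

close = pd.Series(np.random.default_rng(0).normal(0, 1, 500).cumsum() + 100)

_, signal_talib, _ = talib.MACD(close.values, fastperiod=12, slowperiod=26, signalperiod=9)
macd_pta = ta.macd(close, fast=12, slow=26, signal=9)

# compare the signal lines after the warm-up period
delta = np.nanmax(np.abs(signal_talib[50:] - macd_pta["MACDs_12_26_9"].values[50:]))
print(f"max |signal(talib) - signal(pandas_ta)| = {delta:.6f}")
```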
0easy
Title: [Bug] Looks like the system prompt is incomplete Body: see: https://github.com/browser-use/browser-use/blob/f0b9522ef403e6d8d644ec0ceb8709457c5f1d16/browser_use/agent/prompts.py#L96
0easy
Title: [Feature] Containerize GraphQLer Body: For people that want to run GraphQLer in a container, we should give them a Dockerfile to do so
0easy
Title: Image generation or manipulation in /converse Body: Currently, we are able to upload a picture to /converse and ask questions about the picture using natural language, as though talking to a real person. That's an excellent feature and it works really well. I think the natural progression of this is the ability to instruct the bot, just by using natural language, to generate an image. A further extension of this is the ability for the user to use natural language to suggest changes to the generated (or uploaded) picture. Request: Nice to have
0easy
Title: Use `httpx` as client on WebSocket tests Body: The idea would be to eliminate `websockets` client from the test suite, and use what is described here: https://github.com/encode/httpx/issues/304#issuecomment-1325333860 The motivation is to simplify the test suite. <!-- POLAR PLEDGE BADGE START --> > [!IMPORTANT] > - We're using [Polar.sh](https://polar.sh/encode) so you can upvote and help fund this issue. > - We receive the funding once the issue is completed & confirmed by you. > - Thank you in advance for helping prioritize & fund our backlog. <a href="https://polar.sh/encode/uvicorn/issues/2012"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/encode/uvicorn/issues/2012/pledge.svg?darkmode=1"> <img alt="Fund with Polar" src="https://polar.sh/api/github/encode/uvicorn/issues/2012/pledge.svg"> </picture> </a> <!-- POLAR PLEDGE BADGE END -->
0easy
Title: Scheduler initializes disk queue even if `JOBDIR` is empty string Body: # Description By default, `JOBDIR` is the empty string per the documentation. This makes the `SpiderState` extension disabled by default and `RFPDupeFilter` not save fingerprints to the file `requests.seen`. However, the request scheduler still creates a disk queue. This leads to inconsistencies: pending requests will be saved to the disk, but their fingerprints and spider state will not. Actually, there is no default value for `JOBDIR` at all, and the documentation is wrong. This setting is never set. And by design of `BaseSettings`, unset settings are `None`. # Steps to Reproduce 1. Create a new project. 2. Create a simple spider which yields one request and then exits. 3. Open a terminal and cd to the root of the project. 4. Start it with `scrapy crawl spider_name -s JOBDIR=''`. 5. Wait till spider closure. Sample spider code: ```python from scrapy import Request, Spider class TestSpider(Spider): name = '_test' def start_requests(self, /): yield Request('data:,', self.check) def check(self, _, /): pass ``` **Expected behavior:** Root directory of the project does not contain directory `requests.queue`. **Actual behavior:** Root directory of the project contains directory `requests.queue`. **Reproduces how often:** always. # Versions ``` Scrapy : 2.11.0 lxml : 4.9.3.0 libxml2 : 2.10.3 cssselect : 1.2.0 parsel : 1.8.1 w3lib : 2.1.2 Twisted : 22.10.0 Python : 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)] pyOpenSSL : 23.2.0 (OpenSSL 3.1.3 19 Sep 2023) cryptography : 41.0.4 Platform : Windows-10-10.0.19044-SP0 ``` # The fix The reason for this behavior is very simple. `SpiderState` and `RFPDupeFilter` perform truth checks on `JOBDIR`, but the request scheduler checks whether it is not `None`. I suggest the following steps to eliminate the issue: 1. In the documentation, change the `JOBDIR` default value from `''` to `None` because it is the actual default value. 2. In the scheduler, also perform a truth check. [Here](https://github.com/scrapy/scrapy/blob/9b06f6b316b2759b35b1ec39ef93ccb563458c9c/scrapy/core/scheduler.py#L355) change `if jobdir is not None:` to `if jobdir:`. The next suggestions are optional, but can help eliminate any future issues: 1. Change the function [`jobdir`](https://github.com/scrapy/scrapy/blob/9b06f6b316b2759b35b1ec39ef93ccb563458c9c/scrapy/utils/job.py#L7) to return `None` if `JOBDIR` is the empty string. - This change allows checking job directory presence both by truth and by `is not None`. 2. Explicitly set `JOBDIR` in the default settings to `None`. - This change enables displaying a custom `JOBDIR` in the `Overridden settings` log.
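The core of suggestion 2 is a truthiness check; a tiny sketch (the helper name is hypothetical, the semantics are the proposed ones):

```python
def should_use_disk_queue(jobdir) -> bool:
    """'' and None both mean no job directory, matching SpiderState/RFPDupeFilter."""
    return bool(jobdir)

assert should_use_disk_queue("crawls/run-1") is True
assert should_use_disk_queue("") is False    # the current `is not None` check passes here
assert should_use_disk_queue(None) is False
```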
0easy
Title: Remove warning from ToolInvoker Body: The current implementation of the ToolInvoker component contains a warning ```python msg = "The `ToolInvoker` component is experimental and its API may change in the future." warnings.warn(msg) ``` We should remove this warning with the next release, 2.10.
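The change itself is a two-line deletion; sketched for clarity (placement per the snippet above):

```python
# before — currently executed when the component is used:
import warnings

msg = "The `ToolInvoker` component is experimental and its API may change in the future."
warnings.warn(msg)

# after — delete both lines above, and drop the `warnings` import
# if nothing else in the module still uses it.
```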
0easy
Title: Don't show worker nodes when pytest is run quiet Body: I have a manycore machine (32 threads), and when I use pytest-xdist, the output of the worker node status is a bit bothersome due to showing 32 node status messages multiple times. This is the output of running pytest-xdist with a subset of the mypy test suite (a full screen CMD window on a 4k monitor): ![image](https://user-images.githubusercontent.com/9504279/48320945-924a4080-e5d3-11e8-9deb-b5588784a7f2.png) The setup messages take up about 90% of the window. It would be nice if this could be hidden when pytest is run in quiet mode, as it makes the time to print out the test setup info non-trivial (in addition it fills up the screen making scroll back more annoying).
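A hedged sketch of one way to gate those lines: `-q` lowers pytest's verbosity below zero, so a simple verbosity check suffices (the exact hook wiring into xdist's reporting code is an assumption):

```python
def should_show_worker_status(config) -> bool:
    """Hide the per-worker setup lines when pytest runs quiet (-q / -qq)."""
    return config.getoption("verbose") >= 0

# hypothetical use at the point where xdist writes the node status:
# if should_show_worker_status(config):
#     tr.write_line(f"[gw{i}] node setup ...")
```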
0easy
Title: More adapters Body: Implement adapters: - [x] WeatherAPI - [x] CSV files - [x] Google Sheets - [x] Socrata - [x] Pandas - [x] Datasette - [x] Github repositories - [x] [XML](https://github.com/betodealmeida/shillelagh/issues/389) - [ ] Google calendar - [ ] OpenAPI (use https://pypi.org/project/openapi3/?) - [ ] Clubhouse / Jira / Trello / etc. - [ ] Superset? - [ ] Slack? - [ ] OPeNDAP?
0easy
Title: Passing a random number generator to simulator/sampler methods vs. maintaining an internal random state Body: All Cirq simulators maintain an internal `np.random.RandomState` https://github.com/quantumlib/Cirq/blob/e1b03ef63af4270d6a185df3db6e43c8232c6a71/cirq-core/cirq/sim/sparse_simulator.py#L130 This is fine when running in a single thread; however, we are starting to have more places where we use multiprocessing/multithreading (e.g. using `multiprocessing.Pool`, `concurrent.futures.ThreadPoolExecutor`, or other multiprocessing/multithreading libraries), and in these cases this internal random state negatively affects the simulations in two ways - the internal random state becomes a shared state that causes the different threads/processes to block on each other. - the results of simulations become correlated. --- Suggested solution: Start to prefer passing `prng`s to methods/functions over maintaining an internal state. This `prng` should be an `np.random.Generator` instead of an `np.random.RandomState` so that we get a `spawn` method to use when starting threads/processes. related: https://github.com/quantumlib/Cirq/issues/6531
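A small sketch of the suggested pattern using NumPy's `SeedSequence.spawn`, which yields statistically independent child generators per worker (the `simulate` signature is illustrative, not Cirq's actual API):

```python
import numpy as np

def simulate(circuit, prng: np.random.Generator) -> float:
    # illustrative stand-in for a simulator method that accepts a Generator
    return prng.random()

root = np.random.SeedSequence(1234)
rngs = [np.random.default_rng(s) for s in root.spawn(8)]  # one per process/thread

results = [simulate(None, rng) for rng in rngs]  # independent, uncorrelated streams
```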
0easy
Title: Looking for a dev(s) to help data collection process - OASST Body: Hello! Open Assistant is trying to build a larger dataset for fine-tuning models. Specifically, we are looking for a couple of developers who can assist in building out new UI on the open-assistant website for new tasks. Some of the tasks we see being important and needing UI are as follows: - implement a function from a doc-string or description - document or explain code - refactor code - write a unit-test - convert/translate from programming language A to language B - Mathematics - etc. If you're interested, please let me know! (looking for 2 people!) Current dataset - https://huggingface.co/datasets/OpenAssistant/oasst1 Open Assistant website - https://open-assistant.io/
0easy
Title: Request to include Named Entity Recognition and Relation Extraction model fine-tuning examples and guidance Body: ### Feature request As an NLP enthusiast working on Named Entity Recognition (NER) and Relation Extraction (RE) tasks, I would like to request the inclusion of NER- and RE-related examples, best practices, and **guidance for fine-tuning** models specifically for these tasks on your GitHub page. Documentation on fine-tuning models for NER and RE would help guide researchers in developing state-of-the-art models without having to reinvent the wheel. Currently, fine-tuning examples for NER and RE tasks are not available there. These techniques are relevant for identifying technical terms, chemical names, etc. from texts and recognizing the relationships between the identified domain-specific technical terms. ### Motivation NER models are critical in various technologies for identifying particular domain-related words in an input text, especially when dealing with business terms, chemical names, particle names, and other domain-specific technical terms. The second need is identifying the relation between identified entities. E.g.: sentence: Apple acquired Beats for $3 billion in 2014; Entities: Apple, Beats, $3 billion, 2014; Relation: acquired ### Pre-requisite First, there must be an appropriate dataset for fine-tuning, including 1) sentences, 2) the entities corresponding to each sentence in the next column, and 3) finally the relation between the identified entities. The dataset must include data from diverse domains like finance, medical, chemical, news, politics, etc.; only then can a general model identify entities in general. If the purpose is domain-specific, then a domain-specific dataset is a must.
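As a starting point for such an example, a heavily abbreviated token-classification fine-tuning sketch with `transformers` (the model name, label count, and the prepared `train_ds`/`eval_ds` datasets are all assumptions; label-to-wordpiece alignment is elided):

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-cased"  # assumption: any encoder checkpoint works here
num_labels = 9                  # assumption: a CoNLL-2003-style BIO tag set

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=num_labels)

args = TrainingArguments(output_dir="ner-out", num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds,  # assumption: tokenized, labels aligned to word pieces
                  eval_dataset=eval_ds)
trainer.train()
```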
0easy
Title: [BUG] pygwalker bug report Body: **Describe the bug** In Config > Format you can set a formatting string for numbers, but it is used only for charts, not for tables. **To Reproduce** Simply set a formatting string, for example '.2f', then set the mark type to table: nothing seems formatted. **Expected behavior** Numbers formatted in the table the same as in charts **Versions** - pygwalker version: 0.3.11 - python version: 3.11 - browser
0easy
Title: Marketplace - agent page - change font of "Build AI agents and share your vision" Body: ### Describe your issue. <img width="1378" alt="Screenshot 2024-12-16 at 21 53 20" src="https://github.com/user-attachments/assets/311e64fb-ae99-4b02-89e5-1f16e731df43" /> Change typography to the following specs: font-family: Poppins; font-size: 48px; font-weight: 600; line-height: 54px; letter-spacing: -0.012em; text-align: center; text-underline-position: from-font; text-decoration-skip-ink: none;
0easy
Title: Miner url link Body: Would it be possible to have the URL link on the IP address of the miner go right to the Antminer status page? I'm always clicking through to the status page. **currently links** to http://1.1.1.1 I'm not sure if this works on each supported miner version, but would this work: **suggested link** http://1.1.1.1/cgi-bin/minerStatus.cgi
0easy
Title: reimplement multi-threaded downloads with async Body: there are a few places where we are using multithreading to download files, but looks like using asyncio is a [better option](https://www.velotio.com/engineering-blog/async-features-in-python). we'd first have to do a quick implementation and see how much benefit it brings to consider migrating
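A quick sketch of what the asyncio version could look like with `aiohttp` (URLs and destinations are placeholders; retries and error handling are elided):

```python
import asyncio
import aiohttp

async def download(session: aiohttp.ClientSession, url: str, dest: str) -> None:
    async with session.get(url) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            f.write(await resp.read())

async def download_all(urls_and_dests) -> None:
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(download(session, u, d) for u, d in urls_and_dests))

# asyncio.run(download_all([("https://example.com/a.bin", "a.bin")]))
```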
0easy
Title: automate detection of classifier and regressor for tree based transformers Body: At the moment, in the categorical tree encoder and the tree discretiser, we have an argument is_regression that the user needs to fill in so that we can detect whether they are aiming to perform classification or regression. Sklearn has an automated process for this (see the decision tree source code). Can we bring this functionality to feature-engine? I think we can :p (see the sketch below)
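One way to bring that automation over, sketched with scikit-learn's `type_of_target` helper (whether feature-engine wants this exact heuristic is an open question):

```python
from sklearn.utils.multiclass import type_of_target

def infer_is_regression(y) -> bool:
    """Infer the task from the target instead of asking for is_regression."""
    return type_of_target(y) in ("continuous", "continuous-multioutput")

assert infer_is_regression([0, 1, 1, 0]) is False    # classification
assert infer_is_regression([0.1, 3.7, 2.2]) is True  # regression
```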
0easy
Title: Add documentation to Camera model about how input angles might be converted Body: ## 📚 Documentation Euler angles in 3D don't uniquely specify a direction. (For example, if I have a vector pointing from the middle of the earth to a point in the equator, if I want to rotate it to point to the north pole, I can rotate it 90º along the longitude axis, or I can rotate it any amount I want along the equator, and then rotate it 90º along *that* longitude.) It turns out that in napari, when we set `viewer.camera.angle`, this gets passed to VisPy, which can normalise the angle after passing through a quaternion, as demonstrated in [this image.sc thread](https://forum.image.sc/t/how-does-napari-handle-3d-camera-angle-setting-unexpected-behavior/97646): ```python import napari import numpy as np viewer = napari.Viewer(ndisplay=3) viewer.add_image(np.random.random((5, 5, 5))) viewer.camera.angles = (0, 176, -90) print(viewer.camera.angles) napari.run() print(viewer.camera.angles) ``` prints: ```python (0.0, 176.0, -90.0) (180.0, 4.000000000000012, 90.0) ``` Interestingly, you need both the image layer and the napari.run to see the effect. There's nothing intrinsically wrong with this, but it can be surprising. Therefore we should probably document it in the Camera docstring, if not in other narrative docs.
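The ambiguity is easy to demonstrate independently of napari (the axis order/convention here is an assumption; the point is only that distinct Euler triples can encode the same rotation, which is what surfaces after the quaternion round-trip):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Tait-Bryan identity: (a, b, c) and (a + 180, 180 - b, c + 180) are the same rotation
r1 = Rotation.from_euler("xyz", [0, 176, -90], degrees=True)
r2 = Rotation.from_euler("xyz", [180, 4, 90], degrees=True)
print(np.allclose(r1.as_matrix(), r2.as_matrix()))  # True
```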
0easy
Title: [ENH] detection notebook - move data loading for examples to functions in the `datasets` module Body: In the detection notebook 7 (added in https://github.com/sktime/sktime/pull/7284), @alex-jg3 has used a couple common toy data sets to demonstrate the detectors. The data is loaded manually, but it would be nice to have a `load_datasetname` function for each, and proper documentation. As a recipe: 1. look at every block in the notebook 7, where a csv is loaded 2. try to turn this into a function `load_dataname(param1, param2)`. Possibly with a `return_mtype` argument that allows the user to specify the format, using `convert` from `datatypes`, but this is optional. 3. obtain the description of the dataset from this readme https://github.com/sktime/sktime-tutorial-ODSC-Europe-2024/blob/main/data/README.md and include it in the docstring 4. put the function in the appropriate location in the `datasets` module
0easy
Title: [BUG]: `rio.CodeBlock` display_controls=False doesn't align properly Body: ### Describe the bug The `rio.CodeBlock` component's `display_controls = False` option doesn't align properly when rendered. ### Expected Behavior The rio.CodeBlock component should align properly regardless of the `display_controls` setting ### Steps to Reproduce 1. Create a rio.CodeBlock component. 2. Set the display_controls attribute to False. 3. Render the component and observe the alignment issues. ### Screenshots/Videos <img width="958" alt="image" src="https://github.com/rio-labs/rio/assets/41641225/f7f952c7-d4c8-456b-a887-6dde0f46186a"> ### Operating System Windows, MacOS, Linux ### What browsers are you seeing the problem on? Chrome, Safari, Edge ### Browser version _No response_ ### What device are you using? Desktop ### Additional context _No response_
0easy
Title: Japanese translation for #405 document updates Body: Once https://github.com/slackapi/bolt-python/pull/405 is merged, we can start working on the Japanese version. ### The page URLs * https://slack.dev/bolt-python/ja-jp/tutorial/getting-started ## Requirements Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
0easy
Title: Add unit tests for the `__repr__` method of the RequestNode class Body: ## Unit Test ### Description Add unit tests for the `__repr__` method of the **RequestNode** class: https://github.com/scanapi/scanapi/blob/main/scanapi/tree/request_node.py#L57 [ScanAPI Writing Tests Documentation](https://github.com/scanapi/scanapi/wiki/Writing-Tests)
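A skeleton for such a test; the constructor arguments and the exact repr format below are assumptions to be checked against the linked source before use:

```python
from scanapi.tree import EndpointNode, RequestNode

def test_repr():
    # assumption: RequestNode(spec, endpoint) and a repr like "<RequestNode {url}>"
    endpoint = EndpointNode({"name": "api", "requests": []})
    request = RequestNode({"name": "get-users", "path": "/users"}, endpoint=endpoint)
    assert repr(request) == f"<RequestNode {request.full_url_path}>"
```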
0easy
Title: helping users when setting their key Body: I got a report from a cloud user that they had a "malformed API key error", somewhere in our docs, we have the following instructions: ```sh ploomber cloud set-key {your-key} ``` And the user was setting it with the `{}` characters. We should detect if the passed key begins with `{` and ends with `}`. If so, ignore those characters (or maybe show an error telling them to remove them?)
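A sketch of the check (the function name is hypothetical; whether to strip silently or error out is the open design question):

```python
def validate_key(key: str) -> str:
    key = key.strip()
    if key.startswith("{") and key.endswith("}"):
        raise ValueError(
            "It looks like the key was pasted together with the surrounding "
            "'{' and '}' from the docs placeholder. Remove them and try again."
        )
    return key
```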
0easy
Title: Chart title property should be dynamic Body: ### What went wrong? 🤔 Updating dynamically the title property of a chart visual element does not work. Here is an example where I try to change the title of a chart using a button. Pressing the button should change the title but does nothing. ```python import pandas as pd import taipy.gui.builder as tgb from taipy.gui import Gui data = pd.DataFrame({"Product": ["Shovel", "Rake", "Hoe"], "Price": [10, 5, 7]}) title = "Undefined" def update_title(state): state.title = "Price of Gardening Tools" with tgb.Page() as page: tgb.button("Update Title", on_action=update_title) tgb.chart(data="{data}", title="{title}") Gui(page).run() ``` Using layout is a workaround: ```python import pandas as pd import taipy.gui.builder as tgb from taipy.gui import Gui data = pd.DataFrame({"Product": ["Shovel", "Rake", "Hoe"], "Price": [10, 5, 7]}) layout = {"title": "Undefined"} def update_title(state): state.layout = {"title": "Price of Gardening Tools"} with tgb.Page() as page: tgb.button("Update Title", on_action=update_title) tgb.chart(data="{data}", layout="{layout}") Gui(page).run() ``` ![image](https://github.com/user-attachments/assets/4ea21037-5e6e-4a24-a6b1-33a390444a27) ### Runtime Environment Windows 11 ### Browsers Chrome ### OS Windows ### Version of Taipy 4.0.2 ### Acceptance Criteria - [ ] A unit test reproducing the bug is added. - [ ] Any new code is covered by a unit tested. - [ ] Check code coverage is at least 90%. - [ ] The bug reporter validated the fix. - [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated. ### Code of Conduct - [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+). - [ ] I am willing to work on this issue (optional)
0easy
Title: tox 4: passenv can no longer use factors Body: My tox.ini files are affected by a change to the parsing of `passenv` in tox 4: I can no longer use factors to conditionalize which environment variables are passed: ``` passenv = sagepython, sagewheels: SAGE_VENV sagewheels: SAGE_SPKG_WHEELS ``` (from https://github.com/sagemath/sage/blob/develop/pkgs/sagemath-categories/tox.ini#L29) _Originally posted by @mkoeppe in https://github.com/tox-dev/tox/discussions/2699#discussioncomment-4403536_
0easy
Title: device-side assert triggered at /opt/conda/conda-bld/pytorch_1535493744281/work/aten/src/THC/generic/THCStorage.cpp:36 Body: I use the supplied training code without changes, but it shows this error. I have found blog posts about this error and learned it is mainly because the labels used to compute the loss may have negative values. But I downloaded the VID dataset from the official website and used the supplied code to convert it to JSON. Why is there still this error?
0easy
Title: Add pipenv installation guide to docs, specifying to allow prereleases Body: # Instructions for how to install pyjanitor via pipenv Some folks might use pipenv for environment management. The recent update requires a prerelease dependency (black, as mentioned in [760](https://github.com/ericmjl/pyjanitor/issues/760)). I'd like to update the installation page of the documentation so that it A) includes pipenv installation and B) lets users know that they need to allow prereleases. Something to the effect of: >Installation of `pyjanitor` through pipenv requires you to allow prereleases: >```bash >pipenv install --pre pyjanitor >``` # Relevant Context - [Installation page](https://pyjanitor.readthedocs.io/installation.html) - [README (I assume this is the source of the installation page?)](https://github.com/ericmjl/pyjanitor/blob/dev/README.rst)
0easy
Title: Check all links in docs and make them more accessible Body: Guidelines for links are as follows: * We should comply with https://vizro.readthedocs.io/en/stable/pages/development/documentation-style-guide/#language * Links should always work. ### Task (1) Fix all links Before we have perfection, we need to run through all the docs pages and fix issues like this one https://github.com/mckinsey/vizro/pull/422#discussion_r1566440210 where they arise. ### Task (2) Set up style checking for future content One way to enforce this ongoing would be to use Vale. I've asked how to do this: https://github.com/errata-ai/vale/discussions/807 ### Task (3) Add external link checking (if not already running) to CI. We already have this for internal links in that we build with `--strict` but need to have something check links to Dash etc as Kedro does. This ticket doesn't need technical writing skills nor does it need Vale knowledge. Good first issue for a new contributor!
0easy
Title: Fix image URLs on PyPI Body: Image URLs are broken here: https://pypi.org/project/notebooker/
0easy
Title: CMSPageRenderer shall add data to context only once Body: Currently the method `shop.rest.renderers.CMSPageRenderer.render` merges the serialized data twice: once into the `template_context` and once as the extra attribute `template_context['data']`. This is confusing and may cause errors by polluting the global context namespace. Instead, the serialized data shall only be added to `template_context['data']`. Additionally, the View class using that renderer shall be allowed to override the attribute name `data`. I would propose to add `context_data_name = 'my_data'` to the View class (in analogy to `context_object_name` in Django View classes).
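A sketch of the proposed override hook (the attribute name is per the proposal above; the render internals are simplified):

```python
from rest_framework.generics import GenericAPIView

# inside CMSPageRenderer.render (simplified), instead of also merging the
# serialized data into the global context:
#     data_name = getattr(view, "context_data_name", "data")
#     template_context[data_name] = serialized_data

class CheckoutSummaryView(GenericAPIView):
    context_data_name = "my_data"  # opt-in override, analogous to context_object_name
```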
0easy
Title: Remove `if __name__ == "__main__"` block from `_layout.py` Body: This is legacy code we don't want any more.
0easy
Title: [New feature] Add apply_to_images to ChannelDropout Body:
0easy
Title: Change Request Reviews metric API Body: The canonical definition is here: https://chaoss.community/?p=4712
0easy
Title: Fix PytestDeprecationWarning: TerminalReporter.writer Body: To reproduce it, run `pytest` in this repo:
```
tests/test_pytest_picked.py: 13 tests with warnings
  /home/ana/workspace/pytest-picked/venv/lib/python3.8/site-packages/pytest-5.4.1-py3.8.egg/_pytest/terminal.py:287: PytestDeprecationWarning: TerminalReporter.writer attribute is deprecated, use TerminalReporter._tw instead at your own risk. See https://docs.pytest.org/en/latest/deprecations.html#terminalreporter-writer for more information.
    warnings.warn(

-- Docs: https://docs.pytest.org/en/latest/warnings.html
```
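A hedged sketch of the kind of change this usually needs in the plugin (the exact hook used by pytest-picked may differ; `write_line` is part of the public `TerminalReporter` API):
```python
def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Before (deprecated): terminalreporter.writer.line("picked: 2 tests")
    # After: write through the public API instead of the deprecated .writer attribute
    terminalreporter.write_line("picked: 2 tests")
```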
0easy
Title: Documentation Body:
0easy
Title: Switching to AWS account with insufficient permissions with AWS enabled crashes `sky launch` Body: <!-- Describe the bug report / feature request here --> Repro
- Have AWS, Azure, .. enabled
- Now `export AWS_SECRET_ACCESS_KEY=...` and `export AWS_ACCESS_KEY_ID=`, an account with insufficient permissions, e.g., no `ec2:DescribeRegions`
- Run:
```
» sky launch
RuntimeError: Failed to retrieve AWS regions. Please ensure that the `ec2:DescribeRegions` action is enabled for your AWS account in IAM. Ref: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html
```
- Expected: shows other clouds in optimizer table; Actual: uncaught RuntimeError.
<!-- If relevant, fill in versioning info to help us troubleshoot --> _Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: 71a95f4bf
0easy
Title: emojis are not tokenized very well Body: Maybe add something like this as a pre- or post-processing step? It might make sense to download the emoji list and store it as part of the build, so people do not need to load the emoji module, ...
```
import re

from emoji import unicode_codes

# Build one alternation of all emoji, longest first, so that multi-codepoint
# sequences (e.g. family or skin-tone emoji) match before their parts.
EMOJI_UNICODE = unicode_codes.EMOJI_UNICODE['en']
emojis = sorted(EMOJI_UNICODE.values(), key=len, reverse=True)
emoji_regexp = f"({'|'.join(re.escape(u) for u in emojis)})"
EMOJI_REGEX = re.compile(emoji_regexp, flags=re.UNICODE)

def split_emoji(text):
    # Pad every emoji with spaces, then collapse the double spaces this creates.
    text = EMOJI_REGEX.sub(r' \1 ', text)
    text = re.sub(r' {2,}', ' ', text)
    return text.strip()

test = "🤔 🙈 me así, se😌 ds 💕👭👙 hello 👩🏾‍🎓 emoji hello 👨‍👩‍👦‍👦 how are 😊 you today🙅🏽🙅🏽"
# test = "They are going to start a direct flight soon😠"
print(test)
print(split_emoji(test))
```
0easy
Title: Add Google Gemini support Body: Hi, is there a way to make this work with Google Gemini?
0easy
Title: Error: 'NoneType' object has no attribute 'group' Body: Hi, sorry if this is a simple question, but when running it, it sometimes gives me `Error: 'NoneType' object has no attribute 'group'` and sometimes `[-] Error: 'videos'`. Does anyone know how to fix this? Thanks!
![grafik](https://github.com/FujiwaraChoki/MoneyPrinter/assets/157311680/c1c3f677-fb36-421b-a2bb-ba849b8bcd6c)
![grafik](https://github.com/FujiwaraChoki/MoneyPrinter/assets/157311680/7a62bffa-affe-497f-a176-bcc4cc81134d)
![grafik](https://github.com/FujiwaraChoki/MoneyPrinter/assets/157311680/586b80af-1b0f-437d-a489-0e7bb5e36be9)
[+] Cleaned ../temp/ directory
[+] Cleaned ../subtitles/ directory
[Video to be generated] Subject: testing video
**Testing Video Script** "In this video, we will conduct a series of tests to demonstrate the performance and functionality of our new product. First, we will examine the durability of the materials used by subjecting the product to various impact tests. Next, we will evaluate the product's reliability by conducting stress tests under different environmental conditions. Additionally, we will showcase the product's user interface and highlight its intuitive design through a series of demonstrations. Finally, we will conclude by summarizing our findings and highlighting the key benefits of the product. Stay tuned as we put our product to the test and uncover its capabilities."
```json
[
  "product testing video",
  "performance testing footage",
  "reliability test stock video",
  "stress test demonstration",
  "product durability showcase"
]
```
[*] GPT returned an unformatted response. Attempting to clean...
[-] Error: 'NoneType' object has no attribute 'group'
127.0.0.1 - - [02/Feb/2024 21:39:43] "POST /api/generate HTTP/1.1" 200 -
Output of `tree` (German locale; "Listing of folder paths for volume Data, volume serial number: E82D-14AA"):
```
D:.
├───Backend
│   └───__pycache__
├───fonts
├───Frontend
├───subtitles
└───temp
```
In .env: IMAGEMAGICK_BINARY="C:\\Program Files\\ImageMagick-7.1.1-Q16-HDRI\\magick.exe" # Download from https://imagemagick.org/script/download.php (I did double backslash but GitHub formats it as a single one for some reason)
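From the error text alone, a likely pattern in the code is calling `.group(...)` on a `re.search` result without checking for `None`. A generic guard sketch (the actual regex and field names in MoneyPrinter are assumptions here):
```python
import re

def extract_first_match(pattern: str, text: str) -> str:
    match = re.search(pattern, text)
    if match is None:  # re.search returns None when nothing matches
        raise ValueError(f"no match for {pattern!r} in response: {text[:200]!r}")
    return match.group(1)
```
Wrapping the GPT-response parsing like this would at least replace the opaque `'NoneType'` error with a message showing what failed to parse.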
0easy
Title: Error during notebook initialization. argument of type 'WindowsPath' is not iterable. Body: From an email from a user:
> When trying to "watch" a notebook, the following error occurs: Error during notebook initialization. argument of type 'WindowsPath' is not iterable.
> Somewhere in the settings.py file (mercury), a string was expected, but a path was provided.
> I cast all x / "y" to str(x / "y"), e.g. STATIC_ROOT = str(BASE_DIR / "static").
> This seemed to fix the error.
0easy
Title: Unformatted help text pops up when peers for instances are changed Body: ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `[email protected]` instead.)
### Bug Summary
The following unformatted help text (it still contains the `{0}` and `{1}` placeholders) pops up in the top-right corner when I change peers for any instance.
> Peers update on {0}. Please be sure to run the install bundle for {1} again in order to see changes take effect.
![image](https://github.com/ansible/awx/assets/2920259/91cb7395-d965-42b0-a93a-0d3b99c80f86)
### AWX version
23.8.1
### Select the relevant components
- [X] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
N/A
### Operating system
CentOS Stream 8
### Web browser
Chrome
### Steps to reproduce
The unformatted help text pops up at steps 4 and 5.
1. Add `hop01.example.com` as a new hop node
2. Add `exec01.example.com` as a new execution node, with port `27199`
3. Open `hop01.example.com` and move to the `Peers` tab
4. Associate `exec01.example.com` as a peer
5. Remove `exec01.example.com` from peers
### Expected results
The help text should be
> Peers update on hop01.example.com. Please be sure to run the install bundle for hop01.example.com again in order to see changes take effect.
### Actual results
The help text contains `{0}` and `{1}`.
> Peers update on {0}. Please be sure to run the install bundle for {1} again in order to see changes take effect.
### Additional information
_No response_
0easy
Title: Not getting more than 256 clusters Body: Hello, I have a data frame with more than 11k records, and I know that there are at least 700 unique rows. But I am still getting only 256 clusters despite passing the argument n_clusters=500. What could be the issue? Looking at the kmodes code, I suspect the issue is in the function get_unique_rows; there is only one place where n_clusters gets reassigned in the code. Regards, Nilkesh
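To rule out a data issue before digging into `get_unique_rows`, an independent count of unique rows (a quick pandas sketch; `data.csv` stands in for the real input file):
```python
import pandas as pd

df = pd.read_csv("data.csv")  # placeholder for the actual 11k-record frame
n_unique = df.drop_duplicates().shape[0]
print(f"{n_unique} unique rows out of {len(df)}")
```
If this confirms 700+ unique rows while kmodes still caps out at 256, the suspicion about `get_unique_rows` (or how its result feeds `n_clusters`) looks well-founded.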
0easy
Title: Enhance recursion detection Body: It is easy to create infinite recursion with Robot:
```robotframework
*** Test Cases ***
Example
    Recursion

*** Keywords ***
Recursion
    Recursion
```
To prevent recursive execution from failing on the Python side with a `RecursionError` in an unexpected location, which typically breaks output.xml, we have a hard limit of 100 recursive keywords or control structures. The limit was added in RF 2.7 (#551) and it was initially 42, but it was raised to 100 in RF 5.0 (#4191) when we didn't need to think about Jython anymore. A recent change to listeners (#5268) allowed listeners to get notifications from actions they initiated, which may cause recursion with listeners. For example, this one keeps calling `Log` forever (i.e. until recursion is forcefully stopped):
```python
from robot.libraries.BuiltIn import BuiltIn

def start_keyword(data, result):
    BuiltIn().run_keyword('Log', 'Recursion!')
```
Interestingly, in this usage the current mechanism to detect recursion doesn't work. As a result, execution using the above listener fails so that output.xml is corrupted. The reason seems to be that more Python stack frames are used when listeners are involved, and a simple fix would be lowering the current limit of 100 started keywords or control structures to something like 75. I guess that would be fine, but there could be someone who's affected by the change. An alternative fix is coming up with a better way to detect recursion. Instead of having a hard limit of started keywords or control structures, we could check whether we are close to Python's recursion limit. That was actually mentioned as an option already in #551, but as [mentioned in a comment](https://github.com/robotframework/robotframework/issues/551#issuecomment-47478924), using `len(inspect.stack())` would be really slow. Good news is that I found a more performant way to do that using [sys._getframe](https://docs.python.org/3/library/sys.html#sys._getframe). `sys._getframe` isn't guaranteed to exist on all Python implementations, but it exists at least in PyPy and it's easy to handle it not existing in others.
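A minimal sketch of that idea (mine, not the actual Robot Framework patch): count frames by walking the `f_back` links, which avoids building the full `inspect.stack()` records:
```python
import sys

def frame_depth() -> int:
    """Number of Python frames on the current stack."""
    depth = 0
    frame = sys._getframe()  # CPython/PyPy; may be missing on other implementations
    while frame is not None:
        depth += 1
        frame = frame.f_back
    return depth

def near_recursion_limit(margin: int = 100) -> bool:
    """True when fewer than `margin` frames remain before RecursionError."""
    return frame_depth() > sys.getrecursionlimit() - margin
```
Since only frame pointers are followed and no frame info objects are created, this should be cheap enough to call on every keyword start.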
0easy
Title: Consider showing an error when the parameters cell defines other variables than upstream and product Body: It might indicate a missing cell separator; those lines would then run before the injected parameters and might break execution. The problem is that other variables might legitimately be present if the Task receives non-empty `param`, especially if static analysis is on.
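One way such a check could look (an illustrative sketch, not Ploomber's actual implementation): parse the cell and flag any assigned names beyond the two expected ones.
```python
import ast

ALLOWED = {"upstream", "product"}

def extra_assignments(cell_source: str) -> set[str]:
    """Names assigned in a parameters cell other than upstream/product."""
    assigned = set()
    for node in ast.walk(ast.parse(cell_source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    assigned.add(target.id)
    return assigned - ALLOWED
```
Names coming from non-empty `param` would have to be whitelisted on top of `ALLOWED`, which is exactly the complication described above.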
0easy
Title: Ranking task UI improvement Body:
1. Add an option to toggle between vertical and horizontal display of messages.
2. Add an option to remove the message content limit.
3. Add a slider from 1 to n messages per row when messages are displayed horizontally; the default is n.
4. Store 1 and 2 in local storage.
0easy
Title: The problem of SiamMask training Body: Hi, when I try to train SiamMask, I get the following error:
```
File "train_pysot.py", line 106, in build_opt_lr
    'lr': cfg.TRAIN.LR.BASE_LR}]
File "/home/wqq/anaconda3/envs/pysot/lib/python3.7/site-packages/yacs/config.py", line 141, in __getattr__
    raise AttributeError(name)
AttributeError: BASE_LR
```
It seems that there is something wrong with the config.yaml file in experiments/siammask_r50_l3. Any suggestions? Thanks.
0easy
Title: Code quality: Resolve all the issues shown by trunk check Body: If you run `trunk check -a`, there are several issues to resolve.
0easy
Title: Replace `git checkout` with `git switch` where possible Body: ### Description: A minor suggestion for a good first issue, which might improve our documentation from the perspective of contributors who are new to git. **If you are a new contributor looking for a good first issue**: if this issue is more than a few days old and there hasn't been disagreement, please feel welcome to go ahead and [create a pull request](https://scikit-image.org/docs/dev/development/contribute.html). The task:
- replace every occurrence of `git checkout <existing-branch>` with `git switch <existing-branch>`
- replace every occurrence of `git checkout -b <new-branch>` with `git switch -c <new-branch>`
in our documentation (files ending in `.md`, `.rst`, and `.txt`). Don't hesitate to reach out beforehand if you have questions. :)
0easy
Title: [UX] Skip `STOPPED` cluster when calling `sky stop` Body: <!-- Describe the bug report / feature request here --> We should filter out those clusters in `STOPPED` status when calling `sky stop`. e.g. for the example below, `sky stop -a` should only stop 2 clusters `lmf-l4-4-machine-image` and `lmf-gcp-l4-4`.
```bash
$ sky status
Clusters
NAME                    LAUNCHED     RESOURCES                                                                  STATUS   AUTOSTOP  COMMAND
lmf-l4-4-machine-image  14 mins ago  1x GCP(g2-standard-48, {'L4': 4}, image_id={'us-east4': 'projects/skyp...  UP       -         sky launch -c lmf-l4-4-ma...
lmf-gcp-l4-4            31 mins ago  1x GCP(g2-standard-48, {'L4': 4})                                          UP       -         sky launch -c lmf-gcp-l4-4...
lmf-machine-image       2 hrs ago    1x GCP(g2-standard-24, {'L4': 2}, image_id={'us-east4': 'projects/skyp...  STOPPED  -         sky launch -c lmf-machine...
lmf-1-epoch             4 hrs ago    1x GCP(g2-standard-24, {'L4': 2})                                          STOPPED  -         sky launch -c lmf-1-epoch...
lmf                     18 hrs ago   1x GCP(g2-standard-24, {'L4': 2})                                          STOPPED  -         sky start lmf
sky-344a-txia           1 month ago  1x Azure(Standard_NV18ads_A10_v5, {'A10': 0.5})                            STOPPED  -         sky exec sky-344a-txia sl...

Managed jobs
No in-progress managed jobs. (See: sky jobs -h)

Services
No live services. (See: sky serve -h)

$ sky stop -a
Stopping 6 clusters: lmf-l4-4-machine-image, lmf-gcp-l4-4, lmf-machine-image, lmf-1-epoch, lmf, sky-344a-txia. Proceed? [Y/n]:
```
0easy
Title: How to get unordered results when using `async with`? Body: ### Description I have a lot of async jobs (>10000). I want to process results as soon as they are available, so I am using:
```python
async for result in pool.map(run_one_job, jobs_data):
```
However, some of them take way more time than others. So it happens that many results are already ready, but the above `for` loop is blocked and is waiting because it can only give results in order. I would like to have something like `.map_unordered()`. It seems to me that it is currently impossible to do it, and it would require implementing this functionality in `aiomultiprocess`. ### Details
* OS: Ubuntu on Win10 WSL
* Python version: 3.9
* aiomultiprocess version: 0.9.0
* Can you repro on master? ---
* Can you repro in a clean virtualenv? ---
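In the meantime, a workaround sketch using `Pool.apply` (which schedules one job and returns an awaitable; signature assumed from the 0.9.x docs) plus `asyncio.as_completed` to consume results in completion order:
```python
import asyncio
from aiomultiprocess import Pool

async def run_one_job(job):
    await asyncio.sleep(job % 3)  # stand-in for real work of varying duration
    return job

async def main(jobs_data):
    async with Pool() as pool:
        pending = [pool.apply(run_one_job, args=(job,)) for job in jobs_data]
        for next_done in asyncio.as_completed(pending):
            result = await next_done  # arrives in completion order, not input order
            print(result)

asyncio.run(main(range(10)))
```
This loses `map`'s batching, so for >10000 jobs it may need chunking, but it avoids the head-of-line blocking described above.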
0easy
Title: Support Resistance Body: **Which version are you running? The latest version is on GitHub. Pip is for major releases.** Pandas TA version: 0.2.42b0 I am using an indicator for automatically drawing support and resistance levels. It's working well. Can you add it to pandas-ta? I am pasting the Pine Script code below.
```javascript
// ---------------------------------------------------------------------------------------------------
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
//@version=4
study("Support/Resistance", shorttitle="S/R", overlay=true, scale = scale.right, linktoseries = true)
line_width = input(4, type = input.integer, title="SR Level line Width")
level_min_lengh = input(4, type = input.integer, title="Set minimum number of bars from level start to qualify a level")
y = input("Orange", "Line Color", options=["Red", "Lime", "Orange", "Teal", "Yellow", "White", "Black"])
line_extend = input(false, type = input.bool, title = "Extend Level line Right") ? extend.right : extend.none
sr_tf = input("", type = input.resolution, title="SR Timeframe (Beta)")

//color function
colour(z) => z=="Red"?color.red:z=="Lime"?color.lime:z=="Orange"?color.orange:z=="Teal"? color.teal:z=="Yellow"?color.yellow:z=="Black"?color.black:color.white

//Legacy RSI calc
rsi_src = close, len = 9
up1 = rma(max(change(rsi_src), 0), len)
down1 = rma(-min(change(rsi_src), 0), len)
legacy_rsi = down1 == 0 ? 100 : up1 == 0 ? 0 : 100 - (100 / (1 + up1 / down1))

//CMO based on HMA
length = 1
src1 = hma(open, 5)[1] // legacy hma(5) calculation gives a result with one candle shift, thus use hma()[1]
src2 = hma(close, 12)
momm1 = change(src1) // Difference between current value and previous, x - x[y] (source series - length integer)
momm2 = change(src2)
f1(m, n) => m >= n ? m : 0.0
f2(m, n) => m >= n ? 0.0 : -m
m1 = f1(momm1, momm2)
m2 = f2(momm1, momm2)
sm1 = sum(m1, length)
sm2 = sum(m2, length)
percent(nom, div) => 100 * nom / div
cmo_new = percent(sm1-sm2, sm1+sm2)

//Legacy Close Pivots calcs.
len5 = 2
h = highest(len5)
h1 = dev(h, len5) ? na : h
hpivot = fixnan(h1)
l = lowest(len5)
l1 = dev(l, len5) ? na : l
lpivot = fixnan(l1)

//Calc Values
rsi_new = rsi(close,9)
lpivot_new = lpivot // use legacy pivots calculation as integrated pivotlow/pivothigh functions give very different result
hpivot_new = hpivot
sup = rsi_new < 25 and cmo_new > 50 and lpivot_new
res = rsi_new > 75 and cmo_new < -50 and hpivot_new
calcXup() =>
    var xup = 0.0
    xup := sup ? low : xup[1]
calcXdown() =>
    var xdown = 0.0
    xdown := res ? high : xdown[1]

//Lines drawing variables
tf1 = security(syminfo.tickerid, sr_tf, calcXup(), lookahead=barmerge.lookahead_on)
tf2 = security(syminfo.tickerid, sr_tf, calcXdown(), lookahead=barmerge.lookahead_on)

//SR Line plotting
var tf1_line = line.new(0, 0, 0, 0)
var tf1_bi_start = 0
var tf1_bi_end = 0
tf1_bi_start := change(tf1) ? bar_index : tf1_bi_start[1]
tf1_bi_end := change(tf1) ? tf1_bi_start : bar_index
if change(tf1)
    if (line.get_x2(tf1_line) - line.get_x1(tf1_line)) < level_min_lengh
        line.delete(tf1_line)
    tf1_line := line.new(tf1_bi_start, tf1, tf1_bi_end, tf1, color = colour(y), width = line_width, extend = line_extend)
line.set_x2(tf1_line, tf1_bi_end)

var tf2_line = line.new(0, 0, 0, 0)
var tf2_bi_start = 0
var tf2_bi_end = 0
tf2_bi_start := change(tf2) ? bar_index : tf2_bi_start[1]
tf2_bi_end := change(tf2) ? tf2_bi_start : bar_index
if change(tf2)
    if (line.get_x2(tf2_line) - line.get_x1(tf2_line)) < level_min_lengh
        line.delete(tf2_line)
    tf2_line := line.new(tf2_bi_start, tf2, tf2_bi_end, tf2, color = colour(y), width = line_width, extend = line_extend)
line.set_x2(tf2_line, tf2_bi_end)

alertcondition(change(tf1) != 0 or change(tf2) != 0 , message = "New S/R line" )
```
0easy
Title: TypeError: isinstance() argument 2 cannot be a parameterized generic Body: ## Issue
Stacktrace with tox when loading the plugin configuration. Based on the error displayed, I believe this is not the plugin's fault. The loaded configuration option is a list, like below:
```
[ansible]
skip =
    py3.7
    py3.8
    2.9
    2.10
    2.11
    2.12
    2.13
```
I am still trying to fully understand this Python typing issue in order to know how to avoid it.
## Environment
Provide at least:
- OS:
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>

```console
ROOT: Using a default tox.ini file with tox-ansible plugin is not recommended. Consider using a tox-ansible.ini file and specify it on the command line (`tox --ansible -c tox-ansible.ini`) to avoid unintentionally overriding the tox-ansible environment configurations.
Traceback (most recent call last):
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/bin/tox", line 8, in <module>
    sys.exit(run())
             ^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/run.py", line 20, in run
    result = main(sys.argv[1:] if args is None else args)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/run.py", line 42, in main
    result = provision(state)
             ^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/provision.py", line 86, in provision
    MANAGER.tox_add_core_config(state.conf.core, state)
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/plugin/manager.py", line 79, in tox_add_core_config
    self.manager.hook.tox_add_core_config(core_conf=core_conf, state=state)
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/pluggy/_hooks.py", line 513, in __call__
    return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/pluggy/_manager.py", line 120, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/pluggy/_callers.py", line 139, in _multicall
    raise exception.with_traceback(exception.__traceback__)
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/pluggy/_callers.py", line 103, in _multicall
    res = hook_impl.function(*args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/c/a/tox-ansible/src/tox_ansible/plugin.py", line 168, in tox_add_core_config
    env_list = add_ansible_matrix(state)
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/c/a/tox-ansible/src/tox_ansible/plugin.py", line 266, in add_ansible_matrix
    env for env in env_list.envs if all(skip not in env for skip in ansible_config["skip"])
                                                                    ~~~~~~~~~~~~~~^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/config/sets.py", line 116, in __getitem__
    return self.load(item)
           ^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/config/sets.py", line 127, in load
    return config_definition.__call__(self._conf, self.loaders, ConfigLoadArgs(chain, self.name, self.env_name))  # noqa: PLC2801
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/config/of_type.py", line 103, in __call__
    value = loader.load(key, self.of_type, self.factory, conf, args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/config/loader/api.py", line 144, in load
    converted = self.build(key, of_type, factory, conf, raw, args)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/config/loader/ini/__init__.py", line 85, in build
    converted = self.to(prepared, of_type, factory)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ssbarnea/.asdf/installs/python/3.12.3/lib/python3.12/site-packages/tox/config/loader/convert.py", line 42, in to
    if isinstance(raw, of_type):  # already target type no need to transform it
       ^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: isinstance() argument 2 cannot be a parameterized generic
```
</details>

## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>

```console
```
</details>

## Minimal example
<!-- If possible, provide a minimal reproducer for the issue. -->
```console
```
https://github.com/ansible-collections/cisco.nxos/actions/runs/9277917412/job/25528009504
0easy
Title: Update sphinx and the related packages Body: **Description of the issue** **Problem:** The current requirements pin sphinx to version `3.2.*` which is too old for recent releases of its alabaster theme package - see the CI error [here](https://github.com/quantumlib/Cirq/actions/runs/7452039565/job/20274451752?pr=6399#step:5:15) **Solution:** Update sphinx and any dependent packages to their recent versions. Sphinx is currently at version 7.2.6, https://pypi.org/project/Sphinx/#history. **Cirq version** 1.4.0.dev at d33b1a71ac9721e762a2fa1b9e9871edfe187a3c.
0easy
Title: Oauth2, FastAPI, and Solara Body: I did my best to search the docs, but I wasn't able to find an answer. I want to implement Solara in a FastAPI application. I also want to use Auth0 in the Solara application. I am admittedly not super knowledgeable about auth. I tried following the docs on Auth0, and I cannot seem to get it to work. I suspect it is because `get_login_url` is effectively hard-coding the login URL. This means that if I mount Solara on a different path, then the auth endpoint seems like it might not work. Is this correct? Is there a better workaround for this? Also, love the project. I have built APIs in the past, and I have always thought a good Python frontend framework was lacking. I was considering using Anvil, but I hate using anything that isn't open source and free to use. Keep up the good work.
0easy
Title: Remove an API call for drawing within chats Body: Currently, if you converse with GPT Vision, it will always interpret whether you want to draw something using a separate API call. This second API call is costly, so we want to be able to turn it off; better yet, we want to remove the second API call entirely. I was a bit (lot) stupid when designing this initially. We can avoid the second API call by putting some instructions in the system pretext that ask the model to use some special syntax to denote a drawing prompt when it responds to a user, if it picks up on intent to draw; we would then just kick off the drawing from there onward, removing the API call entirely. This also increases conversation speed. All the scaffolding for drawing is already in the code. What needs to be done is:
- the LLM API call that evaluates the last few messages of history needs to be removed
- some pretext needs to be created that can get GPT to respond with some sort of syntax when it picks up a user's intent to draw
For example, you can try something like:
```
If the user wants to draw something, retain a prompt for what the user wants to draw such that it can go into a generator such as DALL-E, and after responding to the user message, type the prompt for what to draw within a special sequence of characters: #%^c, for example, #%^a dog#%^
```
This drawing syntax pretext would have to be appended programmatically, and not just in the main text of the pretext, because it should work for other openers too. There should be some sort of recurring system message every X messages that acts as a reminder to do this identification of drawing intent, and in the future we can expand our agent's capabilities further on this basis. There are some drawbacks to this: the independent evaluator system should theoretically be more accurate, as it is more focused without the entire conversation context.
0easy
Title: on_delete should be required Body: **Is your feature request related to a problem? Please describe.** I noticed that `ForeignKeyField` declares `on_delete = CASCADE` as a default. I believe this is dangerous and in some scenarios may lead to undesirable consequences.
```
def ForeignKeyField(
    model_name: str,
    related_name: Union[Optional[str], Literal[False]] = None,
    on_delete: OnDelete = CASCADE,
    db_constraint: bool = True,
    null: bool = False,
    **kwargs: Any,
) -> "ForeignKeyRelation[MODEL] | ForeignKeyNullableRelation[MODEL]":
```
**Describe the solution you'd like** The best solution would be to make `on_delete` a mandatory parameter; I believe that actions on data should be explicit.
```
def ForeignKeyField(
    model_name: str,
    on_delete: OnDelete,
    related_name: Union[Optional[str], Literal[False]] = None,
    db_constraint: bool = True,
    null: bool = False,
    **kwargs: Any,
) -> "ForeignKeyRelation[MODEL] | ForeignKeyNullableRelation[MODEL]":
```
0easy
Title: Improve existing unit tests Body:
* Improve efficiency: Many of the existing unit tests (in cleanlab/tests/) can be made more efficient without fundamentally changing the quality of the test. We recommend timing the unit tests to identify which ones take the longest and focusing on those. Usually these will be tests that rely on some toy dataset. One way to speed such tests up is to reduce the size of the dataset, but be careful that the test remains useful and still checks for the key properties it is assessing!
* Improve quality of checks: Some of the existing unit tests only verify certain code runs properly. These tests can be improved to verify the outputs from the code match the expected outputs (both in terms of object characteristics like array shape or Python type, as well as the values themselves).
0easy
Title: Implement `range_color` Body: ## Add support for `range_color` For example, if a color range of `[100, 200]` is used with the `["blue", "green", "red"]` color scale, then any instance with a color value of 100 or less will be blue, and 200 or more will be red. Anything in between will be interpolated as usual. Ref: https://plotly.com/python/colorscales/#explicitly-setting-a-color-range
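A small sketch of the clamping semantics described above (plain Python, independent of any plotting library's actual implementation):
```python
def normalize(value: float, range_color: tuple[float, float]) -> float:
    """Map a value onto [0, 1] against an explicit color range, clamping outliers."""
    lo, hi = range_color
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

# With range_color=(100, 200): below-range values clamp to 0 (first color),
# above-range values clamp to 1 (last color), in-between values interpolate.
for v in (90, 100, 150, 200, 230):
    print(v, normalize(v, (100, 200)))
```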
0easy