text | labels
---|---
Title: Need error messages and documentation that you must "fit" before convert
Body: In issue #422/#423, users brought up that it's not clear from the error messages that you must fit before convert for most models.
We suspect that with KNN conversion might also work even if the model is not trained, but in general (e.g., with RandomForests) this won't work.
We need help documenting this, and also generating proper error messages. (You can see an example of an unhelpful error message in #423.) | 0easy
|
Title: Add the ability to test and sort timestamps to be monotonic in a pandas data frame
Body: # Brief Description
Following up on #703, this issue seeks to introduce the ability to sort the timestamps in a pandas data frame monotonically
I would like to propose...
# Example API
```python
def _test_for_monotonicity(
df: pd.DataFrame,
column_name: str = None,
direction: str = 'increasing'
) -> bool:
"""
Tests input data frame for monotonicity.
Check if the data is monotonically increasing or decreasing.
Direction is dependent on user input.
Defaults to increasing
:param df: data frame to be tested for monotonicity
:param column_name: needs to be specified if and only if the date time is not in index.
Defaults to None.
:param direction: specifies the direction in which monotonicity is being tested for.
Defaults to 'increasing'
:return: single boolean flag indicating whether the test has passed or not
"""
def sort_monotonically(
df: pd.DataFrame,
column_name: str = None,
direction: str = 'increasing'
) -> pd.DataFrame:
"""
Sorts data frame monotonically.
It assumes the data frame has an index of type pd.DateTimeIndex when index is datetime.
If datetime is in a column, then the column is expected to be of type pd.Timestamp
:param df: data frame to sort monotonically
:param column_name: needs to be specified if and only if the date time is not in index.
Defaults to None
:param direction: specifies the direction in which monotonicity is being tested for.
Defaults to 'increasing'
:return: data frame with its index sorted
"""
# more examples below
# ...
```
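For reference, a minimal sketch of how these helpers might be implemented on top of pandas' built-in monotonicity checks (the function bodies below are assumptions, not part of the proposal):
```python
import pandas as pd

def _test_for_monotonicity(df, column_name=None, direction='increasing'):
    # Use the index when no column is given, mirroring the proposed API.
    series = df[column_name] if column_name is not None else df.index.to_series()
    if direction == 'increasing':
        return series.is_monotonic_increasing
    return series.is_monotonic_decreasing

def sort_monotonically(df, column_name=None, direction='increasing'):
    ascending = direction == 'increasing'
    if column_name is not None:
        return df.sort_values(column_name, ascending=ascending)
    return df.sort_index(ascending=ascending)
```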
| 0easy
|
Title: BPE's default alpha with sentencepiece
Body: ## Description
As BPE-dropout was recently added to sentencepiece, tokenization can be sampling-based.
`alpha=1` is meant for training-time regularization, not for inference.
The default `alpha=1` isn't appropriate because most users, and the models provided by gluonnlp, expect deterministic tokenization.
https://github.com/google/sentencepiece/issues/371
## To Reproduce
```
path = gluon.utils.download('https://kobert.blob.core.windows.net/models/kogpt2/tokenizer/kogpt2_news_wiki_ko_cased_818bfa919d.spiece')
tok = nlp.data.SentencepieceTokenizer(path)
tok('안녕하세요.')
['▁', '안', '녕', '하', '세', '요', '.']
tok = nlp.data.SentencepieceTokenizer(path, 0, 0.5)
tok('안녕하세요.')
['▁', '안', '녕', '하', '세요', '.']
tok('안녕하세요.')
['▁안', '녕', '하', '세요', '.']
tok('안녕하세요.')
['▁안녕', '하', '세요', '.']
tok('안녕하세요.')
tok = nlp.data.SentencepieceTokenizer(path, num_best=0, alpha=0)
tok('안녕하세요.')
['▁안녕하세요', '.']
``` | 0easy
|
Title: Codecov CI job for this project hangs
Body: Starting this month, the codecov CI job never succeeds. I am still unsure about the root cause of it but it seems that GitHub Actions runtime is unable to complete all the tests for some reason. Example: https://github.com/slackapi/bolt-python/actions/runs/4805729060/jobs/8552434332
Since the unit test execution job is still working, let me disable the codecov job for now. We will look into it later.
| 0easy
|
Title: Branch Lifecycle metric API
Body: The canonical definition is here: https://chaoss.community/?p=3590 | 0easy
|
Title: [Type 3] Don't let signin requests be sent if they use http when server expects https
Body: We very often see users getting an inexplicable error because they entered an http URL and the server expected https: 405 Method Not Allowed (https://tableau.github.io/server-client-python/docs/sign-in-out#405000-method-not-allowed-response)
Instead, the client library should ping the unauthenticated endpoint and check whether SSL is enabled first, and then switch to SSL if supported before sending the initiating request. Don't even allow someone to send http if https is enabled. (I think Tableau Desktop already does that) | 0easy
|
Title: Update your project(s) building on CircleCI 1.0 with this configuration tool
Body: We noticed that the projects listed below are building on CircleCI 1.0 without a configuration file.
https://circleci.com/gh/manahl/arctic
As a result of sunsetting CircleCI 1.0 on August 31st, 2018, the project(s) listed above require action from your team in order to continue building on CircleCI.
CircleCI 2.0 delivers a number of improvements, but has changed our configuration approach to be more deterministic. However, CircleCI 2.0 does not support projects without a configuration file. This means that all projects will now need a .circleci/config.yml file to continue building.
To create your configuration, you have two options:
Option 1: Manual setup:
If you prefer to create your configuration file yourself, see our documentation on configuring CircleCI 2.0 for your project.
Option 2: Generated setup:
Generate a CircleCI 2.0 configuration file based on the recent build history of your project using our configuration generator script. The tool should explicitly output a configuration similar to what our system was previously implicitly doing. While not always perfect, this will give you a starting point from which to optimize and customize to your build.
Once you've completed all the steps, you should see a new job in CircleCI on the `circleci-20-test` branch that runs the generated configuration. If you run into problems please submit a support ticket here.
Note: If you've already migrated any of the above projects to CircleCI 2.0, disregard this message and happy building!
Thanks,
The CircleCI Team
| 0easy
|
Title: Update `pre-commit-config.yaml` to enforce `CHANGELOG.md` is formatted
Body: ### Summary
```diff
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 693a051365..66db6eaf25 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -10,7 +10,7 @@ repos:
rev: "v2.7.1"
hooks:
- id: prettier
- files: '^(?!CHANGELOG.md|mlflow/pypi_package_index.json|docs/|mlflow/server/js/).+\.(js|md|json|ya?ml)$'
+ files: '^(?!mlflow/pypi_package_index.json|docs/|mlflow/server/js/).+\.(js|md|json|ya?ml)$'
args: ["--no-config", "--print-width", "100"]
require_serial: true
- repo: local
```
### Notes
- Make sure to open a PR from a **non-master** branch.
- Sign off the commit using the `-s` flag when making a commit:
```sh
git commit -s -m "..."
# ^^ make sure to use this
```
- Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
| 0easy
|
Title: Goin' Fast URL IPv6 address is not bracketed
Body: Sanic says:
```
sanic myprogram.app -H ::
Goin' Fast @ http://:::8000
```
The correct formatting for IPv6 would be:
```
Goin' Fast @ http://[::]:8000
```
Fixing the Goin' fast banner in `sanic/app.py` would be an easy enough task for someone wishing to start hacking Sanic. Existing code from `sanic/models/server_types.py` class `ConnInfo` could be useful, as there already is handling for adding brackets to IPv6 addresses.
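For anyone picking this up, a minimal sketch of the bracketing logic (illustrative only; not Sanic's actual code):
```python
def format_serve_location(host: str, port: int) -> str:
    # A bare IPv6 address contains colons and must be bracketed in URLs.
    if ":" in host:
        host = f"[{host}]"
    return f"http://{host}:{port}"

assert format_serve_location("::", 8000) == "http://[::]:8000"
assert format_serve_location("127.0.0.1", 8000) == "http://127.0.0.1:8000"
```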
| 0easy
|
Title: i18n babel can not extract lazy text strings correct
Body: ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
Any
### Python version
3.10
### aiogram version
3.3.0
### Expected behavior
I made an example according to this document: https://docs.aiogram.dev/en/dev-3.x/utils/i18n.html
```
...
from aiogram.utils.i18n import gettext as _
from aiogram.utils.i18n import lazy_gettext as __
from aiogram.utils.i18n import I18n, ConstI18nMiddleware
...
@dp.message(F.text == __("Start")) # lazy text
async def handler_1(message: Message) -> None:
await message.answer(_("Welcome, {name}!").format(name=html.quote(message.from_user.full_name)))
await message.answer(_("How many messages do you have? Input number, please:"))
@dp.message(F.text)
async def handler_2(message: Message) -> None:
# this is plural and wrapped by double underscore
await message.answer(__("You have {} message!", "You have {} messages!", 2).format(message.text))
...
def main() -> None:
...
i18n = I18n(path="locales", default_locale="en", domain="my-super-bot")
dp.message.outer_middleware(ConstI18nMiddleware(locale='en', i18n=i18n))
...
```
I create lazy text and gettext-wrapped strings.
I also add plural forms.
Extracting strings using Babel should return these strings in the `.pot` template file:
```
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.13.1\n"
#: lesson1.py:12
msgid "Start"
msgstr ""
#: lesson1.py:14
msgid "Welcome, {name}!"
msgstr ""
#: lesson1.py:15
msgid "How many messages do you have? Input number, please:"
msgstr ""
#: lesson1.py:19
msgid "You have {} message!"
msgid_plural "You have {} messages!"
msgstr[0] ""
msgstr[1] ""
```
### Current behavior
After extraction we lose some strings.
Babel command
```
pybabel extract -o locales/messages.pot --keywords="__:1,2" --input-dirs=.
```
extracts either this:
```
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.13.1\n"
<---- So here we lose the lazy text string "Start", see expected behavior.
#: lesson1.py:14
msgid "Welcome, {name}!" # regular gettext
msgstr ""
#: lesson1.py:15
msgid "How many messages do you have? Input number, please:" # regular gettext
msgstr ""
#: lesson1.py:19
msgid "You have {} message!" # alias ngettext as gettext in \aiogram\utils\i18n\context.py
msgid_plural "You have {} messages!"
msgstr[0] ""
msgstr[1] ""
```
or
```
pybabel extract -o locales/messages.pot --keywords="__" --input-dirs=.
```
extracts that:
```
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.13.1\n"
#: lesson1.py:12
msgid "Start"
msgstr ""
#: lesson1.py:14
msgid "Welcome, {name}!"
msgstr ""
#: lesson1.py:15
msgid "How many messages do you have? Input number, please:"
msgstr ""
#: lesson1.py:19
msgid "You have {} message!"
msgstr ""
<---- Here we lose the plural forms, see expected behavior.
```
Here we lose the plural forms.
Either this or that, but not all together.
The use of the keys `__:1,2` or `_:1,2`, `_`, `__` and their various combinations leads to an even more disastrous result.
### Steps to reproduce
`pybabel extract -o locales/messages.pot --input-dirs=.`
`pybabel extract -o locales/messages.pot --keywords="__" --input-dirs=.`
`pybabel extract -o locales/messages.pot --keywords="__:1,2" --input-dirs=.`
`pybabel extract -o locales/messages.pot --keywords="__ __:1,2"--input-dirs=.`
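One possible workaround (an assumption based on Babel's keyword-spec syntax, not verified against this exact setup): Babel keyword specs support a total-argument-count suffix `t`, so the same `__` keyword can be extracted as singular when called with one argument and as plural when called with three:
```
pybabel extract -o locales/messages.pot -k "__:1,1t" -k "__:1,2,3t" --input-dirs=.
```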
### Code example
```python3
from aiogram import Bot, Dispatcher, F, html
from aiogram.types import Message
from aiogram.utils.i18n import gettext as _
from aiogram.utils.i18n import lazy_gettext as __
from aiogram.utils.i18n import I18n, ConstI18nMiddleware
TOKEN = "token"
dp = Dispatcher()
@dp.message(F.text == __("Start"))
async def handler_1(message: Message) -> None:
await message.answer(_("Welcome, {name}!").format(name=html.quote(message.from_user.full_name)))
await message.answer(_("How many messages do you have? Input number, please:"))
@dp.message(F.text)
async def handler_2(message: Message) -> None:
await message.answer(__("You have {} message!", "You have {} messages!", 2).format(message.text))
def main() -> None:
bot = Bot(TOKEN, parse_mode="HTML")
i18n = I18n(path="locales", default_locale="en", domain="my-super-bot")
dp.message.outer_middleware(ConstI18nMiddleware(locale='en', i18n=i18n))
dp.run_polling(bot)
if __name__ == "__main__":
main()
```
### Logs
_No response_
### Additional information
_No response_ | 0easy
|
Title: test_verbosity_guess_miss_match fails with sources from PyPI
Body: If you take tox sources from PyPI, there is no `tox.ini` file in them, which makes `test_verbosity_guess_miss_match` fail because tox complains about missing config file and therefore the output is not as expected.
The output is:
```
tests/config/cli/test_parse.py::test_verbosity_guess_miss_match FAILED
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
capsys = <_pytest.capture.CaptureFixture object at 0x7fc51dd0b350>
def test_verbosity_guess_miss_match(capsys: CaptureFixture) -> None:
result = get_options("-rv")
assert result.parsed.verbosity == 3
assert logging.getLogger().level == logging.INFO
for name in ("distlib.util", "filelock"):
logger = logging.getLogger(name)
assert logger.disabled
logging.error("E")
logging.warning("W")
logging.info("I")
logging.debug("D")
out, err = capsys.readouterr()
> assert out == "ROOT: E\nROOT: W\nROOT: I\n"
E AssertionError: assert 'ROOT: No tox... W\nROOT: I\n' == 'ROOT: E\nROOT: W\nROOT: I\n'
E + ROOT: No tox.ini or setup.cfg or pyproject.toml found, assuming empty tox.ini at /builddir/build/BUILD/tox-4.2.6
E ROOT: E
E ROOT: W
E ROOT: I
capsys = <_pytest.capture.CaptureFixture object at 0x7fc51dd0b350>
err = ''
logger = <Logger filelock (INFO)>
name = 'filelock'
out = 'ROOT: No tox.ini or setup.cfg or pyproject.toml found, assuming empty tox.ini at /builddir/build/BUILD/tox-4.2.6\nROOT: E\nROOT: W\nROOT: I\n'
result = Options(parsed=Parsed(colored='no', verbose=3, quiet=0, exit_and_dump_after=0, config_file=None, work_dir=None, root_d...acy': <function legacy at 0x7fc51df1e0c0>, 'le': <function legacy at 0x7fc51df1e0c0>}, log_handler=<ToxHandler (INFO)>)
tests/config/cli/test_parse.py:35: AssertionError
``` | 0easy
|
Title: `Lists Should Be Equal` does not work as expected with `ignore_case` and `ignore_order` arguments
Body: Looks like 'Lists Should Be Equal' does not work in one of the scenarios as mentioned below
```
${list1} Create List AbC AEF
${list2} Create List AEF ABC
Lists Should Be Equal ${list1} ${list2} ignore_case=${True} ignore_order=${True}
```
We expect this to pass since the lists are the same when ignoring the order and the case. However, it fails.
Can someone explain the reason behind it, please? Does it have something to do with ASCII codes?
I would be glad to provide additional details if required. Thanks | 0easy
|
Title: [BUG] Cms check show wrong error
Body:
## Description
The `cms check` command shows a wrong error mentioning `MIDDLEWARE_CLASSES` when a middleware path is misspelled.
## Steps to reproduce
First of:
```
Package Version
-------------------------------- -------
...
Django 5.0
django-cms 4.1.0
```
When you accidentally write the wrong Middleware like:
```
MIDDLEWARE = [
...
"django:django.middleware.locale.LocaleMiddleware", # not installed by default
...
]
```
instead of middleware:
`django.middleware.locale.LocaleMiddleware`
after that, the command:
`python manage.py cms check`
will show an error:
```
Middlewares
===========
- django.middleware.locale.LocaleMiddleware middleware must be in MIDDLEWARE_CLASSES [ERROR]
```
but this is incorrect, because `MIDDLEWARE_CLASSES` is deprecated in Django (the setting is now `MIDDLEWARE`)
## Expected behaviour
error should be:
```
Middlewares
===========
- django.middleware.locale.LocaleMiddleware middleware must be in MIDDLEWARE [ERROR]
```
## Actual behaviour
error:
```
Middlewares
===========
- django.middleware.locale.LocaleMiddleware middleware must be in MIDDLEWARE_CLASSES [ERROR]
```
## Screenshots
If wrong: (screenshots)
If not wrong: (screenshots)
## Additional information (CMS/Python/Django versions)
```
pip list
Package Version
-------------------------------- -------
annotated-types 0.6.0
asgiref 3.7.2
crispy-bootstrap5 2023.10
Django 5.0
django-admin-rangefilter 0.12.0
django-bootstrap-datepicker-plus 5.0.5
django-classy-tags 4.1.0
django-cleanup 8.0.0
django-cms 4.1.0
django-crispy-forms 2.1
django-debug-toolbar 4.3.0
django-formtools 2.5.1
django-sekizai 4.1.0
django-treebeard 4.7.1
djangocms-admin-style 3.3.0
packaging 23.2
Pillow 10.1.0
pip 22.0.2
psycopg 3.1.14
psycopg-binary 3.1.14
pydantic 2.6.0
pydantic_core 2.16.1
setuptools 59.6.0
sqlparse 0.4.4
typing_extensions 4.9.0
wheel 0.37.1
```
settings.py
```
INSTALLED_APPS = [
'e5_app',
"djangocms_admin_style",
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rangefilter',
'django_cleanup.apps.CleanupConfig',
'bootstrap_datepicker_plus',
'crispy_forms',
'crispy_bootstrap5',
'debug_toolbar',
"django.contrib.sites",
"cms",
"menus",
"treebeard",
"sekizai"
]
MIDDLEWARE = [
'cms.middleware.utils.ApphookReloadMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'debug_toolbar.middleware.DebugToolbarMiddleware',
"django.middleware.locale.LocaleMiddleware", # not installed by default
"cms.middleware.user.CurrentUserMiddleware",
"cms.middleware.page.CurrentPageMiddleware",
"cms.middleware.toolbar.ToolbarMiddleware",
"cms.middleware.language.LanguageCookieMiddleware",
]
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
"cms.context_processors.cms_settings",
"django.template.context_processors.i18n",
"sekizai.context_processors.sekizai",
],
},
},
]
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'e5_db',
'USER': 'django',
'PASSWORD': 'django',
'HOST': 'localhost',
'PORT': '5432',
}
}
LANGUAGES = [
("ru", "Russian"),
("en", "English"),
]
LANGUAGE_CODE = 'ru'
# For django-cms
SITE_ID = 1
CMS_CONFIRM_VERSION4 = True
X_FRAME_OPTIONS = "SAMEORIGIN"
```
## Do you want to help fix this issue?
* [ ] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [x] No, I only want to report the issue.
| 0easy
|
Title: Incorrect typehints in route shorthand methods
Body: **Describe the bug**
Route decorators in `sanic/mixins/routes.py` have incorrect typehints for the ` version` parameter.
**Expected**
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L58
**Actual**
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L194
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L259
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L296
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L332
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L367
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L411
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L456
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L501
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L538
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L579
| 0easy
|
Title: setup.py missing six dependency
Body: Commit dcfba0ea808dda8224d2f52e64f099a2bbf2b5f8 started using the six module, but setup.py wasn't changed to add six to install_requires. | 0easy
|
Title: [BUG-REPORT] str.replace(..., regex=True) : can't use capture group in repl str
Body: vaex.from_arrays(s=['a,b']).s.str.replace(r'(\w+)',r'--\g<1>==',regex=True)
When using a capture group in the `repl` string, it fails, while `str_pandas.replace()` handles it correctly.

Name: vaex
Version: 4.6.0
Summary: Out-of-Core DataFrames to visualize and explore big tabular datasets
Home-page: https://www.github.com/vaexio/vaex
Author: Maarten A. Breddels
Author-email: [email protected]
License: MIT
Location: /home/support/miniconda3/lib/python3.9/site-packages
Requires: vaex-astro, vaex-core, vaex-hdf5, vaex-jupyter, vaex-ml, vaex-server, vaex-viz
Required-by: | 0easy
|
Title: Add utility functions to get URLs of dashboards for specific KPIs
Body: ## Tell us about the problem you're trying to solve
Currently, URLs to KPIs are constructed directly in f-strings using the URL prefix. This leads to repetition of the same code across many files and any change in the URL needs to be changed at all places. For example: https://github.com/chaos-genius/chaos_genius/blob/04cfe2ee0e82ff627fa29f8f50ccc08b6ba52bef/chaos_genius/alerts/base_alerts.py#L693
## Describe the solution you'd like
- [ ] Create a new file `webapp_url.py` in `chaos_genius/utils/`
- [ ] Add function to get the anomaly dashboard URL given KPI ID and Dashboard ID. `def anomaly_dashboard_url(kpi_id: int, dashboard_id: int = 0):`
- [ ] Add function to get the DeepDrills dashboard URL given KPI ID and Dashboard ID. `def deepdrills_dashboard_url(kpi_id: int, dashboard_id: int = 0):`
- [ ] Add function to get dashboard URL given dashboard ID. `def dashboard_url(dashboard_id: int = 0):`
- [ ] Add function to get the anomaly edit URL given KPI ID and Dashboard ID. `def anomaly_edit_url(kpi_id: int, dashboard_id: int = 0):`
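A rough sketch of what `webapp_url.py` could contain (the settings value and URL paths below are assumptions for illustration, not the actual routes):
```python
# chaos_genius/utils/webapp_url.py (illustrative sketch)
from chaos_genius.settings import CHAOSGENIUS_WEBAPP_URL  # assumed settings value

def dashboard_url(dashboard_id: int = 0) -> str:
    return f"{CHAOSGENIUS_WEBAPP_URL}/dashboard/{dashboard_id}"

def anomaly_dashboard_url(kpi_id: int, dashboard_id: int = 0) -> str:
    return f"{CHAOSGENIUS_WEBAPP_URL}/dashboard/{dashboard_id}/anomalies/{kpi_id}"

def deepdrills_dashboard_url(kpi_id: int, dashboard_id: int = 0) -> str:
    return f"{CHAOSGENIUS_WEBAPP_URL}/dashboard/{dashboard_id}/deepdrills/{kpi_id}"

def anomaly_edit_url(kpi_id: int, dashboard_id: int = 0) -> str:
    return f"{anomaly_dashboard_url(kpi_id, dashboard_id)}/settings"
```
Centralizing the f-strings here means any future URL change only needs to be made in one place.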
## Describe alternatives you've considered
N/A
## Additional context
Note: Dashboard ID 0 has all KPIs by default. KPIs cannot be deleted from this dashboard.
---
Please leave a reply or reach out to us on our [community slack](https://github.com/chaos-genius/chaos_genius#octocat-community) if you need any help.
| 0easy
|
Title: Testing: self.resp_options.default_media_type
Body: Test that setting self.resp_options.default_media_type overrides the default media type passed into the falcon.API() and falcon.wsgi.App() initializers.
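A hedged sketch of such a test using the modern `falcon.App` (the resource and route are assumptions for illustration):
```python
import falcon
from falcon import testing

class Resource:
    def on_get(self, req, resp):
        resp.data = b'hello'  # no explicit content type set on the response

def test_resp_options_default_media_type_overrides_app_default():
    app = falcon.App(media_type=falcon.MEDIA_JSON)
    app.resp_options.default_media_type = falcon.MEDIA_TEXT
    app.add_route('/', Resource())
    result = testing.simulate_get(app, '/')
    assert result.headers['content-type'] == falcon.MEDIA_TEXT
``` | 0easy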
|
Title: %info ubuntu 22 no output
Body: ### Describe the bug

### Reproduce
interpreter -l -m openai/lmstudio
%info
### Expected behavior
Print out debug info
### Screenshots
_No response_
### Open Interpreter version
0.2.4
### Python version
3.10
### Operating System name and version
Ubuntu 22.04
### Additional context
_No response_ | 0easy
|
Title: Compute predictions from selected model
Body: Details in discussion https://github.com/mljar/mljar-supervised/discussions/421 | 0easy
|
Title: Marketplace - Change the margins from 64px to 25px between the divider line and the section title
Body:
### Describe your issue.
Change the margins from 64px to 25px between the divider line and the section title

| 0easy
|
Title: [NEW TRANSFORMER] exponential width discretiser
Body: When a variable is on a logarithmic scale, it might make sense to create the intervals on a log scale instead of a linear scale.
Quote:
> "When the numbers span multiple magnitudes, it may be better to group by powers of 10 (or powers of any constant): 0–9, 10–99, 100–999, 1000–9999, etc. The bin widths grow exponentially."
The idea is taken from: "Feature Engineering for Machine Learning", Alice Zheng, O'Reilly.
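A small illustrative sketch of computing exponential bin edges (an assumed approach, not the transformer's final API):
```python
import numpy as np

def exponential_bin_edges(x, base=10):
    # Edges at successive powers of `base` covering the data range,
    # e.g. 1, 10, 100, 1000, ... for base=10. Assumes strictly positive values.
    lo = np.floor(np.log(np.min(x)) / np.log(base))
    hi = np.ceil(np.log(np.max(x)) / np.log(base))
    return base ** np.arange(lo, hi + 1)

print(exponential_bin_edges([3, 50, 7000]))  # [1.e+00 1.e+01 1.e+02 1.e+03 1.e+04]
``` | 0easy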
|
Title: Feature Request: Improve Notes display
Body: Give alternate views (table/column etc), sort notes by date by default
| 0easy
|
Title: Update to newest version of webdataset
Body: | 0easy
|
Title: In Sample Program - Instantiation of Browser class should also include the browser name parameter
Body: **What is the issue?**
In https://splinter.readthedocs.io/en/latest/tutorial.html, when creating a browser instance we need to pass it a parameter, i.e. the name of the browser. But the ReadMe doesn't state anything about it. We hit this issue when the user has more than one browser installed.
**The error that we see if no parameter is passed during Browser instantiation is:**
```
Traceback (most recent call last):
File "./sample.py", line 6, in <module>
browser = Browser()
File "/usr/local/lib/python2.7/site-packages/splinter/browser.py", line 63, in Browser
return driver(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/splinter/driver/webdriver/firefox.py", line 39, in __init__
self.driver = Firefox(firefox_profile)
File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/firefox/webdriver.py", line 81, in __init__
self.binary, timeout)
File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/firefox/extension_connection.py", line 51, in __init__
self.binary.launch_browser(self.profile, timeout=timeout)
File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/firefox/firefox_binary.py", line 67, in launch_browser
......
```
**How to resolve this:**
It would be better to clearly state in line no. 3 of the sample program ( https://splinter.readthedocs.io/en/latest/tutorial.html ) that the user might need to pass the name of the browser as a parameter, something like this:
`browser = Browser('chrome')` or
`browser = Browser() # you might need to pass parameter(i.e your browser name) during Browser instance creation Browser('name of browser')`
I am reporting this because I recently started using the Splinter API and I found it hard to figure out why the sample program code was not running.
Hope this helps someone else as well 🙂
Thank you!
| 0easy
|
Title: ruff: adopt `D` rule for docstrings
Body: Since github.com/PyCQA/docformatter has a broken pre-commit configuration in the newer versions of the hardcoded rev we are using, we may migrate it to use ruff instead. Some changes/updates may be necessary; I am not expecting that only uncommenting (https://github.com/kornia/kornia/blob/52cee42a15f4828adf729e28931479ec5375ed20/pyproject.toml#L108) and removing the docformatter hook will just work.
Instead of fixing all our docs in one go, we should add ignores in the rules as well.
| 0easy
|
Title: [QUESTION] XGBModel fit method in Darts raises "sample_weight_eval_set's length does not equal eval_set's length" error when using val_sample_weight
Body: **Describe the issue linked to the documentation**
I'm encountering an issue with Darts' `XGBModel` when trying to fit it with `val_sample_weight` provided.
Specifically, when using `val_sample_weight`, I receive a `ValueError` indicating that **"sample_weight_eval_set's length does not equal eval_set's length"**, even though the shapes and lengths of the training and validation datasets appear to match.
Based on the [Darts documentation for XGBModel](https://unit8co.github.io/darts/generated_api/darts.models.forecasting.xgboost.html#darts.models.forecasting.xgboost.XGBModel.fit), the fit method accepts `series`, `past_covariates`, `sample_weight`, `val_series`, `val_past_covariates`, and `val_sample_weight`, among others. While `sample_weight `works fine, adding `val_sample_weight` consistently raises an error.
Here's a minimal code snippet that reproduces the issue (X is a pd.DataFrame that contains covariates, targets and weights, so we first split it into X_cov, y and w):
```python
from sklearn.model_selection import train_test_split
from darts import TimeSeries
from darts.models import XGBModel
y = X.filter(regex='^Targets_')
w = X.filter(regex='^Weights_')
X_cov = X.filter(regex='^Covariates_')
y_train, y_val = train_test_split(y, test_size=0.4, shuffle=False)
w_train, w_val = train_test_split(w, test_size=0.4, shuffle=False)
X_cov_train, X_cov_val = train_test_split(X_cov, test_size=0.4, shuffle=False)
y_train_ts = TimeSeries.from_times_and_values(y_train.index, y_train.values, columns=y_train.columns)
w_train_ts = TimeSeries.from_times_and_values(w_train.index, w_train.values, columns=w_train.columns)
X_cov_train_ts = TimeSeries.from_times_and_values(X_cov_train.index, X_cov_train.values, columns=X_cov_train.columns)
y_val_ts = TimeSeries.from_times_and_values(y_val.index, y_val.values, columns=y_val.columns)
w_val_ts = TimeSeries.from_times_and_values(w_val.index, w_val.values, columns=w_val.columns)
X_cov_val_ts = TimeSeries.from_times_and_values(X_cov_val.index, X_cov_val.values, columns=X_cov_val.columns)
xgb_model = XGBModel(
lags=16,
lags_past_covariates=16,
output_chunk_length=1,
objective="binary:logistic",
booster="gbtree",
)
xgb_model.fit(
y_train_ts,
past_covariates=X_cov_train_ts,
val_series=y_val_ts,
val_past_covariates=X_cov_val_ts,
sample_weight=w_train_ts,
val_sample_weight=w_val_ts,
)
```
The above code produces the following error :
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[10], line 15
12 w_val_ts = TimeSeries.from_times_and_values(w_val.index, w_val.values, columns=w_val.columns)
13 X_cov_val_ts = TimeSeries.from_times_and_values(X_cov_val.index, X_cov_val.values, columns=X_cov_val.columns)
---> 15 xgb_model.fit(
16 y_train_ts,
17 past_covariates=X_cov_train_ts,
18 val_series=y_val_ts,
19 val_past_covariates=X_cov_val_ts,
20 sample_weight=w_train_ts,
21 val_sample_weight=w_val_ts,
22 )
(... lines omitted for brevity ...)
File ~/miniforge3/envs/testdev/lib/python3.9/site-packages/xgboost/sklearn.py:548, in _wrap_evaluation_matrices(missing, X, y, group, qid, sample_weight, base_margin, feature_weights, eval_set, sample_weight_eval_set, base_margin_eval_set, eval_group, eval_qid, create_dmatrix, enable_categorical, feature_types)
545 return meta
547 if eval_set is not None:
--> 548 sample_weight_eval_set = validate_or_none(
549 sample_weight_eval_set, "sample_weight_eval_set"
550 )
551 base_margin_eval_set = validate_or_none(
552 base_margin_eval_set, "base_margin_eval_set"
553 )
554 eval_group = validate_or_none(eval_group, "eval_group")
File ~/miniforge3/envs/testdev/lib/python3.9/site-packages/xgboost/sklearn.py:541, in _wrap_evaluation_matrices.<locals>.validate_or_none(meta, name)
539 return [None] * n_validation
540 if len(meta) != n_validation:
--> 541 raise ValueError(
542 f"{name}'s length does not equal `eval_set`'s length, "
543 + f"expecting {n_validation}, got {len(meta)}"
544 )
545 return meta
ValueError: sample_weight_eval_set's length does not equal `eval_set`'s length, expecting 1, got 175310
```
**Additional context**
If I pass `None` as the validation datasets, there is no problem running `fit`.
```
xgb_model.fit(
y_train_ts,
past_covariates=X_cov_train_ts,
val_series=None,
val_past_covariates=None,
sample_weight=w_train_ts,
val_sample_weight=None
)
```
If I only set `val_sample_weight` to `None`, there is no problem running `fit`. So, the problem seems to happen only when I set `val_sample_weight` to a TimeSeries.
```
xgb_model.fit(
y_train_ts,
past_covariates=X_cov_train_ts,
val_series=y_val_ts,
val_past_covariates=X_cov_val_ts,
sample_weight=w_train_ts,
val_sample_weight=None,
)
```
In addition, the same issue does not occur when using `LightGBMModel` in a similar setup, suggesting a difference in how `val_sample_weight` is handled for `XGBModel` versus `LightGBMModel`.
**Am I missing something in the code? Any help to resolve this issue would be greatly appreciated.** | 0easy
|
Title: Move n_features from supported conf to internal
Body: `N_FEATURES` is now in `supported.py`, but I think it should not be surfaced to users and instead just used internally (it is much easier to pass a test dataset rather than having specific configurations for each algorithm). | 0easy
|
Title: Add Quart framework adapter
Body: want to support the asyncio loop in quart apps | 0easy
|
Title: [DOC] API Policy
Body: I would like to put the following API policy in place in the docs:
> pyjanitor only extends or aliases the pandas API (and other dataframe APIs), but will never fix or replace them.
> Undesirable pandas behaviour should be reported upstream in the pandas repository.
> If at some point the pandas devs decide to take something from pyjanitor and internalize it as part of the official pandas API, then we will deprecate it from pyjanitor, while acknowledging the original contributors' contribution as part of the official deprecation record.
Taken from [here](https://github.com/ericmjl/pyjanitor/issues/612#issuecomment-557250963) in issue #612. | 0easy
|
Title: Change paths passed to listener v3 methods to `pathlib.Path` instances
Body: Listeners have methods like `output_file` and `log_file` that are called when result files are ready. At the moment they get the path to the file as a string, but I believe with listener v3 methods we should use [pathlib.Path](https://docs.python.org/3/library/pathlib.html) instead. `Path` instances are more convenient to use and we are now enhancing v3 listeners also otherwise.
This change is backwards incompatible, but in most cases `str` and `Path` work the same way, so it's unlikely that the change causes big problems. For example, our acceptance tests passed after I changed the type locally and I needed to add special checks to make sure the received path is actually `Path`.
Although the change is pretty safe, with listener v2 it's anyway safer to keep using `str`. We are also otherwise just using base types with them. | 0easy
|
Title: invoke setup may break due to psycopg2
Body: | 0easy
|
Title: [new]: `send_teams_message(message, webhook_url)`
Body: ### Check the idea has not already been suggested
- [X] I could not find my idea in [existing issues](https://github.com/unytics/bigfunctions/issues?q=is%3Aissue+is%3Aopen+label%3Anew-bigfunction)
### Edit the title above with self-explanatory function name and argument names
- [X] The function name and the argument names I entered in the title above seems self explanatory to me.
### BigFunction Description as it would appear in the documentation
Duplicate `send_slack_message.yaml`.
Change the doc and the assertion in the code.
Teams incoming webhook doc:
https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook?tabs=javascript
### Examples of (arguments, expected output) as they would appear in the documentation
- | 0easy
|
Title: [Dynamic post publishing issue] A synchronous request is called inside an async function
Body: https://github.com/Nemo2011/bilibili-api/blob/11c33b16003f3111684421fb9ad4cbea5833ef18/bilibili_api/utils/picture.py#L164C29-L164C37
A suggested optimization for reference:
```python
def __set_picture_meta_from_bytes(self: Picture, imgtype: str):
img = Image.open(io.BytesIO(self.content))
self.size = img.size
self.height = img.height
self.width = img.width
self.imageType = img.format.lower() if img.format else imgtype
async def upload_file(self: Picture, credential: Credential) -> "Picture":
"""
Upload the image to bilibili.
Args:
credential (Credential): Credential class.
Returns:
Picture: `self`
"""
from bilibili_api.dynamic import upload_image
res = await upload_image(self, credential)
self.height = res["image_height"]
self.width = res["image_width"]
self.url = res["image_url"]
async with httpx.AsyncClient() as client:
resp = await client.get(
self.url,
headers=HEADERS,
cookies=credential.get_cookies(),
)
self.content = resp.read()
return self
``` | 0easy
|
Title: [new] `recover_table(fully_qualified_table, timestamp)`
Body: Undelete a table
https://stackoverflow.com/questions/27537720/how-can-i-undelete-a-bigquery-table | 0easy
|
Title: Feature: AsyncAPI HTTP support
Body: Using the **AsgiFastStream** object we can register HTTP routes that our application serves.
We should render such routes in the **AsyncAPI** specification as well. This feature should certainly be togglable in the basic `include_in_schema=True/False` way, but I am not sure about the default value
```python
@get(include_in_schema=False)
async def liveness_ping(scope):
return AsgiResponse(b"", status_code=200)
app = FastStream(broker).as_asgi(
asgi_routes=[
("/liveness", liveness_ping),
("/readiness", make_ping_asgi(broker, timeout=5.0, include_in_schema=False)),
],
)
```
AsyncAPI HTTP bindings - https://github.com/asyncapi/bindings/blob/master/http/README.md | 0easy
|
Title: Add an API for publishing articles (专栏)
Body: ### Discussed in https://github.com/Nemo2011/bilibili-api/discussions/741
<sup>Originally posted by **adk23333** April 12, 2024</sup>
[Merged issue](https://github.com/Nemo2011/bilibili-api/issues/662)
Is it just that there hasn't been time to add this yet, or is there a pitfall here? If there are no pitfalls, I'll give it a try. | 0easy
|
Title: Refactoring: xontribs
Body: This is a meta-issue for the list of refactoring tasks.
Medium priority:
* I noticed that reading the xontribs list for autoloaders takes a bit of time. It will be cool to avoid this completely e.g. when there is no xontrib module in site-packages (https://github.com/xonsh/xonsh/pull/5586).
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Error handling in contract parsing
Body: We found that there is no proper handling for unmatched regexes in `scrapy.contracts.ContractsManager.extract_contracts()`, so e.g. calling `from_method()` on a method with `@ foo` in the docstring produces an unhandled exception. I think we should just skip lines that don't match.
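A simplified sketch of the proposed skip behavior (the regex and structure are illustrative, not Scrapy's actual code):
```python
import re

CONTRACT_RE = re.compile(r"@(\w+)\s*(.*)")

def iter_contract_lines(docstring):
    for line in docstring.split("\n"):
        line = line.strip()
        if line.startswith("@"):
            m = CONTRACT_RE.match(line)
            if m is None:
                continue  # skip malformed lines such as '@ foo'
            yield m.group(1), m.group(2)
``` | 0easy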
|
Title: Refactor: pkg_resources will be deprecated, instead use importlib.resources
Body: There are currently, many instances where we use `pkg_resources`, but it will be deprecated. Instead, let's use `importlib` as suggested.
Remove it also from `requirements.txt`.
This exists in many labeler files and tests. Replace all usages.
Example from `dataprofiler/labelers/base_data_labeler.py`
```python
# remove this
import pkg_resources
# instead use this (note that this is built-in)
import importlib
```
```python
# replace this
default_labeler_dir = pkg_resources.resource_filename("resources", "labelers")
# with this
default_labeler_dir = importlib.resources.files("resources").joinpath("labelers")
``` | 0easy
|
Title: Feature: add `broker.ping()` method
Body: We should add a mechanism to check the liveness of the real broker connection as a step of the #1181 implementation
First of all, we should add abstract method to [`BrokerUsecase` class](https://github.com/airtai/faststream/blob/main/faststream/broker/core/usecase.py#L48)
Then we should add this method impl to **KafkaBroker**/**RabbitBroker**/etc (using `self._connection` or `self._producer` object)
Also, we should add tests for that functionality somewhere here: https://github.com/airtai/faststream/blob/main/tests/brokers/base/connection.py
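A rough sketch of the abstract method (the signature and semantics below are assumptions to be refined):
```python
from abc import abstractmethod

class BrokerUsecase:  # simplified stand-in for the real class
    @abstractmethod
    async def ping(self, timeout: float | None = None) -> bool:
        """Return True if the broker answers within `timeout` seconds."""
        ...
```
Each concrete broker would then implement it via its own `self._connection` or `self._producer` object. | 0easy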
|
Title: Loudly deprecate old Python 2/3 compatibility layer and other deprecated utils
Body: We "quietly" deprecated the Python 2/3 compatibility layer in RF 5.0 (#4150) and `TRUE/FALSE_STRINGS` in RF 6.0 (#4500). We should deprecate them more loudly in RF 7.0. As discussed in #4150, deprecating constants is easiest with a module `__getattr__`.
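A minimal sketch of the module `__getattr__` approach (PEP 562); the constant's value here is an assumption for illustration:
```python
import warnings

_TRUE_STRINGS = frozenset(('TRUE', 'YES', 'ON', '1'))  # assumed value

def __getattr__(name):
    # Called only when `name` is not found in the module normally (PEP 562).
    if name == 'TRUE_STRINGS':
        warnings.warn("'TRUE_STRINGS' is deprecated.", DeprecationWarning, stacklevel=2)
        return _TRUE_STRINGS
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
``` | 0easy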
|
Title: `CountFrequencyEncoder` could output zeros for new categories
Body: The `CountFrequencyEncoder` has an `errors` argument which can either raise an error or output NaNs when encountering new categories. For this particular class, it'd make sense (perhaps even as a default behavior) to output zeros when a new category is encountered instead of generating NaNs.
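An illustrative one-liner of the requested behavior (not feature_engine's actual implementation):
```python
import pandas as pd

train = pd.DataFrame({"city": ["london", "london", "paris"]})
test = pd.DataFrame({"city": ["paris", "rome"]})  # "rome" is unseen

counts = train["city"].value_counts()          # learned during fit
encoded = test["city"].map(counts).fillna(0)   # unseen category -> 0, not NaN
print(encoded.tolist())                        # [1.0, 0.0]
``` | 0easy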
|
Title: Enhance `Revealer` Clickable Area for Collapsing
Body: ### Description
Currently, the `Revealer` component is only collapsible when clicking on the first row, where the revealer sign is located. To improve the user experience, it would be beneficial to make the entire `Revealer` box clickable for collapsing.
### Current Behavior
The `Revealer` component collapses/expands only when clicking on the first row (where the revealer sign is).
### Desired Behavior
The `Revealer` component should be collapsible/expandable when clicking anywhere within the `Revealer` box, not just the "first row".
### Benefits
- Enhanced usability and user experience.
- More intuitive interaction for users.
### Image:
<img width="719" alt="image" src="https://github.com/rio-labs/rio/assets/41641225/e36409f3-4407-4182-9be3-9610d4eb5331">
| 0easy
|
Title: [Bug] fix gemma-2-2b-it-FP8 accuracy
Body: ### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 5. Please use English, otherwise it will be closed.
### Describe the bug
```
neuralmagic/gemma-2-2b-it-FP8 | 0.512 | 0.6
```
https://github.com/sgl-project/sglang/actions/runs/13800885290
### Reproduction
N/A
### Environment
N/A | 0easy
|
Title: [ENH] support .query()-style strings in df.update_where()
Body: Given the `update_where` example:
```python
# The dataframe must be assigned to a variable first.
data = {
"a": [1, 2, 3, 4],
"b": [5, 6, 7, 8],
"c": [0, 0, 0, 0]
}
df = pd.DataFrame(data)
df = (
df
.update_where(
condition=(df['a'] > 2) & (df['b'] < 8),
target_column_name='c',
target_val=10)
)
```
It would be nice to be able to use `.query`'s style instead to avoid having to use `df.pipe()` if `df` has been modified somewhere along your method chain block:
```python
df = (
pd.DataFrame(data)
.update_where(
condition='a > 2 and b < 8',
target_column_name='c',
target_val=10)
)
```
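A hedged sketch of how string conditions could be supported internally, via `DataFrame.eval` (an assumption, not pyjanitor's actual code):
```python
import pandas as pd

def update_where(df, condition, target_column_name, target_val):
    # Accept either a boolean mask or a .query()-style string.
    mask = df.eval(condition) if isinstance(condition, str) else condition
    df = df.copy()
    df.loc[mask, target_column_name] = target_val
    return df

df = pd.DataFrame({"a": [1, 2, 3, 4], "b": [5, 6, 7, 8], "c": [0, 0, 0, 0]})
print(update_where(df, "a > 2 and b < 8", "c", 10))
``` | 0easy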
|
Title: Create development guide
Body:
## Description
Currently, there is no documentation on how to set up the environment for developing and working on the library. This should be fixed.
### System settings
- Operating System: NA
- Terminal in use: NA
- Python version: NA
- Halo version: NA
- `pip freeze` output: NA
### Error
NA
### Expected behaviour
NA
## Steps to recreate
NA
## People to notify
NA
| 0easy
|
Title: Decide and document XDG media type
Body: I've created an issue in the XDG mimetypes repo about adding a media type for Robot Framework:
https://gitlab.freedesktop.org/xdg/shared-mime-info/-/issues/198
I think having the media type `text/robotframework` would be OK. This means any text editor would know it's a text file, and those which have syntax highlighting etc. support for Robot files can enable that as well.
| 0easy
|
Title: [FEA] Radial and linear hierarchical layouts
Body: From discussions with @sevickson :
The radial and linear layout modes are helpful for API usage, so we should make it easy for pygraphistry/api users to make them!
This comes down to a few things, and can be done in stages:
### Components
*PyGraphistry layout*: mapping a `graph` -> linear/radial `coordinates` + `axis` object
Input:
* g: nodes/edges df with src/dst/node bindings
* `orientation :: linear | radial
* `?sort_key :: str`
* `axis :: [ { label : str, ?key_val: any, ?sort: [ { ?key: str, ascending: bool } ], color : str | int, ?thickness: int, ?width : int, stroke : solid | dotted | ..., ?spacing : int } ]`
Output:
* g.nodes set with x/y
* axis objects list
*PyGraphistry hooks for specifying the layout*
* Graph: `g.layout(g.layout_radial(...))`, where `g.layout_radial` internally is just `g * ... -> g`
* Hypergraph: ???
* `.plot()`: Uses complexEncodings API to pass along the axis?
### Rollout
* Phase 1: Run the layout locally via PyGraphistry <-- community-contributable for the layout alg part, and we should carry the water for plugging into pygraphistry and the complex encodings api
* Phase 2: Support for all API users and on bigger scale via mirroring on the backend
* Phase 3: ... with post-load in-UI editing
| 0easy
|
Title: docs: fullwidth.html doesn't exist?
Body: ### Discussed in https://github.com/django-cms/django-cms/discussions/8003
<sup>Originally posted by **jfmatth** September 19, 2024</sup>
Hi, trying to get django-cms working. The tutorial mentions that fullwidth.html is the first of my templates in settings.py, but that file is nowhere to be found. Do I make it?
The install for non-docker is very minimal and does not match the tutorial.
Again, unless I'm not getting it? | 0easy
|
Title: Handle start index with `FOR IN ENUMERATE` loops already in parser
Body: We added support for the optional start index in RF 4.0 (#3781). It was implemented so that only the runner object detected that the last iterated value has `start=` prefix and the parser doesn't know anything about it. When similar configuration was added to `WHILE` (`limit`) and `EXCEPT` (`type`), they were already recognized by the parser making the behavior with `start` inconsistent. This should be changed and `start` with `FOR IN ENUMERATE` loops handled by the parser as well.
Handling `start` in the parser means that the `For` model object in the `TestSuite` structure needs `start` attribute as well. The same `For` is used with all `FOR` loop types and it getting an attribute that only affects one of them is a bit questionable, but I don't consider it too big a problem. We could consider adding separate `ForInEnumerate`, `ForInRange` and `ForInZip` model object, but I consider that too much work compared to benefits at least right now. We also need to add `start` as an attribute to `output.xml`. Changing it always has backwards incompatibility concerns, but tools processing XML files shouldn't be bothered by new attributes so I consider this safe.
One reason to do this is that we plan to make `FOR IN ZIP` configurable as well. It's configuration options should definitely be handled by the parser and in that case `FOR IN ENUMERATE` behaving differently would be really weird. | 0easy
|
Title: New Libdoc UI translations
Body: Libdoc got a support for localizing the UI in HTML outputs in RF 7.2 (#3676). Let's use this issue to track new translations in RF 7.3.
- [ ] Italian (PR #5342) | 0easy
|
Title: Time to First Response metric API
Body: The canonical definition is here: https://chaoss.community/?p=3448 | 0easy
|
Title: Some functions not showing on the API documentation page
Body: Noticed that some functions are missing from the API documentation page. The `unionize_dataframe_categories` and `move` functions are not listed on the API docs page. Not sure which other ones are missing, and not sure how this can be fixed; if anyone can point me to how to fix it (if I can), that would be great. | 0easy
|
Title: [BUG] django-cms and django-modeltranslation 0.19.3+
Body: ## Description
Using these two together causes `allow_children` and `child_classes` to be ignored, and `disable_child_plugins` is always set in the structure editor. And maybe more.
## Steps to reproduce
Install django-cms 3.11.X and django-modeltranslation 0.19.3+. Try the structure editor: plugins can be placed within others - only plugins that have `child_classes = ["APlugin"]` work correctly. Plugins with a parent are all locked and cannot be moved outside the parent.
## Expected behaviour
As before, structure limits respected, no locked plugins.
## Actual behaviour
I tracked the issue to this template: https://github.com/django-cms/django-cms/blob/3.11.6/cms/templates/cms/toolbar/dragitem.html#L3
Using `{{ plugin.get_plugin_class.allow_children|pprint }}` shows `attribute error: XYPlugin has no attribute admin_site`. Not using pprint outputs the model name, i.e. always `True`. YAY. After looking around for which admin could be adding some kind of getter or whatever, I found modeltranslation - with 0.19.2 everything is OK. 0.19.3 adds some typing stuff that I don't really understand, and monkeypatches Django's ModelAdmin - as far as I can tell.
Here it is, probably/maybe around the monkeypatch?!:
https://github.com/deschler/django-modeltranslation/compare/v0.19.2...v0.19.3#diff-70244e2051407abc833c4b5691c54704040cb171edc9a46d71260332fe87ec4bR26
I don't know admin internals that well, but it seems that somewhere an admin_site is required? But where and how, no clue yet.
## Additional information (CMS/Python/Django versions)
Python 3.11, django-cms 3.11.X and django-modeltranslation 0.19.3+ to reproduce.
## Do you want to help fix this issue?
* [x] Yes, I want to help fix this issue and I will join the channel #pr-reviews on [the Discord Server](https://discord-pr-review-channel.django-cms.org) to confirm with the community that a PR is welcome.
* [ ] No, I only want to report the issue.
| 0easy
|
Title: Django40 deprecation warnings
Body: **Describe the bug**
When upgrading to ue Django 3.1.2, the following warnings are emitted:
```
/usr/local/lib/python3.8/dist-packages/import_export/admin.py:368: RemovedInDjango40Warning: django.conf.urls.url() is deprecated in favor of django.urls.re_path().
url(r'^export/$',
/usr/local/lib/python3.8/dist-packages/import_export/signals.py:3: RemovedInDjango40Warning: The providing_args argument is deprecated. As it is purely documentational, it has no replacement. If you rely on this argument as documentation, you can move the text to a code comment or docstring.
post_export = Signal(providing_args=["model"])
```
**To Reproduce**
Steps to reproduce the behavior:
1. Upgrade Django to 3.1.2.
2. Run some tests which invoke Django views:
```
py.test -slv test_with_django_view.py
```
3. Scroll down to 'warnings summary'
4. See messages
**Versions**
- Django Import Export: 2.3.0
- Python 3.8
- Django 3.1.2
**Expected behavior**
No deprecation warning should be emitted.
| 0easy
|
Title: [Feature]: Support openai responses API interface
Body: ### ๐ The feature, motivation and pitch
OpenAI has released a new Responses API, but vLLM does not currently support it.
We request that vLLM adds compatibility with this API to stay in sync with OpenAI's updates.
Supporting the Responses API will enhance vLLM's utility and competitiveness.
For more details, see the OpenAI documentation
* OpenAI Responses api reference
https://platform.openai.com/docs/api-reference/responses
* OpenAI Responses vs. Chat Completions
https://platform.openai.com/docs/guides/responses-vs-chat-completions
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 0easy
|
Title: "cmd" disabled or not supported
Body: ### Describe the bug

### Reproduce
The language model is: bagelmisterytour
not really relevant
### Expected behavior
We should make it use powershell when cmd is the input from the model
### Screenshots
_No response_
### Open Interpreter version
0.2.0
### Python version
n/a
### Operating System name and version
Windows 11
### Additional context
_No response_ | 0easy
|
Title: Histogram error with `>u2` dtype
Body: The histogram function when plotting does not work with certain dtypes. Example of this:
```python
import dask.array as da
import hyperspy.api as hs
data = da.arange(0, 10000, dtype='>u2').reshape((100, 100))
s = hs.signals.Signal2D(data).as_lazy()
s.plot()
# Press H
```
Gives the error message:
```python
File "hyperspy/hyperspy/signal_tools.py", line 964, in plot_histogram
self.hist_data = self._get_histogram(data)
File "hyperspy/hyperspy/signal_tools.py", line 922, in _get_histogram
return numba_histogram(data, bins=self.bins,
File "hyperspy/hyperspy/misc/array_tools.py", line 396:
def numba_histogram(data, bins, ranges):
<source elided>
# Adapted from https://iscinumpy.gitlab.io/post/histogram-speeds-in-python/
hist = np.zeros((bins,), dtype=np.intp)
^
This error may have been caused by the following argument(s):
- argument 0: Unsupported array dtype: >u2
```
The `>u2` dtype sometimes comes from reading raw binary files from memory via `numpy.memmap`; the dtype is set directly when loading the file. While it is possible to change the dtype manually to something else, like `np.uint16`, it would be nice to avoid this if possible.
The error seems to stem from the `numba_histogram` function utilizing numba, which does not support these kinds of dtypes (like `>u2`).
I'm not sure how best to fix this. One way could be to check if the data has such a dtype and, if it does, convert it to `np.uint16`.
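A hedged sketch of such a guard (illustrative; `isnative` generalizes the check beyond `>u2`):
```python
import numpy as np

def ensure_native_byteorder(data):
    # Numba rejects non-native-endian dtypes such as '>u2';
    # convert them to the machine's native byte order first.
    if not data.dtype.isnative:
        data = data.astype(data.dtype.newbyteorder('='))
    return data

arr = np.arange(0, 100, dtype='>u2')
print(ensure_native_byteorder(arr).dtype)  # uint16 on little-endian machines
``` | 0easy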
|
Title: [BUG] black and flake disagree
Body: This is cool with `black` but not with `flake8`
```python
def test_throw_valuerror_given_nonsense():
X = np.ones((10, 2))
y = np.ones(10)
with pytest.raises(ValueError):
x_transform = IntervalEncoder(n_chunks=0).fit(X, y)
with pytest.raises(ValueError):
x_transform = IntervalEncoder(n_chunks=-1).fit(X, y)
with pytest.raises(ValueError):
x_transform = IntervalEncoder(span=-0.1).fit(X, y)
with pytest.raises(ValueError):
x_transform = IntervalEncoder(span=2.0).fit(X, y)
with pytest.raises(ValueError):
x_transform = IntervalEncoder(method="dinosaurhead").fit(X, y)
```
What flake8 wants;
```python
def test_throw_valuerror_given_nonsense():
X = np.ones((10, 2))
y = np.ones(10)
with pytest.raises(ValueError):
IntervalEncoder(n_chunks=0).fit(X, y)
with pytest.raises(ValueError):
IntervalEncoder(n_chunks=-1).fit(X, y)
with pytest.raises(ValueError):
IntervalEncoder(span=-0.1).fit(X, y)
with pytest.raises(ValueError):
IntervalEncoder(span=2.0).fit(X, y)
with pytest.raises(ValueError):
IntervalEncoder(method="dinosaurhead").fit(X, y)
```
Can I put flake8 back into pre-commit? | 0easy
|
Title: payjs WeChat QR code invocation fails
Body: (screenshot of the error)
Already confirmed that mchid and payjs_key are correct.
It worked fine on 1.6; after upgrading to 1.83 the above error appears.
URL: https://store.dig77.com
| 0easy
|
Title: Ability to dynamically update data in heatmaps
Body: Currently, the style of a heatmap can be updated dynamically after the map has been rendered, but not the underlying data. The abilty to do this would be useful.
This was originally raised in issue #186. | 0easy
|
Title: Items are not converted when using generics like `list[int]` and passing object, not string
Body: We added support for converting items with generics like `list[int]` and `dict[str, int]` in RF 6.0 (#4433). There unfortunately is a bug: it only works when passing values as string literals, not as actual objects. For example, using this example from our Slack:
```python
def sum_values(values_dict: dict[str, int]) -> int:
values_sum: int = 0
for _, value in values_dict.items():
values_sum += value
return values_sum
```
like:
```
&{dict} = Create Dictionary spam 11 eggs 22
${sum} = Sum Values ${dict}
```
fails with:
> TypeError: unsupported operand type(s) for +=: 'int' and 'str'
Using the example with a dictionary literal works fine:
```
${sum} = Sum Values {'spam': '11', 'eggs': '22'}
``` | 0easy
|
Title: [QOL] Allow delay between requests in case of API throttling
Body: ## Context
Some APIs in the open have quotas on how many requests can be sent in a specific amount of time. Since GraphQLer doesn't have a sense of how many requests per time interval this is set at, GraphQLer should allow the user to set a custom time interval to delay requests such that an API that GraphQLer is used against does not throttle.
## Deliverables
Add a configurable delay to requests such that requests aren't hammering an API too quickly | 0easy
|
Title: modify _find_or_check_categorical_variables to support dtype='category'
Body: **Is your feature request related to a problem? Please describe.**
Currently, _find_or_check_categorical_variables in variable_manipulation.py only checks for type object to determine categorical variables.
**Describe the solution you'd like**
The method should be enhanced to support dtype="category" to support cases where user has used category datatype to optimise memory footprint of Pandas dataframe.
(https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html)
| 0easy
|
Title: Docs out of sync for `qt_api`
Body: Hello! Had a test fail with 4.0.0 like:
```
@pytest.fixture
def spawn_terminal(qtbot, blank_pilot_db):
prefs.clear()
prefs.set('AGENT', 'TERMINAL')
pilot_db_fn = blank_pilot_db
prefs.set('PILOT_DB', pilot_db_fn)
> app = qt_api.QApplication.instance()
E AttributeError: '_QtApi' object has no attribute 'QApplication'
```
after looking around closed issues it looks like using `qt_api` directly has largely been deprecated: https://github.com/pytest-dev/pytest-qt/pull/358 and https://github.com/pytest-dev/pytest-qt/issues/365 so I think I'll be able to fix it, but the docs still say to use `qt_api` to test `QApplication` objects this way:
https://pytest-qt.readthedocs.io/en/latest/qapplication.html?highlight=qt_api#testing-custom-qapplications
```
from pytestqt.qt_compat import qt_api
class CustomQApplication(qt_api.QtWidgets.QApplication):
def __init__(self, *argv):
super().__init__(*argv)
self.custom_attr = "xxx"
def custom_function(self):
pass
```
I realize the reference I'm using, `qt_api.QApplication`, is different from `qt_api.QtWidgets.QApplication`, but if I'm reading right I should just use `QApplication` itself? Or is that a special case vs. the recommendation to do so with `QtWidgets`, e.g. here: https://github.com/pytest-dev/pytest-qt/issues/365 ?
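For reference, a sketch of the form that appears to work on pytest-qt >= 4 (assuming the namespaced API is the intended one):
```python
# on pytest-qt >= 4, Qt modules are namespaced under qt_api
from pytestqt.qt_compat import qt_api

app = qt_api.QtWidgets.QApplication.instance()
```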
seems like a reasonably quick fix to the docs, just lettin ya know :) | 0easy
|
Title: Allow rendering deterministic sites
Body: Currently, we skip deterministic sites while rendering the model. It would be nice to support plotting deterministic nodes (e.g. to make the rendered model clearer when using reparam handlers). This is especially useful for hierarchical models like the one in #1295 | 0easy
|
Title: Add explicit argument converter for `Any` that does no conversion
Body: In Python's type system [Any](https://docs.python.org/3/library/typing.html#typing.Any) is used to indicate that any type is accepted. We should add an explicit argument converter for `Any` that recognizes it but doesn't do any conversion. Because we don't do any conversion for unrecognized types, and `Any` currently isn't recognized, this change wouldn't change much. It nevertheless has some benefits:
- `Any` will get its own type documentation pop-up in Libdoc HTML output explaining that all arguments are accepted without conversion.
- It becomes explicit that there's no conversion with `Any`.
- Behavior with `Any` won't change even if we later decide to handle unrecognized types differently than nowadays. See #4628 for related discussion.
- It would have avoided the issues reported in #4626. Those issues are now fixed, but adding an `Any` converter would help remove the somewhat annoying workaround code that was added.
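A hedged sketch of what such a converter could look like (this is illustrative only, not Robot Framework's actual internal API):
```python
from typing import Any


class AnyConverter:
    """Recognizes ``Any`` but accepts all values without conversion."""

    handles = (Any,)  # hypothetical registration attribute

    def convert(self, value):
        return value  # no conversion: any value is accepted as-is
```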
| 0easy
|
Title: Add a "don't ask me anymore" option to the interactive mode
Body: this is where we ask for input in vulnerabilities.py | 0easy
|
Title: Remove the st2resultstracker service from st2ctl and development env
Body: Mistral was deprecated and removed from st2 in the `v3.3.0` release. However, there is a `st2resultstracker` service which is used for polling results from async runners. Mistral was the only async runner, which means resultstracker is obsolete and can be disabled.
The team agreed that we should keep the service implementation in the st2 codebase in case we need it in the future, but remove the service from the associated packaging/startup logic, st2ctl, and the development env.
This should be easy & fun task, if anyone from @StackStorm/contributors wants to help removing that. | 0easy
|
Title: Feature request: Windows script for running code generated by GPT-Engineer
Body: ## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
- Consider adding the label "good first issue" for interesting, but easy features.
## Feature description
The gpt-engineer project currently only has a script for running on Linux systems. To enhance accessibility and convenience, it would be beneficial to provide a Windows-compatible script (using either batch (.bat) or PowerShell (.ps1) format).
Ideally, when using GPT-Engineer to generate new projects, this enhanced setup should include one of the following options:
- Auto-detection: The script can automatically detect the operating system (Windows or Linux) and execute the appropriate setup instructions.
- Separate scripts: Provide clearly named scripts for both Windows and Linux, allowing users to select the one they need.
**What we have now is a script for the Linux shell only.**

## Motivation/Application
- Cross-Platform Usability: Windows users would be able to easily utilize GPT-Engineer to generate and manage projects.
- Project Setup Streamlining: New projects generated by GPT-Engineer could be directly initiated with the right setup across both Windows and Linux environments without manual adjustments.
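A hedged sketch of the auto-detection option, using only the Python standard library (the script names are assumptions):
```python
import platform
import subprocess


def run_generated_project() -> None:
    # pick the right setup script for the current operating system
    if platform.system() == "Windows":
        subprocess.run(["powershell", "-File", "run.ps1"], check=True)
    else:
        subprocess.run(["bash", "run.sh"], check=True)
```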
| 0easy
|
Title: Docs: being more explicit about why to use api.background.task
Body: Hey! When I was reading the docs and I saw `api.background.task`, the first thing that popped into my head was "but you can just use `asyncio.create_task` or `loop.run_in_executor` for that without inventing something new!". But then I noticed that it also passes the context vars to the synchronous background stuff, which is nice. Wonder why that's not the case in the stdlib, cause it would be nice :)
All I'm asking is: would you welcome a docs PR explaining `api.background.task` to asyncio purists like myself? :)
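For context, a short usage sketch based on the docs (details may differ between versions):
```python
import responder

api = responder.API()


@api.background.task
def send_confirmation(address):
    ...  # runs on a thread pool; per the issue, request context vars are preserved


@api.route("/signup")
async def signup(req, resp):
    send_confirmation("user@example.com")  # schedules the task and returns immediately
    resp.media = {"status": "queued"}
```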
| 0easy
|
Title: Deprecate `falcon.testing.httpnow()`
Body: Apparently this ancient alias has survived unnoticed from the very early iterations of the framework.
Wrap it in the [`@deprecated(...)`](https://falcon.readthedocs.io/en/stable/api/util.html#falcon.util.deprecated) decorator so that we don't miss it again.
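A minimal sketch of the change, assuming `httpnow` simply aliases `falcon.util.http_now`:
```python
from falcon.util import deprecated, http_now


@deprecated('httpnow() is deprecated and will be removed in Falcon 5.0; '
            'use falcon.util.http_now() instead.')
def httpnow():
    return http_now()
```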
Write that it will be removed in Falcon 5.0 in the deprecation message. | 0easy
|
Title: [Feature] support Qwen 3 and Qwen 3 MoE
Body: ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
ref https://github.com/InternLM/lmdeploy/pull/3305
### Related resources
_No response_ | 0easy
|
Title: Add estimated train time in Optuna mode
Body: In `Optuna` mode the train time is:
- `Optuna` tuning for each algorithm = `len(algorithms)*otpuna_time_budget`,
- ML model training = `total_train_time` | 0easy
|
Title: In docker/*.Dockerfile remove "<2" from the django dep
Body: The qualifier is no longer necessary because we are only testing recent Python 3.x versions. | 0easy
|
Title: ASGI websocket must pass thru bytes as is
Body:
_Originally posted by @Tronic in https://github.com/sanic-org/sanic/pull/2640#discussion_r1058027028_
| 0easy
|
Title: Unintended sharing of `MultipartParseOptions._DEFAULT_HANDLERS` between instances
Body: I think we should initialize `media_handlers` in `MultipartParseOptions.__init__()` with a copy of `MultipartParseOptions._DEFAULT_HANDLERS`, not just assign a reference.
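A minimal sketch of the proposed change (a shallow copy should suffice here):
```python
# inside MultipartParseOptions.__init__()
self.media_handlers = dict(MultipartParseOptions._DEFAULT_HANDLERS)
```
The unintended sharing can be seen here: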
```python
>>> from falcon.media.multipart import MultipartParseOptions
>>> mo1 = MultipartParseOptions()
>>> mo2 = MultipartParseOptions()
>>> sorted(mo1.media_handlers)
['application/json', 'application/x-www-form-urlencoded']
>>> mo1.media_handlers.pop('application/x-www-form-urlencoded')
<falcon.media.urlencoded.URLEncodedFormHandler object at 0x7dce2c618f50>
>>> mo1.media_handlers
{'application/json': <falcon.media.json.JSONHandler object at 0x7dce2c7fa990>}
>>> mo2.media_handlers
{'application/json': <falcon.media.json.JSONHandler object at 0x7dce2c7fa990>}
>>> MultipartParseOptions._DEFAULT_HANDLERS
{'application/json': <falcon.media.json.JSONHandler object at 0x7dce2c7fa990>}
``` | 0easy
|
Title: Self-attentive Sentence Embedding Tutorial Undeclared Dependency (sklearn)
Body: ## Description
The self-attentive sentence embedding tutorial uses sklearn for accuracy and F1. As we didn't take sklearn as a dependency, we should migrate that usage to the accuracy/F1 metrics in MXNet and drop the dependency in Docker. | 0easy
|
Title: Finish the TODO list of classes/functions to type.
Body: We have a [big todo list of classes/functions to type](https://github.com/keras-team/autokeras/blob/dda2d9de0c602a1f47b45bb26783eacfd9572815/tests/autokeras/typed_api_test.py#L11) (add type hints, see [PEP484](https://www.python.org/dev/peps/pep-0484/) for more infos about type hints) and the docs: https://docs.python.org/3/library/typing.html
Functions of the list should be typed and for classes, only the contructor (`__init__()`) should be typed. No need to specify the return type of a `__init__()` because it's always `None`.
When you add types for a function/class, you can remove it from the TODO list in the same pull request.
Please make multiple small pull requests. Not a single big one :)
If you don't really understand what is asked:
We did the same procedure in tensorflow/addons here: https://github.com/tensorflow/addons/issues/989 See this issue and look into the pull requests made.
You can always ask if you're wondering what to do. It's fairly easy and a good way to start contributing to autokeras!
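A hedged example of the kind of change requested (the class and its parameters are hypothetical):
```python
from typing import Optional


class ExampleBlock:
    # note: no return type annotation on __init__, per the convention above
    def __init__(
        self,
        num_classes: Optional[int] = None,
        multi_label: bool = False,
        name: Optional[str] = None,
    ):
        self.num_classes = num_classes
        self.multi_label = multi_label
        self.name = name
```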
| 0easy
|
Title: [tech debt] Merge RGBShift and GaussianNoise
Body: Both do the same thing; the only difference is the distribution from which the noise is sampled.
1. Merge two classes.
2. Add Deprecated warning to `RGBShift` | 0easy
|
Title: [Points] Enable copy-pasting of selected Points between Points layers
Body: ## ๐ Feature
Allow users to copy Points that are selected in a Points layer and then paste them into another Points layer.
## Motivation
Points are very useful for annotating locations of things, they can also be used as prompts for segmentation (seeded watershed, segment-anything, etc.)
It can be convenient to have a ground truth layer of locations and then be able use subsets of those for other purposes.
For example, the `micro-SAM` plugin has its own `point_prompts` layer that it uses for prompts for segmentation.
Copying points between layers can be easily done programmatically, e.g.:
`viewer.layers['Points 2'].add(viewer.layers['Points 1'].data[list(viewer.layers['Points 1'].selected_data)])`
But there is no GUI way of doing it.
## Pitch
When a user has a Points layer selected and they have Points selected within that layer, enable a `Copy` keybinding, temporarily saving the data of those points to some buffer.
When the user has a Points layer selected and this temporary buffer exists, enable a `Paste` keybinding to add the points from the buffer to the current layer.
In an ideal world this would be a contextually aware copy/paste Edit menu item (once that menu exists).
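A hedged sketch of what this could look like, implemented with layer keybindings and a module-level buffer (the keybindings and buffer handling are assumptions, not napari's planned design):
```python
from napari.layers import Points

_points_buffer = {"data": None}


@Points.bind_key("Control-Shift-C", overwrite=True)
def copy_selected_points(layer):
    selected = list(layer.selected_data)
    if selected:
        _points_buffer["data"] = layer.data[selected].copy()


@Points.bind_key("Control-Shift-V", overwrite=True)
def paste_points(layer):
    if _points_buffer["data"] is not None:
        layer.add(_points_buffer["data"])
```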
## Alternatives
Add an example of using the console to copy paste points.
## Additional context
This came up while working on a complex annotation and segmentation problem with a user.
Intuitively we tried copy-paste for the selected points and it didn't work!
This could also be extended to Shapes.
The buffer could store the last copied data *and the layer type* and then when you paste check if the layer type matches, if not abort and warn the user. | 0easy
|
Title: Please consider sponsoring this project ❤️
Body: I really enjoy working on Open Source projects, and seeing my contributions being picked up by others is immensely motivating. Yet, as I am juggling 🤹 personal and professional life my time is limited and I do have to pick my battles a bit. At times, I am unable to reach that extra mile, to bring projects further, or to attend to features useful not just to me and my projects, but to the community as a whole. If you would like to help out, consider sponsoring, as that definitely helps in setting the priorities straight... :heart:
https://github.com/sponsors/pennersr
| 0easy
|
Title: Inline completion strips out new line when it should be kept
Body: ## Description
New line is stripped out if model returns
~~~
```
~~~
prefix.
## Reproduce
1. Write comment like `# load json file`
2. See suggestion gets applied like: `# load json file import`
## Expected behavior
```python
# load json file
import
```
## Context
This is an issue in
https://github.com/jupyterlab/jupyter-ai/blob/fcb2d7111f35e363f76afd28aea581500aad89e9/packages/jupyter-ai-magics/jupyter_ai_magics/completion_utils.py#L25-L47
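A hedged sketch of the idea (the helper below is illustrative, not the actual `completion_utils` code): strip only the fence line itself so that the newline separating the comment from the code survives.
```python
def strip_leading_fence(suggestion: str) -> str:
    # drop only the first line (the ``` fence, with optional language tag),
    # keeping everything after it, including any newlines
    if suggestion.startswith("```"):
        fence_end = suggestion.find("\n")
        if fence_end != -1:
            return suggestion[fence_end + 1:]
    return suggestion
```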
| 0easy
|
Title: RequestOptions.auto_parse_qs_csv documentation is misleading for the True case
Body: [`uri.parse_query_string`](https://falcon.readthedocs.io/en/stable/api/util.html#falcon.uri.parse_query_string) supports splitting parameter values on comma, however, only the first occurrence is actually split:
```python
>>> uri.parse_query_string('interesting&stuff=1,2&key=value&stuff=3,4,5', csv=True)
{'stuff': ['1', '2', '3,4,5'], 'key': 'value'}
```
This is a known and documented limitation of `uri.parse_query_string`:
> The two different ways of specifying lists may not be mixed in a single query string for the same parameter.
However, documentation for [`RequestOptions.auto_parse_qs_csv`](https://falcon.readthedocs.io/en/stable/api/api.html#falcon.RequestOptions.auto_parse_qs_csv) mentions neither this limitation nor the fact that `uri.parse_query_string` is used for parsing. Moreover, it provides a misleading (although the example values are technically correct since only the first occurrence of `t` has commas) example:
> When `auto_parse_qs_csv` is set to ``True``, the query string value
> is also split on non-percent-encoded commas and these items
> are added to the final list (i.e. ``/?t=1,2,3&t=4``
> becomes ``['1', '2', '3', '4']``). | 0easy
|
Title: envbindir behavior changed in tox4
Body: ``` ini
[tox]
skipsdist = true
envlist = py{37,38}
[testenv]
deps = flake8
commands =
py37: {[testenv:py37-flake8]commands}
py38: {[testenv:py38-flake8]commands}
[testenv:py{37,38}-flake8]
commands = {envbindir}/flake8 --version
```
``` console
$ venv/bin/tox --version
3.28.0 imported from /tmp/tmp.5W879ZcLx9/venv/lib/python3.8/site-packages/tox/__init__.py
$ venv/bin/tox
py37 installed: flake8==5.0.4,importlib-metadata==4.2.0,mccabe==0.7.0,pycodestyle==2.9.1,pyflakes==2.5.0,typing_extensions==4.4.0,zipp==3.12.0
py37 run-test-pre: PYTHONHASHSEED='335891262'
py37 run-test: commands[0] | /tmp/tmp.5W879ZcLx9/.tox/py37/bin/flake8 --version
5.0.4 (mccabe: 0.7.0, pycodestyle: 2.9.1, pyflakes: 2.5.0) CPython 3.7.13 on
Linux
py38 installed: flake8==6.0.0,mccabe==0.7.0,pycodestyle==2.10.0,pyflakes==3.0.1
py38 run-test-pre: PYTHONHASHSEED='335891262'
py38 run-test: commands[0] | /tmp/tmp.5W879ZcLx9/.tox/py38/bin/flake8 --version
6.0.0 (mccabe: 0.7.0, pycodestyle: 2.10.0, pyflakes: 3.0.1)
CPython 3.8.10 on Linux
____________________________ summary _____________________________
py37: commands succeeded
py38: commands succeeded
congratulations :)
```
- Note how `{envbindir}` was substituted using the name of the env being _run_:
> py38 run-test: commands[0] | /tmp/tmp.5W879ZcLx9/.tox/py38/bin/flake8 --version
``` console
$ venv/bin/tox --version
4.4.2 from /tmp/tmp.5W879ZcLx9/venv/lib/python3.8/site-packages/tox/__init__.py
$ venv/bin/tox
py37: commands[0]> .tox/py37-flake8/bin/flake8 --version
py37: exit 2 (0.01 seconds) /tmp/tmp.5W879ZcLx9> .tox/py37-flake8/bin/flake8 --version
py37: FAIL โ in 0.1 seconds
py38: commands[0]> .tox/py38-flake8/bin/flake8 --version
py38: exit 2 (0.00 seconds) /tmp/tmp.5W879ZcLx9> .tox/py38-flake8/bin/flake8 --version
py37: FAIL code 2 (0.10=setup[0.09]+cmd[0.01] seconds)
py38: FAIL code 2 (0.02=setup[0.02]+cmd[0.00] seconds)
evaluation failed :( (0.23 seconds)
```
- Note how the `{envbindir}` substitution changed to the name of the env being _referenced_:
> py38: commands[0]> .tox/py38-flake8/bin/flake8 --version
Was this intentional? | 0easy
|
Title: Handle mentions in the text during a conversation
Body: Collapse mentioned users to their server nicknames when people are mentioned in conversations.
For example, the bot will see <@120931820938102>, we want to collapse that to that user's server nickname | 0easy
|
Title: test_s3_export fails with boto3 >= 1.36.0
Body: https://github.com/scrapy/scrapy/actions/runs/12845409692/job/35819443101?pr=6618
```
botocore.exceptions.StubAssertionError: Error getting response stub for operation PutObject: Expected parameters:
{'Body': <ANY>, 'Bucket': 'mybucket', 'Key': <ANY>},
but received:
{'Body': <s3transfer.utils.ReadFileChunk object at 0x7f8fc1dee750>,
'Bucket': 'mybucket',
'ChecksumAlgorithm': 'CRC32',
'Key': 'export.csv/3.json'}
``` | 0easy
|
Title: Document how libraries can do cleanup activities if Robot's timeout occurs
Body: Issue #5376 proposes enhancing the Process library so that if Robot's timeout occurs while waiting for a process to end, the process is killed instead of being left running in the background. That turned out to be surprisingly simple. Because other libraries may have similar needs, we should document how to do that in the User Guide. In addition to that, we should make sure that the related APIs are good.
When Robot's timeout occurs, `robot.errors.TimeoutError` is raised in the library code. It can basically occur at any point, similarly to the standard `KeyboardInterrupt` or `SystemExit`. The simplest way to do cleanup activities if any exception occurs is using `try/finally`. It is really simple and straightforward, and it also has the benefit that the exception is automatically re-raised:
```python
def keyword():
try:
do_something()
finally:
do_cleanup()
```
In some cases there may be a need to catch Robot's `TimeoutError` explicitly. That's easy as well, but then the exception should be re-raised explicitly:
```python
from robot.errors import TimeoutError
def keyword():
try:
do_something()
except TimeoutError:
do_cleanup()
raise
```
A problem with the above is that Python also has its own [TimeoutError](https://docs.python.org/3/library/exceptions.html#TimeoutError). This is problematic because `TimeoutError` shadows the standard exception and also because you don't know which exception `except TimeoutError:` refers to when you see just that line. Now that we are planning to make our `TimeoutError` part of the public API, I believe we should rename it to avoid confusion. I think `TimeoutExceeded` would be fine; it's different from `subprocess.TimeoutExpired`, but I'm open to other ideas as well. To avoid backwards compatibility problems, we should preserve the old name as an alias at least until RF 9.
If you wonder why we added `TimeoutError` in the first place when Python also has `TimeoutError`, the reason is that ours was there first. It existed already in RF 2.0, the first open source version, in 2008, but Python got its `TimeoutError` only in Python 3.3 in 2012.
| 0easy
|
Title: base64 and tracedSVG are not part of ImageObjectType
Body: Using the guide, it does not seem that base64 or tracedSVG exist within the object that defines image types. The documentation section in question is linked here: https://wagtail-grapple.readthedocs.io/en/latest/general-usage/graphql-types.html?highlight=tracedSvg%3A%20String%20base64%3A#imageobjecttype
The query:
```graphql
{
pages {
... on ExamplePage {
title
image {
src
base64
}
}
}
}
```
The output:
```graphql
{
"errors": [
{
"message": "Cannot query field \"base64\" on type \"ImageObjectType\".",
"locations": [
{
"line": 7,
"column": 9
}
]
}
]
}
```
I believe that this is inhibiting use with [Wagtail-Gatsby](https://github.com/NathHorrigan/wagtail-gatsby) | 0easy
|
Title: Message passed to `log_message` listener method has wrong type
Body: Messages should be instances of `robot.result.Message`, but the message that's created during execution, and passed to the `log_message` listener method, is `robot.output.Message` that's based on `robot.model.Message`. There are valid reasons to use a different message type during execution and when working with results afterwards, but `robot.output.Message` should be based on `robot.result.Message`.
Interestingly, the type hint in `robot.api.interfaces.ListenerV3` is wrong as well. It is currently `robot.model.Message` when it should be `robot.result.Message`.
This isn't a too severe issue at the moment, but needs to be fixed now that we plan to include messages in the result model also during execution (#5260). | 0easy
|
Title: RangeDetailView should have pagination
Body: The RangeDetailView page (oscar/offer/range.html) should paginate its product list, because a Range might contain too many products to display on a single page. | 0easy
|
Title: add tests: update on filtered query fails when using subquery
Body: The original bug was fixed, but we obviously lack tests that would have prevented the issue. This issue can be resolved by adding more tests covering usage of `Subquery` with updates. This might be a good first issue.
# Original bug report
**Describe the bug**
As of #1777 / 7f077c169920e2981d74c4e66c5a24884dc241fb and version 0.22.1, update queries using `tortoise.expressions.Subquery` in filters fail with the following exceptions:
```
# sqlite3
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 2, and there are 1 supplied.
# postgres
tortoise.exceptions.OperationalError: invalid input for query argument $1: 'test' ('str' object cannot be interpreted as an integer)
# from postgres' log
UPDATE "event" SET "name"=$1 WHERE "tournament_id" IN (SELECT "id" "0" FROM "tournament" WHERE "id"=$1)
```
**To Reproduce**
Run the following sample code under `tortoise-orm==0.22.1`:
```python
from tortoise.expressions import Subquery
from tortoise.models import Model
from tortoise import fields
from tortoise import Tortoise
import asyncio
class Tournament(Model):
id = fields.IntField(primary_key=True)
name = fields.CharField(max_length=255)
class Event(Model):
id = fields.IntField(primary_key=True)
tournament = fields.ForeignKeyField("models.Tournament", related_name="events")
name = fields.CharField(max_length=255)
async def init():
await Tortoise.init(
config={
"connections": {"default": "sqlite://:memory:"},
"apps": {
"models": {
"models": ["__main__"],
"default_connection": "default",
},
},
"use_tz": True,
}
)
await Tortoise.generate_schemas()
async def main():
await init()
await Tournament.create(name="abc")
await Event.create(tournament_id=1, name="abc")
# this succeeds
affected = await Event.filter(tournament_id=1).update(name="test")
print("0", affected)
# this succeeds too
events = await Event.filter(
tournament_id__in=Subquery(Tournament.filter(id=1).values_list("id", flat=True))
)
print("1", events)
# this fails
affected = await Event.filter(
tournament_id__in=Subquery(
Tournament.filter(id=1).values_list("id", flat=True)
),
).update(name="test")
print("2", affected)
await Tortoise.close_connections()
if __name__ == "__main__":
asyncio.run(main())
```
**Expected behavior**
The generated query should be correct, rows matching the filter should be updated.
**Additional context**
The good news is that it's fixed in #1797, however I cannot say why or how that specific PR fixes the underlying issue.
| 0easy
|
Title: Complete function for missing combinations of data
Body: # Brief Description
I would like to propose the ``complete`` function, similar to tidyr's ``complete`` function, to get missing combinations of data.
# Example API
```python
# code idea:
import pandas as pd
import pandas_flavor as pf


@pf.register_dataframe_method
def complete(df: pd.DataFrame, combinations: tuple, fill_values: dict = None):
    """
    Expose missing combinations of data in a dataframe.
    """
    # remember the current column order
    cols = df.columns
    # full cartesian product of the unique values of the given columns
    combo = pd.MultiIndex.from_product(
        [df.loc[:, col].unique() for col in combinations], names=combinations
    )
    df = (df.set_index(list(combinations))
            .reindex(combo)
            .reset_index()
            .reindex(columns=cols))
    if fill_values:
        df = df.fillna(fill_values)
    return df
# example data:
df = pd.DataFrame({'group': [1, 2, 1],
'item_id': [1, 2, 2],
'item_name': ['a', 'b', 'b'],
'value1': [1, 2, 3],
'value2': [4, 5, 6]})
group item_id item_name value1 value2
0 1 1 a 1 4
1 2 2 b 2 5
2 1 2 b 3 6
df.complete(combinations = ("group", "item_id", "item_name"))
group item_id item_name value1 value2
0 1 1 a 1.0 4.0
1 1 1 b NaN NaN
2 1 2 a NaN NaN
3 1 2 b 3.0 6.0
4 2 1 a NaN NaN
5 2 1 b NaN NaN
6 2 2 a NaN NaN
7 2 2 b 2.0 5.0
df.complete(combinations = ("group", "item_id", "item_name"), fill_values = {"value1" : 0})
group item_id item_name value1 value2
0 1 1 a 1.0 4.0
1 1 1 b 0.0 NaN
2 1 2 a 0.0 NaN
3 1 2 b 3.0 6.0
4 2 1 a 0.0 NaN
5 2 1 b 0.0 NaN
6 2 2 a 0.0 NaN
7 2 2 b 2.0 5.0
# more examples below
# ...
```
| 0easy
|
Title: log1m_exp and log_diff_exp functions
Body: When writing custom distributions, it is often helpful to have numerically stable implementations of `log_diff_exp(a, b) := log(exp(a) - exp(b))` and particularly `log1m_exp(x) := log(1 - exp(x))`. The naive implementations are not stable for many probabilistic programming use cases, and so probabilistic programming languages including [Stan](https://mc-stan.org/docs/2_29/functions-reference/composed-functions.html) and [PyMC](https://docs.pymc.io/en/v3/api/math.html) provide numerically-stable implementations (typically following [Machler, 2012](https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf)) of these functions.
As far as I can tell, Numpyro does not, and they are not present in Jax.
I wonder whether it would be worth providing them. I have written basic implementations following Machler for my own use. I would happily make a PR including them, but someone more experienced could probably write better/more idiomatic ones.
```
import jax.numpy as jnp
def log1m_exp(x):
"""
Numerically stable calculation
of the quantity log(1 - exp(x)),
following the algorithm of
Machler [1]. This is
the algorithm used in TensorFlow Probability,
PyMC, and Stan, but it is not provided
yet with Numpyro.
Currently returns NaN for x > 0,
but may be modified in the future
to throw a ValueError
[1] https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf
"""
# return 0. rather than -0. if
# we get a negative exponent that exceeds
# the floating point representation
arr_x = 1.0 * jnp.array(x)
oob = arr_x < jnp.log(jnp.finfo(
arr_x.dtype).smallest_normal)
    mask = arr_x > -0.6931472  # approx -log(2)
more_val = jnp.log(-jnp.expm1(arr_x))
less_val = jnp.log1p(-jnp.exp(arr_x))
return jnp.where(
oob,
0.,
jnp.where(
mask,
more_val,
less_val))
def log_diff_exp(a, b):
# note that following Stan,
# we want the log diff exp
# of -inf, -inf to be -inf,
# not nan, because that
# corresponds to log(0 - 0) = -inf
mask = a > b
masktwo = (a == b) & (a < jnp.inf)
return jnp.where(mask,
1.0 * a + log1m_exp(
1.0 * b - 1.0 * a),
jnp.where(masktwo,
-jnp.inf,
jnp.nan))
```
| 0easy
|
Title: Can't simulate a repeated `CircuitOperation` that contains a `repeat_until` `CircuitOperation`
Body: **Description of the issue**
Repetitively checking the syndromes of a prepared state before using it to measure stabilizers is an [important primitive for fault tolerance](https://courses.cs.washington.edu/courses/cse599d/06wi/lecturenotes19.pdf). True fault tolerance requires that this procedure happen multiple times.
For a minimum reproducible example, measuring a qubit until it's `|0>` and then applying an `X` gate to it multiple times will throw a ` raise ValueError('Infinite loop: condition is not modified in subcircuit.')`
**How to reproduce the issue**
```python
import cirq
import sympy
sim = cirq.Simulator()
q = cirq.LineQubit(0)
inner_loop = cirq.CircuitOperation(
cirq.FrozenCircuit(cirq.H(q), cirq.measure(q, key="inner_loop")),
use_repetition_ids=False,
repeat_until=cirq.SympyCondition(sympy.Eq(sympy.Symbol("inner_loop"), 0)),
)
outer_loop = cirq.Circuit(inner_loop, cirq.X(q), cirq.measure(q, key="outer_loop"))
circuit = cirq.Circuit(cirq.CircuitOperation(cirq.FrozenCircuit(outer_loop), repetitions=2))
print(circuit)
result = sim.run(circuit, repetitions=1)
print(result)
```
Will print
`ValueError: Infinite loop: condition is not modified in subcircuit.`
The alternative is to write the `CircuitOperation` out twice, but this breaks the printing of the result and throws
`ValueError: Cannot extract 2D measurements for repeated keys`
```python
import cirq
import sympy
sim = cirq.Simulator()
q = cirq.LineQubit(0)
inner_loop = cirq.CircuitOperation(
cirq.FrozenCircuit(cirq.H(q), cirq.measure(q, key="inner_loop0")),
use_repetition_ids=False,
repeat_until=cirq.SympyCondition(sympy.Eq(sympy.Symbol("inner_loop0"), 0)),
)
inner_loop1 = cirq.CircuitOperation(
cirq.FrozenCircuit(cirq.H(q), cirq.measure(q, key="inner_loop1")),
use_repetition_ids=False,
repeat_until=cirq.SympyCondition(sympy.Eq(sympy.Symbol("inner_loop1"), 0)),
)
outer_loop = cirq.Circuit(inner_loop, cirq.X(q), cirq.measure(q, key="outer_loop"))
outer_loop1 = cirq.Circuit(inner_loop1, cirq.X(q), cirq.measure(q, key="outer_loop1"))
circuit = cirq.Circuit(outer_loop, outer_loop1)
print(circuit)
result = sim.run(circuit, repetitions=1)
print(result)
```
**Cirq version**
`1.4.0.dev20240126200039`
| 0easy
|
Title: [Feature]: Consolidate `LRUCache` implementations
Body: ### ๐ The feature, motivation and pitch
#14805 introduced `cachetools.LRUCache` to support a different size for each item and to prepare for a thread-safe implementation. On the other hand, the code under `vllm/adapter_commons` uses the existing `vllm.utils.LRUCache`. To clean up the code, we should consolidate these implementations inside `vllm.utils.LRUCache`. This cache should support the following features:
- Pinning specific items in the cache (the existing `vllm.utils.LRUCache`)
- Custom function to compute the size for each item (`cachetools.LRUCache`)
- Custom callback functions when an item is removed (`vllm.adapter_commons.AdapterLRUCache`)
- The cache should remain compatible with `collections.abc.MutableMapping` interface so it can be passed to `cachetools.cached` to make it thread-safe.
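A hedged sketch of what the consolidated class could look like (a minimal, non-thread-safe illustration covering the features above; vLLM's real implementation will differ):
```python
from collections import OrderedDict
from collections.abc import MutableMapping


class UnifiedLRUCache(MutableMapping):
    """Size-aware LRU cache with pinning and removal callbacks."""

    def __init__(self, maxsize, getsizeof=None, on_remove=None):
        self._data = OrderedDict()
        self._maxsize = maxsize
        self._getsizeof = getsizeof or (lambda value: 1)  # per-item size
        self._on_remove = on_remove  # callback invoked on removal/eviction
        self._pinned = set()
        self._size = 0

    def __getitem__(self, key):
        value = self._data[key]
        self._data.move_to_end(key)  # mark as most recently used
        return value

    def __setitem__(self, key, value):
        if key in self._data:
            self._size -= self._getsizeof(self._data.pop(key))
        self._data[key] = value
        self._size += self._getsizeof(value)
        self._evict()

    def __delitem__(self, key):
        value = self._data.pop(key)
        self._size -= self._getsizeof(value)
        self._pinned.discard(key)
        if self._on_remove is not None:
            self._on_remove(key, value)

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def pin(self, key):
        self._pinned.add(key)  # pinned items are never evicted

    def _evict(self):
        for key in list(self._data):  # oldest first
            if self._size <= self._maxsize:
                break
            if key not in self._pinned:
                del self[key]
```
Because it implements the `MutableMapping` interface, an instance can be wrapped with `cachetools.cached(cache, lock=...)` to obtain thread safety.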
### Alternatives
Keep the two implementations separate. However, this may cause confusion since the two classes share the same name.
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 0easy
|
Title: Allow gaussian splatting to initialize start points from other data pipelines than COLMAP
Body: Currently, gaussian splatting takes in a pointcloud for initialization either through: 1) using a COLMAP dataset, or 2) adding a pointcloud to `transforms.json` and using the nerfstudio dataparser to read it. Currently, data intake pipelines like polycam, metashape, etc. do not output a pointcloud, so gaussian splatting will use a random initialization of points, which produces much worse results.
To fix this, we should edit the data intake scripts to add point clouds to the output `transforms.json` files and use the nerfstudio dataparser to read them. #2557 added the capability to load .ply files in the nerfstudio dataparser.
- [x] Add to the COLMAP data processing pipeline to save pointclouds of the SFM points as output, and add it to the `transforms.json`
- [x] Switch to nerfstudio dataparser as the default for splatting
- [ ] implement pointcloud saving in other data processing pipelines | 0easy
|
Title: Programming Language Distribution metric API
Body: The canonical definition is here: https://chaoss.community/?p=3430 | 0easy
|
Title: Avoid the error "Slowdown: 20 per 1 minute"
Body: I propose to avoid this by waiting 1 second before translating the text.
That is, when the user stops typing for 1 second, perform the translation automatically. | 0easy
|