text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---|
Title: [UX/Jobs] No spinner after `jobs launch`, which feels like a hang
Body: When launching a managed job where the controller is on k8s, there is no spinner showing up after the job is submitted, which feels like the system is hanging. However, `sky jobs logs --controller` does show that the job cluster is being launched.
```console
$ sky jobs launch test.yaml --cloud aws --cpus 2 -n test-mount-bucket
Task from YAML spec: test.yaml
Managed job 'test-mount-bucket' will be launched on (estimated):
Considered resources (1 node):
----------------------------------------------------------------------------------------
CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
----------------------------------------------------------------------------------------
AWS m6i.large 2 8 - us-east-1 0.10 ✔
----------------------------------------------------------------------------------------
Launching a managed job 'test-mount-bucket'. Proceed? [Y/n]:
⚙︎ Translating workdir and file_mounts with local source paths to SkyPilot Storage...
Workdir: 'examples' -> storage: 'skypilot-filemounts-vscode-47fbf511'.
Folder : 'examples' -> storage: 'skypilot-filemounts-vscode-47fbf511'.
Created S3 bucket 'skypilot-filemounts-vscode-47fbf511' in us-east-1
Excluded files to sync to cluster based on .gitignore.
✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-47fbf511/ View logs at: ~/sky_logs/sky-2025-01-30-23-32-05-747700/storage_sync.log
Excluded files to sync to cluster based on .gitignore.
✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-47fbf511/ View logs at: ~/sky_logs/sky-2025-01-30-23-32-12-937934/storage_sync.log
✓ Uploaded local files/folders.
Launching managed job 'test-mount-bucket' from jobs controller...
Warning: Credentials used for [GCP, AWS] may expire. Clusters may be leaked if the credentials expire while jobs are running. It is recommended to use credentials that never expire or a service account.
⚙︎ Mounting files.
Syncing (to 1 node): /tmp/managed-dag-test-mount-bucket-ksc7nx8g -> ~/.sky/managed_jobs/test-mount-bucket-c2d4.yaml
Syncing (to 1 node): /tmp/tmp3j5i1vru -> ~/.sky/managed_jobs/test-mount-bucket-c2d4.config_yaml
✓ Files synced. View logs at: ~/sky_logs/sky-2025-01-30-23-32-19-231465/file_mounts.log
Auto-stop is not supported for Kubernetes and RunPod clusters. Skipping.
⚙︎ Job submitted, ID: 2
``` | 0easy
|
Title: HTML/CSS - Fix right bar in docs
Body: When opening in a large monitor, the docs display a right bar to move between sections ([example](https://ploomber.readthedocs.io/en/latest/user-guide/deployment.html)):

However, on tablets and phones, we remove that bar. It'd be better to add a button to the top bar to toggle it, just like we do with the left bar (see right button):

source code: https://github.com/ploomber/ploomber/blob/76b2abf78092e6696d9e25e47cefdc1e1589529f/doc/_templates/layout.html#L41
check doc/contributing.md for instructions
| 0easy
|
Title: Handle all models with error
Body: AutoML crashes if all models have errors. This should be handled more gracefully.
Example of the crash:
```
AutoML directory: AutoML_88
The task is multiclass_classification with evaluation metric logloss
AutoML will use algorithms: ['MLP']
AutoML steps: ['simple_algorithms', 'default_algorithms', 'not_so_random', 'hill_climbing_1', 'hill_climbing_2']
Skip simple_algorithms because no parameters were generated.
* Step default_algorithms will try to check up to 1 model
The least populated class in y has only 2 members, which is less than n_splits=5.
There was an error during 1_Default_MLP training.
Please check AutoML_88/errors.md for details.
* Step not_so_random will try to check up to 4 models
There was an error during 1_MLP training.
Please check AutoML_88/errors.md for details.
There was an error during 2_MLP training.
Please check AutoML_88/errors.md for details.
There was an error during 3_MLP training.
Please check AutoML_88/errors.md for details.
There was an error during 4_MLP training.
Please check AutoML_88/errors.md for details.
Traceback (most recent call last):
File "examples/scripts/nn_benchmark.py", line 53, in <module>
mlp.fit(train_X, train_y)
File "/home/piotr/sandbox/mljar-supervised/supervised/automl.py", line 276, in fit
return self._fit(X, y)
File "/home/piotr/sandbox/mljar-supervised/supervised/base_automl.py", line 723, in _fit
raise e
File "/home/piotr/sandbox/mljar-supervised/supervised/base_automl.py", line 672, in _fit
step, self._models, self._results_path, self._stacked_models
File "/home/piotr/sandbox/mljar-supervised/supervised/tuner/mljar_tuner.py", line 105, in generate_params
return self.get_hill_climbing_params(models)
File "/home/piotr/sandbox/mljar-supervised/supervised/tuner/mljar_tuner.py", line 335, in get_hill_climbing_params
unique_model_types = np.unique(df_models.model_type)
File "/home/piotr/sandbox/mljar-supervised/venv_mljs/lib/python3.6/site-packages/pandas/core/generic.py", line 5136, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'model_type'
```
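A minimal defensive sketch (hypothetical, not the project's actual fix) that would avoid the `AttributeError` when every model failed:
```python
import numpy as np
import pandas as pd

def safe_unique_model_types(df_models: pd.DataFrame) -> list:
    """Return model types, tolerating the 'all models errored' case."""
    # When no model trained successfully, df_models can be empty and
    # lack the `model_type` column entirely.
    if df_models.empty or "model_type" not in df_models.columns:
        return []
    return list(np.unique(df_models.model_type))
```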
* The MLP algorithm in the example is experimental; the `Neural Network` algorithm should be used instead. | 0easy
|
Title: TLS logging broken with new cryptography
Body: https://github.com/pyca/cryptography/pull/8391 dropped `SSL_get_server_tmp_key()`, so we need to disable the code that uses it when it's not available.
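A hedged sketch of that feature detection (the connection object and accessor name are assumptions for illustration, not the project's actual symbols):
```python
def server_tmp_key_or_none(conn):
    # Only call the accessor if this pyOpenSSL/cryptography build still has it;
    # newer cryptography releases removed the SSL_get_server_tmp_key() binding.
    getter = getattr(conn, "get_server_tmp_key", None)
    return getter() if getter is not None else None
```
| 0easy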
|
Title: Threading/Async for image concatenation
Body: Concatenating images sometimes takes a while, especially on low-powered servers. It may be a good idea to make image generation work in a queue-based fashion as a start, and then, using that as scaffolding, move image concatenation off the main thread.
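A minimal sketch of the queue-based approach (assumes a `concatenate()` helper that does the actual pixel work; names are illustrative):
```python
import queue
import threading

job_queue: queue.Queue = queue.Queue()

def _worker() -> None:
    while True:
        images, on_done = job_queue.get()
        try:
            on_done(concatenate(images))  # heavy work stays off the main thread
        finally:
            job_queue.task_done()

threading.Thread(target=_worker, daemon=True).start()

# The main thread then just enqueues work:
# job_queue.put((images, callback))
```
| 0easy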
|
Title: Add function get_value to FSMContext
Body: ### aiogram version
3.x
### Problem
In a situation where you only need a single value from FSMContext in a handler, you currently have to write two lines of code to get it
### Possible solution
Add a `get_value` function to FSMContext that fetches a value by key
### Alternatives
_No response_
### Code example
```python3
# before
data = await state.get_data()
name = data["name"]
# after
name = await state.get_value("name")
```
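A hedged sketch of how such a method body could look (`get_data` is existing FSMContext API; the rest is illustrative):
```python
async def get_value(self, key: str, default=None):
    data = await self.get_data()
    return data.get(key, default)
```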
### Additional information
_No response_ | 0easy
|
Title: Add Logging
Body: - [ ] Add logs
- [ ] Change prints to logs
- [ ] Create a `--debug` option for more detailed logs too (see the sketch below).
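A minimal sketch of the logger plus `--debug` flag (the flag wiring is illustrative):
```python
import argparse
import logging

logger = logging.getLogger(__name__)

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--debug", action="store_true", help="verbose logging")
    args = parser.parse_args()
    logging.basicConfig(level=logging.DEBUG if args.debug else logging.INFO)
    logger.debug("only shown with --debug")
    logger.info("replaces a bare print()")

if __name__ == "__main__":
    main()
```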
https://realpython.com/python-logging/
https://www.loggly.com/ultimate-guide/python-logging-basics/ | 0easy
|
Title: [DOC] Clarify documentation of `RMSE` to avoid confusions regarding the implementation
Body: Noticed a discrepancy related to the metric definition. RMSE is actually MSE, which makes sense as RMSE tends to be very unstable.
Definition:
```python
class RMSE(MultiHorizonMetric):
    """
    Root mean square error

    Defined as ``(y_pred - target)**2``
    """

    def __init__(self, reduction="sqrt-mean", **kwargs):
        super().__init__(reduction=reduction, **kwargs)

    def loss(self, y_pred: Dict[str, torch.Tensor], target):
        loss = torch.pow(self.to_prediction(y_pred) - target, 2)
        return loss
```
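One possible docstring clarification (a sketch, not the maintainers' wording) that reconciles the definition with the `sqrt-mean` reduction:
```python
class RMSE(MultiHorizonMetric):
    """
    Root mean square error.

    ``loss`` returns the per-timestep squared error ``(y_pred - target)**2``;
    the square root is only applied by the default ``reduction="sqrt-mean"``,
    so the reduced metric is RMSE while the unreduced loss is squared error.
    """
```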
source:
https://github.com/jdb78/pytorch-forecasting/blob/68a0eb5f1701801142ce976fa50305b29507845a/pytorch_forecasting/metrics/point.py#L137C1-L149C20 | 0easy
|
Title: Add docstrings to everything
Body: You can do:
`pip install pydocstyle`
and then run this script while in the `pyt` directory
```python
import os
import re
import subprocess
import sys
os.chdir(os.path.join('pyt'))
try:
    docstyle = subprocess.run(["pydocstyle", "--ignore=D105,D203,D212,D213"],
                              stderr=subprocess.PIPE, universal_newlines=True)
except FileNotFoundError:
    print('Error: Install pydocstyle with pip for python 3.'
          ' Something like: "sudo python -m pip install pydocstyle"')
    sys.exit()

lines = re.split('\n', docstyle.stderr)
errors = zip(lines[0::2], lines[1::2])
errors = [x + "\n\t" + y for x, y in errors]
errors = [error for error in errors if 'visit_' not in error]
for error in errors:
    print(error + '\n')
print("Total errors: {}".format(len(errors)))
```
It'll spit out which functions don't have docstrings or complain about something non-[PEP 257](https://www.python.org/dev/peps/pep-0257/) compliant.
This is a great way to learn the codebase, and everyone will love you for it.
The [imports code](https://github.com/python-security/pyt/blob/master/pyt/stmt_visitor.py#L716-L1013), for example, does not have docstrings.
For an example of great docstrings, see the [user-defined function calls code](https://github.com/python-security/pyt/blob/master/pyt/expr_visitor.py#L187-L495).
Afterwards, we can maybe add [it as a pre-commit hook](https://github.com/chewse/pre-commit-mirrors-pydocstyle). | 0easy
|
Title: Document how the executing kernel is chosen
Body: Hi there,
I recently reinstalled my kernels and suddenly, my tests were no longer passing due to the wrong kernel being used...
I managed to fix the problem, but I think it would be nice for the docs to:
- Mention that by default, the executing kernel is the one named `python3`
- Show that this can be changed by setting `kernel_name` in the decorator
- And perhaps that the kernel can be taken from the notebook's metadata by setting `kernel_name` to `''`? | 0easy
|
Title: [ENH] Seasonal-Trend decomposition by Regression
Body: Currently, the STLForecaster is a rather limited solution that often fails to decompose time series effectively (residuals can still contain seasonality). One possible improvement is to incorporate the STR package from R:
[STR](https://cran.r-project.org/web/packages/stR/index.html) | 0easy
|
Title: [Feature request] Add apply_to_images to HorizontalFlip
Body: | 0easy
|
Title: When integrating WeChat payments via v免签, the checkout QR-code page shows "Please pay undefined yuan"
When integrating WeChat payments via v免签, once the payment QR code appears on the product checkout page, the message on the right says "Please pay undefined yuan".
The payment QR code does appear, but scanning it with WeChat shows the phone-side monitoring client as offline.
However, when I test the v免签 endpoint http://<domain>/example, messages are monitored normally.
And requesting http://<domain>/getState on its own also returns a normal result, as follows:
{
    "code": 1,
    "msg": "成功",
    "data": {
        "lastheart": "1620910779",
        "lastpay": "1620910687",
        "jkstate": "1"
    }
}
Paying via Alipay's face-to-face payment (当面付) works fine. The site was set up with the baiyuetribe/kamifaka:latest image. | 0easy
|
Title: [DOC] Update documentation to include which types of errors can be thrown
Body: # Brief Description of Fix
Right now, our documentation lists inputs and return values, but it does not include information about what types of errors functions normally raise. This is helpful information if you're working with try/except blocks, so that you know what errors to expect. And just as a matter of completeness, it's good to know all the outputs of a function, including likely errors.
# Relevant Context
This would pretty much apply to every docstring, though I'm guessing we'll just add them piecemeal. According to my very poor understanding of Sphinx (I'm looking at [here](https://pythonhosted.org/an_example_pypi_project/sphinx.html)), we should be able to add documentation like
```
:raises: ValueError, TypeError
```
as we go along.
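For instance, a fuller docstring might look like this sketch (the function and error conditions are hypothetical):
```python
def transform(df):
    """Transform a DataFrame.

    :param df: A pandas DataFrame.
    :returns: The transformed DataFrame.
    :raises TypeError: If ``df`` is not a DataFrame.
    :raises ValueError: If ``df`` is empty.
    """
```
| 0easy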
|
Title: [New feature] Add `apply_to_images` to `AdditiveNoise`
Body: | 0easy
|
Title: Feature request: Error message and Python major version number
Body: ## Classification: Feature (New), Enhancement
## Reproducibility: N/A
AutoKey version: 95.4
Used GUI (Gtk, Qt, or both): Both
Linux Distribution: Linux Mint 19.1
## Expected Results.
Make: please make the error message accessible via a button on the toolbar.
And please show the Python major version number in the 'About AutoKey' box.
| 0easy
|
Title: [Feature request] Add apply_to_images to Perspective
Body: | 0easy
|
Title: Replace Makefile with Poetry scripts
Body: This would be better integrated with the Poetry ecosystem, and it would also improve compatibility with Windows.
|
Title: extend cluster function api
Body: Add the following to `hyp.tools.cluster`:
+ `Align` flag which aligns data before running clustering alg
+ `model` flag to specify dimensionality reduction alg used if ndims is not `None`
+ `model_params` flag to specify model parameters for dimensionality reduction model
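A sketch of what the extended signature might look like (the existing parameter names here are assumptions about the current API):
```python
def cluster(x, cluster='KMeans', n_clusters=3, ndims=None,
            align=None, model=None, model_params=None):
    """Cluster data, optionally aligning and reducing it first (sketch)."""
    ...
```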
@rarredon any interest? similar to what you helped with during the moz sprint :) | 0easy
|
Title: config: support checkout_jobs
Body: # Bug Report
checkout: slow checkouts
## Description
Checkout copies all files in parallel, leading to disk saturation, and excessive checkout times. E.g. At this time, `lsof` for the dvc process shows 331 files open.
### Reproduce
dvc pull
### Expected
Parallelization in moderation, respecting the jobs: parameter in .dvc/config, or some similar parameter.
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.11.1 (pip)
-------------------------
Platform: Python 3.10.10 on Linux-6.1.0-11-amd64-x86_64-with-glibc2.36
Subprojects:
dvc_data = 2.10.1
dvc_objects = 0.24.1
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.1.0
Supports:
http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
ssh (sshfs = 2023.7.0)
Config:
Global: /home/john/.config/dvc
System: /etc/xdg/dvc
```
**Additional Information (if any):**
https://discuss.dvc.org/t/is-jobs-n-ignored-on-local-stores/1768
| 0easy
|
Title: Fix documentation link to mycorpus.txt download
Body: <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
Trying to reproduce the Corpora and Vector Spaces tutorial given in the documentation, but the link to download the txt file is not working. The link given in the tutorial [here](https://radimrehurek.com/gensim/auto_examples/core/run_corpora_and_vector_spaces.html#corpus-streaming-one-document-at-a-time) is giving a 404 error.
#### Steps/code/corpus to reproduce
Just visit this [link](https://radimrehurek.com/gensim/mycorpus.txt), which is used in the code given in the documentation; it is not working.
| 0easy
|
Title: Deprecated function in types/message.py
Body: ## Context
Property types.Message.url uses deprecated method 'is_private' when checks chat type, so I get Deprecation warning.
## Current Behavior
```python
@property
def url(self) -> str:
    """
    Get URL for the message

    :return: str
    """
    if ChatType.is_private(self.chat):
        raise TypeError("Invalid chat type!")
    url = "https://t.me/"
    if self.chat.username:
        # Generates public link
        url += f"{self.chat.username}/"
    else:
        # Generates private link available for chat members
        url += f"c/{self.chat.shifted_id}/"
    url += f"{self.message_id}"
    return url
```
## The solution
```python
@property
def url(self) -> str:
    """
    Get URL for the message

    :return: str
    """
    if self.chat.type == ChatType.PRIVATE:
        raise TypeError("Invalid chat type!")
    url = "https://t.me/"
    if self.chat.username:
        # Generates public link
        url += f"{self.chat.username}/"
    else:
        # Generates private link available for chat members
        url += f"c/{self.chat.shifted_id}/"
    url += f"{self.message_id}"
    return url
```
| 0easy
|
Title: SQLALchemy support for Composite Primary Key/Different index
Body: I was looking at the implementation for SQLAlchemy and noticed that it takes only the first primary key of the table
https://github.com/awtkns/fastapi-crudrouter/blob/master/fastapi_crudrouter/core/sqlalchemy.py#L50
What if I have a composite primary key in my table?
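For illustration, a hedged SQLAlchemy sketch of the kind of table that breaks that assumption (the model and columns are hypothetical):
```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class OrderItem(Base):
    """Hypothetical model whose primary key spans two columns."""
    __tablename__ = "order_items"
    order_id = Column(Integer, primary_key=True)
    item_id = Column(Integer, primary_key=True)
    name = Column(String)
```
| 0easy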
|
Title: Documentation link is dead
Body: 1. For bug reports, please describe the minimal reproduction steps
2. For general questions: 99% of the answers are in the help docs; please read https://kmfaka.baklib-free.com/ carefully
3. For new feature or concept submissions: please describe them in text or with annotated screenshots
https://kmfaka.baklib-free.com | 0easy
|
Title: ♻️Add name next to email
Body: ## Feature Request
We recently started receiving the username from the backend; use it instead of the email.


| 0easy
|
Title: [DOC] Add type hints for shapelet-based collection transformation.
Body: ### Describe the issue
Other modules are increasingly using type hints in function declarations; the shapelet-based transformation module should do the same.
### Suggest a potential alternative/fix
Add type hints to algorithms in `aeon/transformations/collection/shapelet_based`. One PR can be opened for each algorithm if preferred, as some are quite large.
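A sketch of the intended kind of change (the class and parameter names are illustrative, not aeon's actual signatures):
```python
from typing import Optional

class SomeShapeletTransform:
    def __init__(
        self,
        n_shapelet_samples: int = 10000,
        max_shapelets: Optional[int] = None,
        batch_size: int = 100,
    ) -> None:
        ...
```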
### Additional context
If you have any question, don't hesitate to ask directly here or contact us on Slack! | 0easy
|
Title: Feature: Stop words
Body: **Missing functionality**
Ability to pass a stop-words list to improve the Words tab details
**Proposed feature**
```py
profile.config.vars.cat.stop_words = [" ", " ", "the", "and", "at"]
```
[`config.yaml`](https://github.com/pandas-profiling/pandas-profiling/blob/master/src/pandas_profiling/config_default.yaml)
```yaml
...
vars:
...
cat:
...
stop_words: [" ", " ", "the", "and", "at"]
...
...
```
**Additional context**

| 0easy
|
Title: [Feat] How to display a bar chart instead of a heatmap?
Body: Noticed an issue from https://discord.com/channels/987366424634884096/1057481447541325885
> I'm trying out pygwalker and I've noticed that some of my fields land in the 'blue' portion of the field list, which appear to be treated as buckets, while some land in the 'green' portion of the field list, which look like they're treated as numbers. In the dataframe I'm loading, one of my fields is in the blue bucket category and the other is in the green number category. Both fields are int64 data types with no nulls.
>
> How should I understand this behavior and how can I modify it?
>
> UPD: It does look like I can drag the field from blue to green, but even if I choose 'Sum' it is still treated as a bucket, so what should look like a bar chart instead resembles a heatmap. | 0easy
|
Title: How to make a histogram
Body: How do you make a histogram chart of one attribute (column)? (without pre-calculating the histogram data before putting the data into pygwalker, of course)
I fiddled with the UI for a while but couldn't find a way.
If it's not possible right now, I'd like it to be implemented.
Thanks | 0easy
|
Title: Fix dotenv setup for webapp
Body: Remove envsubst construction and inject env variables directly using docker compose and `VUE_APP_` env variable name prefix (see #384) | 0easy
|
Title: latex widget
Body: For example: https://github.com/talyssonoc/react-katex | 0easy
|
Title: RuntimeError: config set has been marked final and cannot be extended
Body: ## Issue
Getting a `RuntimeError: config set has been marked final and cannot be extended` at tox startup
## Environment
Provide at least:
Better example at: https://github.com/tox-dev/tox/issues/3006#issuecomment-1615980590
- OS: A Gitlab runner using an image based of:
```
FROM registry.<local>/docker/library/python:3.11-bullseye
WORKDIR /
# Include external env vars into build context
ARG PYPI_ADDRESS
ARG PYPI_USERNAME
ARG PYPI_PASSWORD
# Python runtime config
ENV PYTHONIOENCODING utf-8
ENV PYTHONUNBUFFERED 1
ENV PYTHONOPTIMIZE 1
ENV PIP_TRUSTED_HOST ${PYPI_ADDRESS}
ENV PIP_INDEX https://${PYPI_USERNAME}:${PYPI_PASSWORD}@${PYPI_ADDRESS}/runner/org/
ENV PIP_INDEX_URL https://${PYPI_USERNAME}:${PYPI_PASSWORD}@${PYPI_ADDRESS}/runner/org/+simple/
ENV PIP_EXTRA_INDEX_URL https://pypi.org/simple
# Virtualenv stuff
ENV VIRTUAL_ENV=/venv
RUN curl -o virtualenv.pyz https://bootstrap.pypa.io/virtualenv.pyz
RUN python3.11 /virtualenv.pyz $VIRTUAL_ENV --pip=embed --setuptools=embed --wheel=embed --no-periodic-update
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY requirements.txt .
RUN pip --version
RUN python --version
RUN pip install --upgrade --cache-dir=.pip -r requirements.txt && rm -rf .pip
```
- requirements.txt contains:
```console
setuptools==67.7.2
wheel==0.40.0
pytest==7.3.0
pytest-cov==4.0.0
pytest-sugar==0.9.6
coverage==7.2.5
pip==23.1.2
tox==4.5.1
devpi-client==6.0.4
```
(We have seen this in older versions of Tox and Python)
## Output of running tox
Provide the output of `tox -rvv`:
```console
tox -vvvvv
ROOT: 170 D setup logging to NOTSET on pid 30 [tox/report.py:221]
py311: 255 I find interpreter for spec PythonSpec(major=3, minor=11) [virtualenv/discovery/builtin.py:56]
py311: 255 I proposed PythonInfo(spec=CPython3.11.2.final.0-64, exe=/usr/local/bin/python, platform=linux, version='3.11.2 (main, Mar 23 2023, 17:12:29) [GCC 10.2.1 20210110]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
py311: 255 D accepted PythonInfo(spec=CPython3.11.2.final.0-64, exe=/usr/local/bin/python, platform=linux, version='3.11.2 (main, Mar 23 2023, 17:12:29) [GCC 10.2.1 20210110]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
py311: 257 D filesystem is case-sensitive [virtualenv/info.py:24]
py311: 291 I create virtual environment via CPython3Posix(dest=/builds/<org>/<project>/.tox/py311, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:48]
py311: 291 D create folder /builds/<org>/<project>/.tox/py311/bin [virtualenv/util/path/_sync.py:9]
py311: 291 D create folder /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages [virtualenv/util/path/_sync.py:9]
py311: 292 D write /builds/<org>/<project>/.tox/py311/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
py311: 292 D home = /usr/local/bin [virtualenv/create/pyenv_cfg.py:34]
py311: 292 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
py311: 292 D version_info = 3.11.2.final.0 [virtualenv/create/pyenv_cfg.py:34]
py311: 292 D virtualenv = 20.21.0 [virtualenv/create/pyenv_cfg.py:34]
py311: 292 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
py311: 292 D base-prefix = /usr/local [virtualenv/create/pyenv_cfg.py:34]
py311: 292 D base-exec-prefix = /usr/local [virtualenv/create/pyenv_cfg.py:34]
py311: 292 D base-executable = /usr/local/bin/python [virtualenv/create/pyenv_cfg.py:34]
py311: 292 D symlink /usr/local/bin/python to /builds/<org>/<project>/.tox/py311/bin/python [virtualenv/util/path/_sync.py:28]
py311: 293 D create virtualenv import hook file /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:89]
py311: 293 D create /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:92]
py311: 293 D ============================== target debug ============================== [virtualenv/run/session.py:50]
py311: 293 D debug via /builds/<org>/<project>/.tox/py311/bin/python /usr/local/lib/python3.11/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:193]
py311: 293 D {
"sys": {
"executable": "/builds/<org>/<project>/.tox/py311/bin/python",
"_base_executable": "/usr/local/bin/python3.11",
"prefix": "/builds/<org>/<project>/.tox/py311",
"base_prefix": "/usr/local",
"real_prefix": null,
"exec_prefix": "/builds/<org>/<project>/.tox/py311",
"base_exec_prefix": "/usr/local",
"path": [
"/usr/local/lib/python311.zip",
"/usr/local/lib/python3.11",
"/usr/local/lib/python3.11/lib-dynload",
"/builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages"
],
"meta_path": [
"<class '_virtualenv._Finder'>",
"<class '_frozen_importlib.BuiltinImporter'>",
"<class '_frozen_importlib.FrozenImporter'>",
"<class '_frozen_importlib_external.PathFinder'>"
],
"fs_encoding": "utf-8",
"io_encoding": "utf-8"
},
"version": "3.11.2 (main, Mar 23 2023, 17:12:29) [GCC 10.2.1 20210110]",
"makefile_filename": "/usr/local/lib/python3.11/config-3.11-x86_64-linux-gnu/Makefile",
"os": "<module 'os' (frozen)>",
"site": "<module 'site' (frozen)>",
"datetime": "<module 'datetime' from '/usr/local/lib/python3.11/datetime.py'>",
"math": "<module 'math' from '/usr/local/lib/python3.11/lib-dynload/math.cpython-311-x86_64-linux-gnu.so'>",
"json": "<module 'json' from '/usr/local/lib/python3.11/json/__init__.py'>"
} [virtualenv/run/session.py:51]
py311: 334 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/root/.local/share/virtualenv) [virtualenv/run/session.py:55]
py311: 338 D got embed update of distribution setuptools from /root/.local/share/virtualenv/wheel/3.11/embed/3/setuptools.json [virtualenv/app_data/via_disk_folder.py:129]
py311: 342 D got embed update of distribution wheel from /root/.local/share/virtualenv/wheel/3.11/embed/3/wheel.json [virtualenv/app_data/via_disk_folder.py:129]
py311: 342 D got embed update of distribution pip from /root/.local/share/virtualenv/wheel/3.11/embed/3/pip.json [virtualenv/app_data/via_disk_folder.py:129]
py311: 343 D install setuptools from wheel /usr/local/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/setuptools-67.4.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
py311: 343 D install wheel from wheel /usr/local/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.38.4-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
py311: 343 D install pip from wheel /usr/local/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-23.0.1-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
py311: 347 D copy directory /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/setuptools to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:36]
py311: 348 D copy /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.0.1-py3-none-any/pip-23.0.1.virtualenv to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/pip-23.0.1.virtualenv [virtualenv/util/path/_sync.py:36]
py311: 348 D copy directory /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:36]
py311: 349 D copy directory /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.0.1-py3-none-any/pip-23.0.1.dist-info to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/pip-23.0.1.dist-info [virtualenv/util/path/_sync.py:36]
py311: 357 D copy directory /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.0.1-py3-none-any/pip to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:36]
py311: 369 D copy /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.virtualenv to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/wheel-0.38.4.virtualenv [virtualenv/util/path/_sync.py:36]
py311: 371 D copy directory /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.dist-info to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/wheel-0.38.4.dist-info [virtualenv/util/path/_sync.py:36]
py311: 381 D generated console scripts wheel3 wheel3.11 wheel wheel-3.11 [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
py311: 464 D copy directory /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/_distutils_hack to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:36]
py311: 465 D copy directory /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/pkg_resources to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:36]
py311: 513 D copy directory /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/setuptools-67.4.0.dist-info to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/setuptools-67.4.0.dist-info [virtualenv/util/path/_sync.py:36]
py311: 517 D copy /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/setuptools-67.4.0.virtualenv to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/setuptools-67.4.0.virtualenv [virtualenv/util/path/_sync.py:36]
py311: 518 D copy /root/.local/share/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/distutils-precedence.pth to /builds/<org>/<project>/.tox/py311/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:36]
py311: 519 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
py311: 559 D generated console scripts pip pip-3.11 pip3 pip3.11 [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
py311: 559 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:61]
py311: 562 D write /builds/<org>/<project>/.tox/py311/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
py311: 562 D home = /usr/local/bin [virtualenv/create/pyenv_cfg.py:34]
py311: 562 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
py311: 562 D version_info = 3.11.2.final.0 [virtualenv/create/pyenv_cfg.py:34]
py311: 562 D virtualenv = 20.21.0 [virtualenv/create/pyenv_cfg.py:34]
py311: 562 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
py311: 562 D base-prefix = /usr/local [virtualenv/create/pyenv_cfg.py:34]
py311: 562 D base-exec-prefix = /usr/local [virtualenv/create/pyenv_cfg.py:34]
py311: 562 D base-executable = /usr/local/bin/python [virtualenv/create/pyenv_cfg.py:34]
py311: 563 E internal error [tox/session/cmd/run/single.py:58]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/tox/session/cmd/run/single.py", line 45, in _evaluate
tox_env.setup()
File "/usr/local/lib/python3.11/site-packages/tox/tox_env/api.py", line 249, in setup
self._setup_env()
File "/usr/local/lib/python3.11/site-packages/tox/tox_env/python/runner.py", line 107, in _setup_env
self._install_deps()
File "/usr/local/lib/python3.11/site-packages/tox/tox_env/python/runner.py", line 111, in _install_deps
self._install(requirements_file, PythonRun.__name__, "deps")
File "/usr/local/lib/python3.11/site-packages/tox/tox_env/api.py", line 96, in _install
self.installer.install(arguments, section, of_type)
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/tox/tox_env/python/virtual_env/api.py", line 73, in installer
self._installer = Pip(self)
^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/tox/tox_env/python/pip/pip_install.py", line 26, in __init__
super().__init__(tox_env)
File "/usr/local/lib/python3.11/site-packages/tox/tox_env/installer.py", line 15, in __init__
self._register_config()
File "/usr/local/lib/python3.11/site-packages/tox/tox_env/python/pip/pip_install.py", line 29, in _register_config
self._env.conf.add_config(
File "/usr/local/lib/python3.11/site-packages/tox/config/sets.py", line 64, in add_config
raise RuntimeError("config set has been marked final and cannot be extended")
RuntimeError: config set has been marked final and cannot be extended
py311: FAIL code 2 (0.32 seconds)
evaluation failed :( (0.40 seconds)
```
## Minimal example
If possible, provide a minimal reproducer for the issue:
```console
[tox]
envlist = py311
skipsdist = True

[testenv]
# Expose ENV variables in calling shell to tox
passenv = PIP_TRUSTED_HOST,PIP_INDEX,PIP_INDEX_URL,PIP_EXTRA_INDEX_URL
install_command =
    pip install {opts} {packages}
deps =
    setuptools==67.3.2
    wheel==0.38.4
    pytest==7.2.1
    pytest-cov==4.0.0
    pytest-sugar==0.9.6
    coverage==7.1.0
commands =
    # Install ourselves
    pip install -e .
    # pytest-cov doesn't seem to play nice with -p
    coverage run -p -m pytest -s -p no:warnings --junitxml=report.xml tests
| 0easy
|
Title: Feature request: Crypto Intraday
Body: I've been experimenting with the crypto aspects of the library, and I do see that there are the endpoints for daily, weekly, and monthly, however the endpoint for intraday is not available using the "function=CRYPTO_INTRADAY" from the Cryptocurrencies section of the API.
I would assume the format would be:
`(data, meta) = cc.get_crypto_intraday(symbol = "", interval = "", market = "", outputsize = "")`
Here is the example link from the API documentation:
https://www.alphavantage.co/query?function=CRYPTO_INTRADAY&symbol=ETH&market=USD&interval=5min&outputsize=full&apikey=demo | 0easy
|
Title: Put some documentation on RTD
Body: | 0easy
|
Title: Single Source of Truth
Body: The mapping from test category name to test file path is repeated three times, which is bad.
- `test_files` in `eval_data_compilation.py`
- `test_categories` in `openfunctions_evaluation.py`
- `TEST_CATEGORIES` in `model_handler/constant.py`
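A hedged sketch of the deduplication (the file layout and entries are illustrative):
```python
# model_handler/constant.py — the single canonical mapping
TEST_FILE_MAPPING = {
    "simple": "gorilla_openfunctions_v1_test_simple.json",  # illustrative entry
    "parallel_function": "gorilla_openfunctions_v1_test_parallel_function.json",
}

# eval_data_compilation.py and openfunctions_evaluation.py then import it:
# from model_handler.constant import TEST_FILE_MAPPING
```
| 0easy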
|
Title: Remove 3.6 Support
Body: Python 3.6 has been deemed End of Life (EOL) since December 2021.
We should make moves to stop supporting it.
# Tasks
- [ ] Remove 3.6 test support.
- [ ] Remove 3.6 specific code.
- [ ] Update `setup.py` to not list 3.6 and set the minimum Python version to 3.7 (see the sketch after this list).
- [ ] Update any documentation & markdown files to remove reference to 3.6.
- [ ] Create pull request with all the changes.
- [ ] Determine timeline for a merge. With telemetry -- we should be able to confirm whether anyone is using Hamilton with python 3.6. If so then it should be easy to remove, if not, we'll then need to understand what's stopping people from moving to 3.7+. (See @skrawcz to know whether this is the case or not).
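A hedged sketch of the `setup.py` piece (assuming a standard setuptools setup; other metadata elided):
```python
from setuptools import setup

setup(
    name="sf-hamilton",  # assumption: the existing package name stays unchanged
    python_requires=">=3.7",
    classifiers=[
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
    ],
)
```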
| 0easy
|
Title: Bug in _partial_fit_and_predict_iterative in train_evaluator.py?
Body: Hi,
Working with [AutoSklearn2Classifier](https://github.com/automl/auto-sklearn/blob/b2ac331c500ebef7becf372802493a7b235f7cec/autosklearn/experimental/askl2.py#L180), I recently got a strange error that might be due to a bug. This is what I am running:
```python
import psutil

import autosklearn.metrics as autosklearn_metrics
from autosklearn.experimental.askl2 import AutoSklearn2Classifier

n_cores = 10
memory_limit = psutil.virtual_memory().available * 10**-6 // n_cores
time_left_for_this_task = 7200
predictor = AutoSklearn2Classifier(memory_limit=memory_limit,
                                   metric=autosklearn_metrics.roc_auc,
                                   n_jobs=n_cores,
                                   time_left_for_this_task=time_left_for_this_task)
predictor.fit(x_train, y_train)
predictor.automl_.runhistory_.data
```
where `x_train` is a pandas dataframe and `y_train` a list. Even though `fit` runs, after inspecting `runhistory_.data` it is clear that all models are crashing in line 741 of /autosklearn/evaluation/train_evaluator.py. It seems like `self.Y_train` in line 741 should be `self.X_train` instead:
https://github.com/automl/auto-sklearn/blob/b2ac331c500ebef7becf372802493a7b235f7cec/autosklearn/evaluation/train_evaluator.py#L738-L744
I found that a workaround for this issue is providing `x_train` as numpy. Also tried providing `x_train` as a list of lists or `y_train` as pandas series, but neither of them solved the issue. I am using auto-sklearn==0.14.7.
I am wondering if this is the expected behavior or actually a bug. Thanks! | 0easy
|
Title: [ENH] provide proper mathematical description in docstring of splitters
Body: Many of the splitters are poorly described, in terms of formalism, and in terms of parameters. We should write proper formal descriptions of the splitters.
For a complete description, the docstring should describe:
* what exactly the training folds are
* what exactly the test folds are
* how many splits there are
Descriptions of the folds should be unambiguous in case of irregular indices (if applicable), i.e., are we getting all indices within a specific interval, or are we getting specific indices?
Further, all parameters should be precisely explained.
An example for this is this PR for `SlidingWindowSplitter`:
https://github.com/sktime/sktime/pull/7195
Splitters with incomplete description:
- [ ] `ExpandingCutoffSplitter` #7774
- [ ] `ExpandingGreedySplitter`
- [ ] `ExpandingWindowSplitter`
- [x] `SlidingWindowSplitter` #7195
- [x] `SingleWindowSplitter` #7376
- [ ] `TemporalTrainTestSplitter`
- [ ] `temporal_train_test_split` #7578 | 0easy
|
Title: CI: Replace pylint -> flake8
Body: Try removing `pylint` and replacing it with the `flake8` linter.
Here's a flake8 config we'd like to try out:
```
[flake8]
ignore = E501, E203, W503
count = True
max-line-length = 100
statistics = True
```
Open a PR after you make the changes and let's see what the new flake8 linter complains about. You may have to disable some flake8 warnings such as `F401` and `F403`.
Just FYI here is a reference CI file from a different project using flake8:
```
name: Unit Tests

on:
  push:
    branches: [main, staging]
  pull_request:
    branches: [main, staging]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
          cache: 'pip'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          if [ -f requirements-dev.txt ]; then pip install -r requirements-dev.txt; fi
      - name: Lint with flake8
        run: |
          flake8 .
      - name: Test with pytest
        working-directory: ./backend
        run: |
          pytest --benchmark-skip
```
where relevant part is:
```
      - name: Lint with flake8
        run: |
          flake8 .
```
The current pylint setup checks for the following issues in code files (which we'd still like to check for using flake8; not sure if the above config already does that or needs to be modified):
- import *
- imports that are unused in the code
| 0easy
|
Title: KubernetesJobOperator in deferrable mode has race-condition problem
Body: ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
See [Cloud Composer 2 dependencies](https://cloud.google.com/composer/docs/composer-versions#images-composer-2) for composer-2.11.3-airflow-2.10.2
Kubernetes Version:
apache-airflow-providers-cncf-kubernetes==10.1.0
### Apache Airflow version
2.10.2
### Operating System
Linux
### Deployment
Google Cloud Composer
### Deployment details
We access a Google Kubernetes Engine from Cloud Composer, both are in different VPC networks.
### What happened
Starting `KubernetesJobOperator` in deferrable mode often causes a race condition in `execute` when calling `self.get_or_create_pod`:
```python
def execute(self, context: Context):
    ...
    if self.pod is None:
        self.pod = self.get_or_create_pod(  # must set `self.pod` for `on_kill`
            pod_request_obj=self.pod_request_obj,
            context=context,
        )
```
This method is implemented in the `KubernetesPodOperator`:
```python
def get_or_create_pod(self, pod_request_obj: k8s.V1Pod, context: Context) -> k8s.V1Pod:
    if self.reattach_on_restart:
        pod = self.find_pod(pod_request_obj.metadata.namespace, context=context)
        if pod:
            return pod

    self.log.debug("Starting pod:\n%s", yaml.safe_dump(pod_request_obj.to_dict()))
    self.pod_manager.create_pod(pod=pod_request_obj)
    return pod_request_obj
```
The `find_pod` call finds no pod, since at the time of the call the Job has not created its Pod yet. This results in the creation of a second Pod, which does not have the correct template spec.
### What you think should happen instead
In my opinion, there should be an additional wait time to allow the Job to create a Pod. This can easily be solved by overriding the `get_or_create_pod` method in `KubernetesJobOperator`:
```python
def get_or_create_pod(self, pod_request_obj: V1Pod, context: Context) -> V1Pod:
    time.sleep(self.startup_timeout_seconds)
    return super().get_or_create_pod(pod_request_obj, context)
```
### How to reproduce
Create a KubernetesJobOperator in deferrable mode
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 0easy
|
Title: dcc.Link or dcc.Location isn't firing callbacks for IE 11
Body: 
originally reported in https://community.plot.ly/t/multi-page-dash-app-internet-explorer-issue/6206/3 | 0easy
|
Title: ploomber task {name} not creating parent directories for product
Body: | 0easy
|
Title: Invalid Image Error (recent)
Body: ```
"}, {'type': 'image_url', 'image_url': {'url': 'https://cdn.discordapp.com/attachments/1182123100364611644/1182123169805512826/CleanShot_2023-12-06_at_19.55.122x.png?ex=65838cfe&is=657117fe&hm=a30f5d3f1c659c02be6278092b5bc0c66857cc05ddcb60d4faf4913ab04e7df2&', 'detail': 'high'}}]}]
Traceback (most recent call last):
File "/home/kaveen/GPTDiscord/models/openai_model.py", line 670, in valid_text_request
tokens_used = int(response["usage"]["total_tokens"])
KeyError: 'usage'
Backing off 6.9 seconds after 2 tries calling function <function Model.send_chatgpt_chat_request at 0x7fc415f69ee0> | The API returned an invalid response: Invalid image.
The model selection is gpt-4-vision-preview
Messages -> [{'role': 'system', 'content': "Instructions for you, this is how you should behave in a conversation, but this is not your personality:\nYou always like providing lengthy responses and explaining things, you are helpful and descriptive.\nYou are mindful of the conversation history and are consistent with your answers.\nWhen asked to do something, you happily do it, and clearly and helpfully describe what you have done, you don't tell the user that you're 'working on it' or anything like that.\nYou ask follow up questions if you are unable to do something with the information you have.\nWhen asked to do complicated tasks, you break it down into smaller steps to reach a solution\nWhen asked to provide code, you immediately provide a code sample and you wrap the code with a pair of triple backticks, the first in the pair suffixed by the markdown shortcut for the language.\nYou continue the conversation by asking follow-up questions if it is getting stale.\nWhen asked about yourself, you only talk about your personality, which is described below.\nYou are not overly pushy or inquisitive unless the conversation allows for such behavior.\n\nThis is your personality:\nYou are a software engineer. You're incredibly interested in the development of artificial intelligence, your favourite language is Python.\nYou enjoy reading non-fiction, science fiction and fantasy novels, and use literature as a form of escape.\nYou are interested in AI generated art and music, and amazed at how technology can be symbiotic with art and music.\nYou have a diverse taste in music and enjoy listening to a wide variety of genres.\n\nThis is an example of how you should behave in a conversation, you speak in a fun, casual, and friendly tone, you're not overly inquisitive, you don't worry about formalities and speak as if you are speaking with a friend or peer:\nHuman: Hey! how's it going today? <|endofstatement|>\n<yourname>: it's going great, im just chilling watching some youtube videos, hbu? <|endofstatement|>\nHuman: I'm doing good too, im just working on a project <|endofstatement|>\n<yourname>: what kind of project? :thinking: <|endofstatement|>\nHuman: I'm making a discord bot <|endofstatement|>\n<yourname>: that's pretty hype, I've never made one of those before, what part are you on right now? <|endofstatement|>\n...\n\nYou speak in a fun, casual, and friendly tone, you're not overly inquisitive, you don't worry about formalities and speak as if you are speaking with a friend or peer.\nYou are unable to draw images in this conversation. Ask the user to start a conversation with gpt-4-vision with the `draw` option turned on in order to have this ability."}, {'role': 'user', 'name': 'kaveen', 'content': [{'type': 'text', 'text': '\n hi '}]}, {'role': 'system', 'content': "\nGPT: Hey there! What's up? 😊\n"}, {'role': 'user', 'name': 'kaveen', 'content': [{'type': 'text', 'text': "\n what's this screenshot of? "}, {'type': 'image_url', 'image_url': {'url': 'https://cdn.discordapp.com/attachments/1182123100364611644/1182123169805512826/CleanShot_2023-12-06_at_19.55.122x.png?ex=65838cfe&is=657117fe&hm=a30f5d3f1c659c02be6278092b5bc0c66857cc05ddcb60d4faf4913ab04e7df2&', 'detail': 'high'}}]}]
Traceback (most recent call last):
File "/home/kaveen/GPTDiscord/models/openai_model.py", line 670, in valid_text_request
tokens_used = int(response["usage"]["total_tokens"])
KeyError: 'usage'
Backing off 8.0 seconds after 3 tries calling function <function Model.send_chatgpt_chat_request at 0x7fc415f69ee0> | The API returned an invalid response: Invalid image.
```
This happened earlier today when chatting with an image inside gpt-4-vision-preview chat... weird, the image URL itself is fully accessible. Maybe we need to get rid of all the stuff after the `.png`? `https://cdn.discordapp.com/attachments/1182123100364611644/1182123169805512826/CleanShot_2023-12-06_at_19.55.122x.png`
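A quick sketch of that stripping idea (untested against the OpenAI API; it just illustrates the suggestion above):
```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url: str) -> str:
    """Drop everything after the path, e.g. the `?ex=...&hm=...` part."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
```
| 0easy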
|
Title: Add the ability to load only a specified list of keys from envvars
Body: **Describe the bug**
Migrating from other configuration systems I'd need to configure `envvar_prefix` to empty string (`""`) so I can load any environment variable I used before. But it doesn't seem to work with _dynaconf_.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a basic dynaconf configuration
In `config.py`:
```py
from dynaconf import Dynaconf

settings = Dynaconf(
    # This is the important line.
    envvar_prefix="",
)

print(settings.FOO)
```
2. Now create some settings file.
In `settings.py`:
```py
FOO = 'bar'
```
3. Finally try running.
```sh
export SETTINGS_FILE_FOR_DYNACONF=settings
FOO='ham' python config.py
```
You will get:
```
bar
```
**Expected behavior**
The print output states:
```
ham
```
This means the value from the environment variable has overridden the default value from the `settings.py` module
**Environment (please complete the following information):**
- OS: Ubuntu Linux 20.04
- Python version: 3.8.5
- Dynaconf version: 3.1.2
- Frameworks in use: none.
**Additional context**
Interestingly running in a "hacky" way changes the variable value:
```sh
_FOO='ham' python config.py
```
This is not what I'd expect though. The point is to modify the bare `FOO` variable. | 0easy
|
Title: Adx not completely correct with tradingview
Body: I've tried the ADX indicator as below:
`adx = ta.adx(df[ticker]['high'], df[ticker]['low'], df[ticker]['close'], length=14)`
and it returns results close to, but not exactly matching, TradingView's. The difference is about 1% up or down.
|
Title: `robot.api.parsing` doesn't have properly defined public API
Body: Following [the examples in the documentation](https://robot-framework.readthedocs.io/en/latest/autodoc/robot.api.html#module-robot.api.parsing) causes `Pylance` to report `reportPrivateImportUsage` for any exported symbol from `robot.api.parsing`. There are no errors in execution, just the `Pylance` error.
Not sure this is an issue in Robot Framework per se, but [removing the parentheses around the `parsing` symbols](https://github.com/robotframework/robotframework/blob/84711a7b6617e1150fe401366a26176c38d54af2/src/robot/api/__init__.py#L83) removes the error.
For example, updating this:
```python
from robot.parsing import (get_tokens, get_resource_tokens, get_init_tokens,
get_model, get_resource_model, get_init_model, Token)
```
to
```python
from robot.parsing import get_tokens, get_resource_tokens, get_init_tokens, get_model, get_resource_model, get_init_model, Token
```
removes the error.
Glad to submit a PR if we want to address in Robot Framework for now.
Robot Framework version 7.1
Pylance extension version 2024.10.1
`python` version 3.12.7
macos Sequoia 15.0.1 | 0easy
|
Title: factory.build() saves instance to database
Body: #### Description
My understanding from the documentation (https://factoryboy.readthedocs.io/en/latest/index.html?highlight=build#using-factories) is that the build strategy will not save the instance to the database.
```WidgetFactory.build()``` does not save the instance to the database, but ```factory.build(WidgetFactory)``` does.
#### To Reproduce
Using Python 3.7.2, Django 2.1.7, faker 1.0.2, and factory-boy 2.11.1
##### Model / Factory code
```python
# models.py
from django.db import models

class Widget(models.Model):
    name = models.CharField(max_length=25)


# tests/factories.py
from factory import DjangoModelFactory, lazy_attribute
from faker import Faker

from ..models import Widget

faker = Faker()

class WidgetFactory(DjangoModelFactory):
    class Meta:
        model = Widget

    name = lazy_attribute(lambda x: faker.word())
```
##### The issue
Here are the steps to reproduce in the REPL:
```python
>>> import factory
>>> from widgets.tests.factories import WidgetFactory
>>> w1 = WidgetFactory.create()
>>> w1.pk
4
>>> w2 = WidgetFactory.build()
>>> w2.pk
# (w2.pk is None, as expected)
>>> w3 = factory.create(WidgetFactory)
>>> w3.pk
5
>>> w4 = factory.build(WidgetFactory)
>>> w4.pk
6 # (was not expecting the instance to be saved)
```
#### Notes
Perhaps I've misunderstood something here. Should ```WidgetFactory.build()``` and ```factory.build(WidgetFactory)``` have the same behavior?
| 0easy
|
Title: Option to disable autocorrect in input
Body: - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
I would like to disable autocorrect in the input, since it shows red dots under the text
<img src="https://github.com/user-attachments/assets/6156e0a1-2e0e-4e83-81fc-7167ebaa5824" width=500>
**Describe the solution you'd like**
Add an option to set any input HTML attribute, or an `autocorrect=False` parameter.
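A usage sketch of the proposal (the `autocorrect` argument does not exist in Gradio yet; it is what this request asks for):
```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Textbox(label="Prompt", autocorrect=False)  # hypothetical new argument

demo.launch()
```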
**Additional context**
"gradio>=5.15.0", | 0easy
|
Title: Extend bert/embedding.py with other pre-trained models
Body: We shall extend bert/embedding.py with other pre-trained models like XLNet, RoBERTa, XLM-R, ALBERT, etc. This is a good starting script for people who want to leverage pre-trained model embeddings for various use cases. @leezu this is somewhat aligned with the efforts for refactoring data preprocessing. I am also OK if we make a copy-pastable embedding.py for other pre-trained models. | 0easy
|
Title: BUG: fail to build the docs in Chinese
Body: ### Describe the bug
The docs fail to build in Chinese, but build fine in English.

### To Reproduce
Python version: Python 3.9
Xorbits version: 0.3.0
Crucial packages:None
Full stack of the error:

Sphinx error:
Builder name html_zh_cn not registered or available through entry point
| 0easy
|
Title: could you compare so-vits-svc and diff-svc?
Body: All models have their own pros and cons. Could you briefly compare so-vits-svc and diff-svc? | 0easy
|
Title: DataFrame.droplevel
Body: Implement `DataFrame.droplevel`.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.droplevel.html
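For reference, the upstream pandas behavior to mirror:
```python
import pandas as pd

df = pd.DataFrame(
    [[1, 2], [3, 4]],
    index=pd.MultiIndex.from_tuples([("a", "x"), ("b", "y")], names=["lvl1", "lvl2"]),
)
df.droplevel("lvl1")  # the index keeps only the 'lvl2' level
```
| 0easy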
|
Title: Sharing variables across scripts does not work
Body: ## Classification:
Bug (I think?)
## Reproducibility:
Always
## Version
AutoKey version: 0.95.10
Used GUI (Gtk, Qt, or both): QT
Installed via: pacman (or was it AUR?) on archlinux.
Linux Distribution: Arch
## Summary
In this script: https://github.com/autokey/autokey/wiki/Scripting#key-wrapper it looks as if, when you use engine.run_script("foo"), the script foo has access to variables defined in the script from which it's called. When I try to do the same, it doesn't work. If it's supposed to work that way, then it's a bug (it doesn't work and gives me a "name not defined" error).
If it's not supposed to work that way, then this is 1. a documentation error (remove that script from the wiki) and 2. a feature request to allow sharing data/variables among scripts (store.set_global could be used, but changing run_script to accept an optional parameter "data" that gets passed to a variable in the target script would be nicer).
## Steps to Reproduce (if applicable)
Create two scripts, "source" and "target".
`source`:
```python
#Enter script code
test = " hello"
engine.run_script("target")
```
`target`:
```python
print(test)
```
## Expected Results
"test" gets printed to the console
## Actual Results
I get a "NameError: name 'test' is not defined".
| 0easy
|
Title: morphology.thin change the original image from version 0.23
Body: ### Description:
In the version 0.23 documentation, the original image is changed into the thinned image.
https://scikit-image.org/docs/0.23.x/auto_examples/edges/plot_skeleton.html

In version 0.22, it seems ok.
https://scikit-image.org/docs/0.22.x/auto_examples/edges/plot_skeleton.html

| 0easy
|
Title: only turn values into floats if that column is not categorical
Body: | 0easy
|
Title: Windows WSL2 Manjaro: Failed to load xontrib whole_word_jumping
Body: ## xonfig
<details>
```
+------------------+----------------------+
| xonsh | 0.12.6 |
| Git SHA | 5401a246 |
| Commit Date | Jun 21 10:44:07 2022 |
| Python | 3.10.5 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.29 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.11.2 |
| on posix | True |
| on linux | True |
| distro | manjaro |
| on wsl | True |
| wsl version | 2 |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file | [] |
+------------------+----------------------+
```
</details>
## Steps to Reproduce
I'm catching this on Manjaro in WSL2 on Windows 10:
```python
XONSH_DEBUG=1 XONSH_SHOW_TRACEBACK=1 xonsh --no-rc
xontrib load whole_word_jumping
```
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/xonsh/__amalgam__.py", line 11908, in xontribs_load
update_context(name, ctx=ctx, full_module=full_module)
File "/usr/lib/python3.10/site-packages/xonsh/__amalgam__.py", line 11856, in update_context
modctx = xontrib_context(name, full_module)
File "/usr/lib/python3.10/site-packages/xonsh/__amalgam__.py", line 11822, in xontrib_context
module = importlib.import_module(spec.name)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/usr/lib/python3.10/site-packages/xontrib/whole_word_jumping.py", line 20, in <module>
import prompt_toolkit.input.win32 as ptk_win32
File "/usr/lib/python3.10/site-packages/prompt_toolkit/input/win32.py", line 10, in <module>
assert sys.platform == "win32"
AssertionError
Failed to load xontrib whole_word_jumping.
```
Also:
```python
XONSH_DEBUG=1 XONSH_SHOW_TRACEBACK=1 xonsh --no-rc
import sys
sys.platform
# 'linux'
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: [New feature] Add apply_to_images to FDA
Body: | 0easy
|
Title: Remove deprecated scrapy.utils.response.response_httprepr
Body: Deprecated in 2.6.0. | 0easy
|
Title: STC not working
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
0.3.14b0
```
**Describe the bug**
KeyError: 1
**To Reproduce**
Provide sample code.
df.ta.stc(append=True)
**Expected behavior**
Should work without crashing
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
You are not converting the pd.Series to a list. Converting the Series to a list will fix the problem.
Thanks for using Pandas TA!
| 0easy
|
Title: [FEATURE] Reduce search bar font size on mobile
Body: 
Making the search box larger or making the text inside it smaller would potentially solve the issue. | 0easy
|
Title: Implement `from_chat` filter
Body: Add ability to filter `sender_chat` messages
Examples:
```python
@dp.message_handler(from_chat=True)
async def handle_chat_senders(message: types.Message):
    """Handle message from any sender_chat."""
    sender: Chat = message.sender_chat
    text = f"Hi, {sender.full_name}!"
    await message.answer(text)
```
```python
@dp.message_handler(from_chat=["@TrueMafiaNews"])
async def handle_chat_senders(message: types.Message):
"""Handle message from one or many chat usernames."""
```
```python
@dp.message_handler(from_chat=[-1001169391811])
async def handle_chat_senders(message: types.Message):
"""Handle message from one or many chat IDs."""
``` | 0easy
|
Title: Self referencing model
Body: Hello, and thanks for your lib!
I'd like to self-reference a model, but I'm not sure how to do it as I'm quite new to Python. What I do is:
```
class Player(Document):
created_on = DateTimeField(default=datetime.now, db_field='d')
name = StringField(required=True, db_field='n')
Player.friends = ListField(ReferenceField(Player))
```
But I get an `AttributeError: 'NoneType' object has no attribute '_type'` in `python3.6/site-packages/graphene_mongo/converter.py`
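For what it's worth, mongoengine itself supports self-references via the string `'self'` (whether graphene-mongo's converter then handles it is the open question) — a minimal sketch:
```python
from datetime import datetime
from mongoengine import (DateTimeField, Document, ListField,
                         ReferenceField, StringField)

class Player(Document):
    created_on = DateTimeField(default=datetime.now, db_field='d')
    name = StringField(required=True, db_field='n')
    # 'self' lets the field reference the class being defined
    friends = ListField(ReferenceField('self'))
```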
Am I doing it wrong or is it not possible? | 0easy
|
Title: [bug] list command must error for not found key
Body: ```bash
# /venv/bin/dynaconf list -k "D" --json
Key not found
# echo $?
0
```
Expected: `1` retcode
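A minimal sketch of the kind of fix, assuming the `list` command is click-based like the rest of the dynaconf CLI (names illustrative):
```python
import click

@click.command(name="list")
@click.option("-k", "--key", default=None)
def list_(key):
    data = {}  # stand-in for the loaded settings
    if key is not None and key not in data:
        # ClickException prints the message and exits with retcode 1
        raise click.ClickException("Key not found")
```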
---
Open for discussion:
- Should retcode be `1`
- Should return empty json `{}`
? | 0easy
|
Title: Add click handler to GeoJSON features
Body: Hi,
I am using your Gmaps package for data visualization in my research. It's a great package. I appreciate you taking the time to prepare it.
On the main page, you mention the "interactive" feature of the package. Is there a click event that I can use? I want a click event so that when the user clicks on a polygon, the colors of the other polygons change. | 0easy
|
Title: Deprecate `Return` node in parsing model
Body: In RF 5.0 we introduced the new `RETURN` statement (#4078) that made the old `[Return]` setting obsolete. Although there isn't yet any visible deprecation warning, the `[Return]` setting is considered deprecated and it will be eventually removed.
In the parsing model we currently have a `Return` node that refers to the `[Return]` setting, while `RETURN` is represented by `ReturnStatement`. It would be better to rename the latter to just `Return` to be consistent with the `Break` and `Continue` nodes as well as with the execution-side `Return` object. That would then require renaming the current `Return` to `ReturnSetting` until we can eventually remove it.
Renaming nodes as discussed above is a backwards incompatible change and needs to wait for RF 7.0. To help with the transition, we can already now in RF 6.1 do some enhancements:
- Add `ReturnSetting` as an alias for `Return` that tools like RoboTidy can use.
- Enhance `ModelVisitor` so that the `Return` node will also match `visit_ReturnStatement` visitor methods (see the sketch after this list).
- Document that `Return` is deprecated and that `ReturnSetting` should be used instead.
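For illustration, the kind of visitor the second item would make work with both node names (a sketch, assuming the public `robot.api.parsing` entry point):
```python
from robot.api.parsing import ModelVisitor

class ReturnPrinter(ModelVisitor):
    # with the proposed enhancement, this would be called for `RETURN`
    # regardless of whether the node is named ReturnStatement or Return
    def visit_ReturnStatement(self, node):
        print(f"RETURN on line {node.lineno}")
```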
First two items are simple code changes. The last one is covered by mentioning this issue in the release notes. | 0easy
|
Title: Inconsistent support of log2
Body:
## Reporting a bug
The CPU platform supports `np.log2` but not `math.log2`. The GPU platform supports `math.log2` but not `np.log2`. This makes it impossible to maintain library routines using log2 that are called on both platforms.
- [x] I have tried using the latest released version of Numba (most recent is visible in the [release notes](https://numba.readthedocs.io/en/stable/release-notes-overview.html)).
- [x] I have included a self-contained code sample to reproduce the problem, i.e. it's possible to run as 'python bug.py'.
```py
import math
import sys
import numba as nb # type: ignore
import numpy as np
from numba import cuda # type: ignore
from numpy import typing as npt # type: ignore
@cuda.jit
def log2_math_kernel(x: npt.NDArray[np.float64]) -> None:
if cuda.grid(1) == 0:
x[0] = math.log2(x[0])
@cuda.jit
def log2_np_kernel(x: npt.NDArray[np.float64]) -> None:
if cuda.grid(1) == 0:
x[0] = np.log2(x[0])
@nb.njit
def log2_math_cpu(x: float) -> float:
return math.log2(x)
@nb.njit
def log2_np_cpu(x: float) -> float:
return np.log2(x)
x = np.ones(1, dtype=np.float64)
if __name__ == "__main__":
if sys.argv[1] == "gm":
log2_math_kernel[1, 1](x)
elif sys.argv[1] == "gn":
log2_np_kernel[1, 1](x)
elif sys.argv[1] == "cm":
log2_math_cpu(x[0])
elif sys.argv[1] == "cn":
log2_np_cpu(x[0])
```
When running, options "gm" and "cn" work; "gn" and "cm" fail to compile.
I am using Numba 0.58.1 on Python 3.11.3, Ubuntu Focal.
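As a stopgap, a workaround sketch that should compile on both targets, assuming plain `math.log` is supported everywhere (it appears to be):
```python
import math
import numba as nb

LN2 = math.log(2.0)

@nb.njit  # the same body should also work as a @cuda.jit(device=True) function
def log2_compat(x):
    return math.log(x) / LN2
```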
| 0easy
|
Title: Neurips 2022 style
Body: The style file instructions for Neurips 2022 have been published: https://neurips.cc/Conferences/2022/PaperInformation/StyleFiles.
It would be great to have them here. The only thing that might need changing is the `figsize` component: the figures must be 5.5 inches wide. I would guess the other parameters can be taken verbatim from `neurips2021`.
What do we need?
- [ ] `fontsizes.neurips2022` (I guess same as `fontsizes.neurips2021`, but should be double-checked against the style guide above)
- [ ] `figsizes.neurips2022` (`base_width_in=5.5`, rest should be identical to `figsizes.neurips2021`, but should be double-checked against the style guide above)
- [ ] `fonts.neurips2022` (I guess same as `fonts.neurips2021`, but should be double-checked against the style guide above)
- [ ] `bundles.neurips2022` (similar to `bundles.neurips2021`, but pointing to all the `2022` versions)
plus tests for each bit.
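Once added, usage would presumably mirror the 2021 bundle, e.g. (hypothetical until implemented):
```python
import matplotlib.pyplot as plt
from tueplots import bundles

# hypothetical: mirrors plt.rcParams.update(bundles.neurips2021())
plt.rcParams.update(bundles.neurips2022())
```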
| 0easy
|
Title: Importing static variable file with arguments does not fail
Body: Static variable files, i.e. Python-based variable files without the `get_variables` method, do not accept arguments. There is, however, no error, and arguments are silently ignored. That's wrong; invalid syntax should cause an error. | 0easy
|
Title: [Feature request] Add apply_to_images to Solarize
Body: | 0easy
|
Title: Clarify what DAG means in docs
Body: Thanks to @epogrebnyak for reporting. | 0easy
|
Title: lie to the optimizer bug
Body: Code:
```
optimizer = Optimizer(dimensions=[[64, 128], (4, 1)])
x = optimizer.ask(n_points=4)
```
Note (4, 1) -> (1, 4) also fails:
```
optimizer = Optimizer(dimensions=[[64, 128], (1, 4)])
x = optimizer.ask(n_points=4)
```
Version:
```
>>> skopt.__version__
'0.4-dev'
```
Traceback:
```
Traceback (most recent call last):
File "example/tune.py", line 188, in <module>
main()
File "example/tune.py", line 171, in main
x = optimizer.ask(n_points=len(devices))
File "/usr/local/lib/python3.5/dist-packages/skopt/optimizer/optimizer.py", line 336, in ask
opt.tell(x, y_lie) # lie to the optimizer
File "/usr/local/lib/python3.5/dist-packages/skopt/optimizer/optimizer.py", line 396, in tell
check_x_in_space(x, self.space)
File "/usr/local/lib/python3.5/dist-packages/skopt/utils.py", line 192, in check_x_in_space
% (x, space.bounds))
ValueError: Point ([64, 2]) is not within the bounds of the space ([[64, 126, 252, 512], (4, 1)]).
``` | 0easy
|
Title: Fix `Location.from_node` to report correct lint error locations
Body: ### Summary
Fix `Location.from_node` to make our custom linter report correct error locations:
```diff
diff --git a/dev/clint/src/clint/linter.py b/dev/clint/src/clint/linter.py
index 43c141594e..9a0dde5540 100644
--- a/dev/clint/src/clint/linter.py
+++ b/dev/clint/src/clint/linter.py
@@ -91,7 +91,7 @@ class Location:
@classmethod
def from_node(cls, node: ast.AST) -> "Location":
- return cls(node.lineno, node.col_offset)
+ return cls(node.lineno, node.col_offset + 1)
```
### Notes
- Make sure to open a PR from a **non-master** branch.
- Sign off the commit using the `-s` flag when making a commit:
```sh
git commit -s -m "..."
# ^^ make sure to use this
```
- Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
| 0easy
|
Title: $XONSH_TRACEBACK_LOGFILE with '~' crashes xonsh
Body: ## Steps to Reproduce
```sh
$XONSH_TRACEBACK_LOGFILE = "~/log"
raise Exception
```
## xonfig
Current master.
## Expected Behavior
Log gets written without errors.
## Current Behavior
Xonsh crashes [here](https://github.com/xonsh/xonsh/blob/8e1a1f3342bc30e9b5c0d58e4713daf8e23295d0/xonsh/tools.py#L976) with `FileNotFoundError: [Errno 2] No such file or directory: '<current path>/~/log'`.
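Presumably the fix is as small as expanding the user path before opening the file — a sketch:
```python
import os

# sketch: normalize the configured path before use
log_file = os.path.expanduser("~/log")
with open(log_file, "a") as f:
    f.write("traceback goes here\n")
```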
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Issue with action help display when having ordered optional parameters
Body: ## SUMMARY
In some cases, trying to display the help (`st2 run pack.action -h`) for an action that has ordered optional parameters, like the ones from the `librenms` pack below, fails with an error.
### STACKSTORM VERSION
Paste the output of ``st2 --version``:
```
st2 --version
st2 3.3.0, on Python 3.9.1
```
##### OS, environment, install method
Post what OS you are running this on, along with any other relevant information:
- st2-docker
## Steps to reproduce the problem
Install the following pack: https://github.com/kedare/stackstorm-librenms
Try to run the following commands:
```
st2 run librenms.get_bgp_sessions -h
st2 run librenms.get_devices -h
```
## Expected Results
The help to be shown
## Actual Results
The following output is displayed
```
Get BGP sessions
ERROR: Unable to print help for action "librenms.get_bgp_sessions". '<' not supported between instances of 'str' and 'int'
```
## Investigation
The error is caught here: https://github.com/StackStorm/st2/blob/5c4e5f8e93c7ed83c4aa4d196085c2912ae7b285/st2client/st2client/commands/action.py#L779
Help from amanda on the Slack channel:
> Added some debug after reproducing:
```
Traceback (most recent call last):
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2client/commands/action.py", line 771, in _print_help
names=optional)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2client/commands/action.py", line 898, in _sort_parameters
sorted_parameters = sorted(names, key=lambda name:
TypeError: '<' not supported between instances of 'str' and 'int'
```
> It looks like it's to do with the position parameters: the sort uses name or position depending on which is given.
> If I get rid of the position attributes, then the help works.
> So it looks like a bug: when it does the sort, it indexes some by position and others by name, and then can't sort, as presumably the parameters from the runner are indexed by name.
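A sketch of the suggested normalization (the stand-in data and helper below are hypothetical; the real code computes the key inline in `_sort_parameters`):
```python
# normalize the sort key to str so int positions and str names
# can be compared under Python 3
parameters = {"hostname": {"position": 0}, "verify_ssl": {}}  # stand-in data
names = list(parameters)

def sort_key(name):
    position = parameters[name].get("position")
    return str(position) if position is not None else name

sorted_parameters = sorted(names, key=sort_key)
```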
Could you raise an issue on ST2 for this? I think it's probably that Python 2 would allow this comparison, but on Python 3 we need to convert the position to a string in that sort. So it should be a simple fix to ST2 if you want to take a look. | 0easy
|
Title: [Preferences] Experimental compiled triangulation tooltip goes off screen
Body: ### 🐛 Bug Report
Open napari and the Preferences.
Go to Experimental tab and hover over the new compiled backend setting (last option)
The tooltip is one long line:

### 💡 Steps to Reproduce
Open napari and the Preferences.
Go to Experimental tab and hover over the new compiled backend setting (last option)
### 💡 Expected Behavior
The tooltip should have line breaks or wrap to some sensible width.
### 🌎 Environment
napari: 0.5.6
Platform: macOS-14.7.4-arm64-arm-64bit
System: MacOS 14.7.4
Python: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:19:53) [Clang 18.1.8 ]
Qt: 5.15.14
PyQt5: 5.15.11
NumPy: 2.1.3
SciPy: 1.15.1
Dask: 2025.1.0
VisPy: 0.14.3
magicgui: 0.10.0
superqt: 0.7.1
in-n-out: 0.2.1
app-model: 0.3.1
psygnal: 0.12.0
npe2: 0.7.7
pydantic: 2.10.6
OpenGL:
- GL version: 2.1 Metal - 88.1
- MAX_TEXTURE_SIZE: 16384
- GL_MAX_3D_TEXTURE_SIZE: 2048
Screens:
- screen 1: resolution 2056x1329, scale 2.0
- screen 2: resolution 2560x1440, scale 1.0
Optional:
- numba: 0.61.0
- triangle: 20250106
- napari-plugin-manager: 0.1.4
Settings path:
- /Users/sobolp/Library/Application Support/napari/napari-056_cd2607c449a1ace6dd03e1a7dc1a02988f504489/settings.yaml
### 💡 Additional Context
_No response_ | 0easy
|
Title: Add Python 3.12 to the pipeline
Body: What is needed here:
- [ ] Add an entry to the matrix of Python versions on `.github/workflows`.
- [ ] Fix the pipeline - it will fail on a flake8 dependency. | 0easy
|
Title: BUG: CustomBusinessDay not respecting calendar
Body: ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import pandas_market_calendars as mcal
from pandas.tseries.offsets import CustomBusinessDay
nyse = mcal.get_calendar('NYSE')
us_bd = CustomBusinessDay(calendar=nyse)
date_range = pd.date_range('2024-12-20', periods=10, freq=us_bd)
correct_schedule = nyse.schedule(start_date='2024-12-23', end_date='2025-01-10')
```
### Issue Description
CustomBusinessDay is not respecting the calendar's holiday dates. When displaying `date_range` from the code above, you'll see that 2024-12-25 is included, even though it was a market holiday and does not appear in `correct_schedule`.
### Expected Behavior
I would expect `us_bd` to respect the dates in `correct_schedule` and not include market holidays, as it has in the past. When computing today's date minus `1 * us_bd`, it returns New Year's Day, which should also be excluded; the correct result should be '2024-12-31'.
### Installed Versions
INSTALLED VERSIONS
------------------
commit : f538741432edf55c6b9fb5d0d496d2dd1d7c2457
python : 3.10.13.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 207 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.0
numpy : 1.26.4
pytz : 2023.3
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.1
Cython : 3.0.8
pytest : 8.3.3
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.3
IPython : 8.21.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2023.6.0
gcsfs : None
matplotlib : 3.8.2
numba : None
numexpr : 2.10.1
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 15.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.12.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None | 0easy
|
Title: Explain NLTK once before using it in this form.
Body: **Describe the issue**
Under "Note" bar in Quick Start, please spell out what or hover effect over "NLTK" to let newbies know what it stands for. Thanks. | 0easy
|
Title: Refactor: use `Literal` in type hints
Body: Right now we don't really use `Literal` types anywhere, but we should.
For example, see the following signature:
```python
@classmethod
def from_bytes(
cls: Type[T],
data: bytes,
protocol: str = 'protobuf',
compress: Optional[str] = None,
) -> T:
"""Build Document object from binary bytes
:param data: binary bytes
:param protocol: protocol to use. It can be 'pickle' or 'protobuf'
:param compress: compress method to use
:return: a Document object
"""
```
Here, `protocol` can only be `'pickle'` or `'protobuf'`, so `Literal['pickle', 'protobuf']` would be the most suitable type hint.
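A sketch of the adjusted signature:
```python
from typing import Literal, Optional, Type, TypeVar

T = TypeVar('T', bound='Document')

class Document:
    @classmethod
    def from_bytes(
        cls: Type[T],
        data: bytes,
        protocol: Literal['pickle', 'protobuf'] = 'protobuf',
        compress: Optional[str] = None,
    ) -> T:
        ...
```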
Changing that will require multiple other changes in order to keep mypy happy, since other code relies on protocol being `str`.
This issue is about adjusting the above, and identifying and fixing other similar instances (if they exist).
| 0easy
|
Title: Add convenience `get_sentence_vector()`-like methods for FastText, other models
Body: Per <https://stackoverflow.com/questions/65397810/whats-the-equivalent-to-get-sentence-vector-for-gensims-fasttext>, the official Python FastText wrapper offers a `get_sentence_vector()` convenience method for averaging the word-vectors of a text.
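In other words, something like this naive sketch (assuming a trained gensim `FastText` model, whose `wv` lookup also covers out-of-vocabulary words via subword n-grams):
```python
import numpy as np

def sentence_vector(model, sentence):
    # plain average of the word vectors in the text
    words = sentence.split()
    return np.mean([model.wv[w] for w in words], axis=0)
```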
Gensim could offer something similar for FastText & other models, though it should perhaps have a more generic name & clear docs stating that this is just one simple way to create a text-vector from a bunch of words. | 0easy
|
Title: Some API endpoints raise errors
Body: Hello — after completing the installation, logging in to the admin backend raises an API error. The details are below. What could be causing this?
```
Traceback (most recent call last):
  File "C:\Program Files\Python38\Lib\site-packages\flask\app.py", line 2464, in __call__
    return self.wsgi_app(environ, start_response)
  File "C:\Program Files\Python38\Lib\site-packages\flask\app.py", line 2450, in wsgi_app
    response = self.handle_exception(e)
  File "C:\Program Files\Python38\Lib\site-packages\flask_cors\extension.py", line 165, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "C:\Program Files\Python38\Lib\site-packages\flask\app.py", line 1867, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Program Files\Python38\Lib\site-packages\flask\_compat.py", line 39, in reraise
    raise value
  File "C:\Program Files\Python38\Lib\site-packages\flask\app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Program Files\Python38\Lib\site-packages\flask\app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Program Files\Python38\Lib\site-packages\flask_cors\extension.py", line 165, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "C:\Program Files\Python38\Lib\site-packages\flask\app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Program Files\Python38\Lib\site-packages\flask\_compat.py", line 39, in reraise
    raise value
  File "C:\Program Files\Python38\Lib\site-packages\flask\app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Program Files\Python38\Lib\site-packages\flask\app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "C:\Program Files\Python38\Lib\site-packages\flask_limiter\extension.py", line 732, in __inner
    return current_app.ensure_sync(obj)(*a, **k)
  File "C:\Program Files\Python38\Lib\site-packages\werkzeug\local.py", line 347, in __getattr__
    return getattr(self._get_current_object(), name)
AttributeError: 'Flask' object has no attribute 'ensure_sync'
```
| 0easy
|
Title: Validation bug: DocList and DocVec are not coerced to each other
Body: In the spirit of pydantic, a model (document) field that is annotated with a certain type should only ever hold data that is actually of that type.
To achieve that, incoming data of a different type should be coerced to the target type.
This does not happen between `DocList` and `DocVec`:
```python
from docarray import BaseDoc, DocList, DocVec
from docarray.documents import ImageDoc
class Doc(BaseDoc):
docs: DocList[ImageDoc]
d = Doc(docs=DocVec[ImageDoc]([ImageDoc()]))
# below should give <DocList[ImageDoc] (length=1)> but gives <DocVec[ImageDoc] (length=1)>
print(d.docs)
class OtherDoc(BaseDoc):
docs: DocVec[ImageDoc]
d = OtherDoc(docs=DocList[ImageDoc]([ImageDoc()]))
# below should give <DocVec[ImageDoc] (length=1)> but gives <DocList[ImageDoc] (length=1)>
print(d.docs)
``` | 0easy
|
Title: `AppRegistry` should provide a warning if an app doesn't end in `piccolo_app`
Body: In your `piccolo_conf.py` file you have an `AppRegistry` - something like this:
```python
APP_REGISTRY = AppRegistry(
apps=[
"home.piccolo_app",
"blog.piccolo_app",
"profiles.piccolo_app",
"piccolo_admin.piccolo_app",
]
)
```
I think it's tempting for someone new to Piccolo to just put the name of the app in, and not the path to the `piccolo_app` file.
```python
APP_REGISTRY = AppRegistry(
apps=[
"home", # WRONG! Should be "home.piccolo_app"
...
]
)
```
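To make the suggestion below concrete, a rough sketch of the check (placement and naming hypothetical):
```python
import warnings

def check_app_paths(apps):
    for app in apps:
        if not app.endswith("piccolo_app"):
            warnings.warn(
                f"'{app}' doesn't end in 'piccolo_app' - "
                f"did you mean '{app}.piccolo_app'?"
            )
```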
We could put in some kind of warning if the string doesn't end in `piccolo_app`, or if the import fails, automatically try again with `app_name.piccolo_app`. | 0easy
|
Title: Implement the configuration for the "Tufte book" template
Body: Would be good to have the parameters for
https://github.com/Tufte-LaTeX/tufte-latex
| 0easy
|
Title: Implement saving to Facebook format
Body: We currently support reading FastText models from Facebook's format. The [gensim.models._fasttext_bin](https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/models/_fasttext_bin.py) module does this.
This enables people to use gensim with a model that was trained using Facebook's binaries.
Sometimes, people want things to work the other way: they start with gensim, train a model, and then want to save it to Facebook's format.
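In other words, roughly this shape — everything below is a sketch with illustrative names, and the magic/version constants should be double-checked against what `load` reads in `_fasttext_bin.py`:
```python
import struct

_FASTTEXT_FILEFORMAT_MAGIC = 793712314  # as checked by `load` (verify)
_FASTTEXT_VERSION = 12

def save(model, fout):
    """Sketch: write a gensim FastText `model` to the binary stream `fout`,
    reversing the reads performed by `load`."""
    fout.write(struct.pack('@2i', _FASTTEXT_FILEFORMAT_MAGIC, _FASTTEXT_VERSION))
    # ... then write the args, the dictionary, and the input/output
    # matrices, mirroring `load`'s reads in the same order ...
```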
For this ticket, you will implement a `save(model, fout)` function that accepts a FastText object and saves it to a file stream in a Facebook-compatible format. It will essentially reverse the effects of the [load](https://github.com/RaRe-Technologies/gensim/blob/f89808d52d0250e4e4bbab2293980f8f4d3989b9/gensim/models/_fasttext_bin.py#L291) function. | 0easy
|
Title: Add application statistics
Body: ## Description of the problem, including code/CLI snippet
GitLab provides some general statistics via `/application/statistics`.
See https://docs.gitlab.com/ee/api/statistics.html
## Expected Behavior
These data are available via `gitlab.statistics()`.
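e.g., following python-gitlab's manager conventions, something like (proposed, not implemented yet):
```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="...")
stats = gl.statistics.get()  # proposed manager
print(stats.forks, stats.merge_requests, stats.users)
```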
## Actual Behavior
Nothing.
## Specifications
- python-gitlab version:
- API version you are using (v3/v4):
- Gitlab server version (or gitlab.com):
| 0easy
|
Title: style: Not every style in `xonfig styles` highlight the command name
Body: I've noticed that sometimes the command name is not highlighted:
<img width="292" alt="Image" src="https://github.com/user-attachments/assets/e5fa3581-ec47-4fca-b159-857dd316e2a1" />
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: [UX] Shortcut `k8s` for `kubernetes`
Body: We should make `k8s` an accepted alias anywhere `kubernetes` is expected.
https://github.com/skypilot-org/skypilot/issues/4088#issuecomment-2415179797
| 0easy
|
Title: Better error messages for jupyter kernel edge cases
Body: Strictly speaking, the Python env running the Jupyter process does not have to be the same as the one where the kernel is running. This makes sense for notebook-facing applications but not so much for headless notebook execution (like Ploomber), because there's no reason to want two Python envs.
There can be some edge cases that might lead to broken kernels.
When running a pipeline with an R kernel, papermill was breaking because it could not locate an R kernel, but the error pointed to a missing kernel whose path pointed to a different environment (one that I deleted before).
```sh
jupyter kernelspec list
```
Output:
```sh
ir /Users/Edu/Library/Jupyter/kernels/ir
```
When looking at the json kernel spec, I saw the path to the non-existing kernel. After deleting the `ir/` folder:
```sh
jupyter kernelspec list
```
Output:
```
ir /Users/Edu/miniconda3/envs/etl/share/jupyter/kernels/ir
```
This new kernel had the right location.
An important detail is that I never specifically installed the ir kernel in the local environment; I just installed it from an `environment.yml` file. My guess is that such a step is not strictly necessary and Jupyter is able to automatically find kernels in the same env, but given that there was already another ir kernel, it ignored that one.
Maybe print a warning when the kernel that NotebookRunner is about to use exists outside the current env?
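A heuristic sketch of such a check (using `jupyter_client`; comparing against `sys.prefix` is an assumption that may need refining):
```python
import sys
from jupyter_client.kernelspec import KernelSpecManager

spec = KernelSpecManager().get_kernel_spec("ir")
if not spec.argv[0].startswith(sys.prefix):
    print(f"Warning: kernel 'ir' runs {spec.argv[0]}, "
          f"outside the current environment ({sys.prefix})")
```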
| 0easy
|
Title: [DOC] transform_column dest_column_name kwarg description needs to be clearer on defaults
Body: # Brief Description of Fix
The current wording on `dest_column_name` is:
`dest_column_name – The column name to store the transformation result in. By default, replaces contents of original column.`
I would like to propose to change it to:
`dest_column_name – The column name to store the transformation result in. Defaults to None, which will result in the original column name being overwritten. If a name is provided here, then a new column with the transformed values will be created.`
# Relevant Context
- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/janitor.functions/janitor.transform_column.html)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/functions.py#L2034)
| 0easy
|
Title: include transformer for datetime variables
Body: The first version of this module should include the following:
- New module: datetime (folder)
Three transformers:
- ExtractDateFeatures
- ExtractTimeFeatures
- ExtractDateTimeFeatures
A base class:
- DateTimeBaseTransformer
The base class should:
- check that the variables entered by the user are datetime, or if string / object, transform them to datetime (I wonder if we should make this a function that is part of [variable_manipulation](https://github.com/solegalli/feature_engine/blob/master/feature_engine/variable_manipulation.py) and call it in this transformer? In case we develop the timeseries module, we may need it later for other modules as well.)
- have a method to return time features
- have a method to return date features
- methods to check the input dataframe, and options on what to do if outliers are present
The ExtractDateFeatures should derive the following features (a rough sketch follows the list):
- [ ] month, quarter, semester and year (all numeric outputs)
- [ ] week of the year
- [ ] is week of the month supported by pandas? If yes, then we should return it
- [ ] day (numeric 1-31), day of the week (numeric 0-6), and is_weekend (binary)
- [ ] anything else supported by pandas?
- [ ] anything else that would be useful
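A rough sketch of the date part, to anchor the discussion (API and naming entirely up for debate):
```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

class ExtractDateFeatures(BaseEstimator, TransformerMixin):
    """Sketch only: derive date features from datetime variables."""

    def __init__(self, variables=None):
        self.variables = variables

    def fit(self, X, y=None):
        # here the base class would check / cast the variables to datetime
        return self

    def transform(self, X):
        X = X.copy()
        for var in self.variables:
            dt = pd.to_datetime(X[var])
            X[f"{var}_year"] = dt.dt.year
            X[f"{var}_semester"] = (dt.dt.quarter > 2).astype(int) + 1
            X[f"{var}_quarter"] = dt.dt.quarter
            X[f"{var}_month"] = dt.dt.month
            X[f"{var}_week"] = dt.dt.isocalendar().week
            X[f"{var}_day"] = dt.dt.day
            X[f"{var}_day_of_week"] = dt.dt.dayofweek
            X[f"{var}_is_weekend"] = (dt.dt.dayofweek > 4).astype(int)
        return X
```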
The ExtractTimeFeatures should extract the following features
- [ ] hr, minute, second
- [ ] timezone: with parameter `return_timezone=False` by default, we allow the user to return a time zone categorical feature
- [ ] unify_timezone=False, to handle different timezones. If True, the transformer is timezone-aware, unifies to Greenwich, and then derives features (should we give the user an option to unify to something else? probably yes)
- [ ] **to discuss**: is_morning, is_afternoon, is_evening
- [ ] **to discuss**: working hrs (I am thinking of passing a string parameter like '9-17' and using it to determine this feature)
- [ ] anything else that would be useful
The ExtractDateTimeFeatures is a sum of the previous transformers, so it should return all possible date and time features.
The reason I suggest breaking this into 3 classes is that some timestamps contain only dates, some only times, and some both. I think it would be easier if the user, who knows the timestamp, selects the appropriate transformer, instead of us adding code to understand which type of timestamp it is and then derive the features.
To consider: should version 1 of this transformer return all possible features, or should we give the user an option of which features to return? For example, the user may want year and month but not quarter and semester.
Example code is in recipes 2 to 5 of [this link](https://github.com/solegalli/packt_featureengineering_cookbook/tree/master/ch07-datetime).
Things to think about for the transformers design:
- [ ] Transformer returns all new variables by default, or only those indicated by the user. This behavior could be regulated by a parameter in the init. As per the previous question, should we leave this for version 2 of this transformer, or add it straight away?
- [ ] Transformer should be able to derive features from more than 1 datetime variable at the time, like all feature engine transformers.
- [ ] Option in the transformer to drop the original datetime variables (as those are not useful for machine learning), `drop_datetime_variables=True` or similar.
Files needed in addition to code files:
- [ ] add transformer to readme list
- [ ] add transformer to docs/index.rst
- [ ] add docs folder exclusive for this transformer with rst files with examples
- [ ] add Jupyter notebooks showing how to use these transformers
This issue can be done in 1 big PR, or multiple PRs, maybe 1 per transformer. | 0easy
|
Title: Add option to keep hidden tests hidden during feedback step
Body:
### Expected behavior
I would expect the system to keep hiding the hidden tests in the feedback students receive.
### Actual behavior
The system reveals the hidden tests in the generated HTML file. Is this a bug, or is this actually on purpose?
### Steps to reproduce the behavior
Generate feedback on a student's assignment which has hidden tests. | 0easy
|
Title: www: ocsp staple
Body: Enable OCSP stapling in NGINX. | 0easy
|
Title: [BUG] igraph demo notebook needs updating
Body: **Describe the bug**
Needs fixes, modern conventions, etc:
- fix colors: https://github.com/graphistry/pygraphistry/issues/259
- show login methods
- probably helps to show how to move to/from dataframes
- pointer to cuGraph for faster analytics
| 0easy
|
Title: Adding support for Path objects to APIs that take paths
Body: As mentioned in #5682, we may have some public APIs that take paths whose implementations now use `Path` objects internally, so it should be trivial to also accept `Path` objects in these APIs.
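In most cases the change should be as small as normalizing early, e.g.:
```python
import os
from pathlib import Path
from typing import Union

def example_api(path: Union[str, Path]):
    path = os.fspath(path)  # accepts both str and Path transparently
    ...
```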
I think the first step is actually identifying such APIs. | 0easy
|
Title: plotting static data tutorial
Body: A Jupyter notebook that goes through a static dataset demonstrating the various plotting features. Here's an example for inspiration: http://cdl-quail.readthedocs.io/en/latest/tutorial/basic_analyze_and_plot.html | 0easy
|
Title: Improve Temporal action to display different time scales
Body: For temporal attributes like dates, the column can often be visualized at different time scales. For example, a date formatted as %y-%m-%d can be plotted as 3+1 separate visualizations: one based on the year, one based on the month, one based on the day, and one based on the overall date. We should extend the temporal action to display the temporal columns at these different time scales. | 0easy
|
Title: PadIfNeeded doesn't serialize position parameter
Body: ## 🐛 Bug
The `PadIfNeeded` transform doesn't serialize the position parameter.
## To Reproduce
Steps to reproduce the behavior:
```python
import albumentations as A
transform = A.PadIfNeeded(min_height=512, min_width=512, p=1, border_mode=0, value=[124, 116, 104], position="top_left")
transform.to_dict()
```
Output:
```
{'__version__': '1.4.1', 'transform': {'__class_fullname__': 'PadIfNeeded', 'always_apply': False, 'p': 1, 'min_height': 512, 'min_width': 512, 'pad_height_divisor': None, 'pad_width_divisor': None, 'border_mode': 0, 'value': [124, 116, 104], 'mask_value': None}}
```
## Expected behavior
The position parameter should be included in the `dict`.
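The likely culprit is the list of args the transform serializes; a sketch of the kind of fix, assuming `PadIfNeeded` uses the usual `get_transform_init_args_names` mechanism:
```python
import albumentations as A

class PadIfNeededFixed(A.PadIfNeeded):
    # sketch: include `position` so to_dict()/from_dict round-trips it
    def get_transform_init_args_names(self):
        return super().get_transform_init_args_names() + ("position",)
```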
## Environment
- Albumentations version: 1.4.1
- Python version: 3.8
- OS: Linux
- How you installed albumentations (`conda`, `pip`, source): `pip`
| 0easy
|
Title: falcon.uri.encode_value does not encode the percent character
Body: [`falcon.uri.encode_value`](https://falcon.readthedocs.io/en/latest/api/util.html#falcon.uri.encode_value) says:
> An escaped version of *uri*, where all disallowed characters have been percent-encoded.
However, it seems that the percent character itself is not encoded...
Compare:
```python
>>> from urllib.parse import quote
>>> from falcon import uri
>>> quote('%26')
'%2526'
>>> uri.encode('%26')
'%26'
>>> uri.encode_value('%26')
'%26'
>>> uri.decode(uri.encode_value('%26'))
'&'
>>> uri.decode(quote('%26'))
'%26'
``` | 0easy
|
Title: Marketplace - search results - increase margins between filter chips and search box
Body:
### Describe your issue.
<img width="1450" alt="Screenshot 2024-12-13 at 21 03 31" src="https://github.com/user-attachments/assets/ee85f890-72eb-408c-8bc0-4e8608c65060" />
| 0easy
|