text | labels
---|---|
Title: Suggestion: add a mobile version
Body: Please consider adding a mobile version. | 0easy
|
Title: Unexpected Behaviors to Address
Body: # Just add any screenshot of unexpected behavior as a comment
<img width="1280" alt="Screen Shot 2020-04-23 at 4 54 08 PM" src="https://user-images.githubusercontent.com/1670421/80079725-3daa4380-8583-11ea-848a-09a9f8c064da.png">
| 0easy
|
Title: Backend order does not receive the payment notification
Body: Following the instructions in your help docs, I set the site's real address in the backend website address setting. After a test payment, the order in the backend still shows as unpaid. I don't know what the problem is. | 0easy
|
Title: Add K-Medoids Support
Body: * igel version: 0.3.1
* Python version: 3.6.9
* Operating System: Ubuntu 18.04 LTS running as a Linux Subsystem on WSL2
### Description
Adding support for K-Medoids Clustering from the sklearn_extra library.
This clustering method would be useful for median-based distance metrics in clustering: it reduces the impact of outliers when finding new central points, and it calculates pairwise dissimilarities to all the objects in the cluster, producing a cluster center that is an actual, more centered data point.
### What I Did
Currently working on it. Will submit PR shortly.
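For reference, a minimal sketch of what the integration would wrap (assuming the `sklearn_extra.cluster.KMedoids` API; parameter names follow that library, not the eventual igel config):
```python
# Minimal K-Medoids sketch via sklearn_extra (assumed dependency).
import numpy as np
from sklearn_extra.cluster import KMedoids

X = np.random.rand(100, 2)                                        # toy data
km = KMedoids(n_clusters=3, metric="manhattan", random_state=42)
labels = km.fit_predict(X)                                        # cluster assignments
print(km.cluster_centers_)                                        # medoids are actual data points
```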
| 0easy
|
Title: Good First Issue: Allow `initial_train_size` in `backtesting_forecaster` to accept date values
Body: Use branch `0.15.x` as base.
**Summary**
Currently, the `initial_train_size` parameter in the `backtesting_forecaster` function only accepts an integer value. This integer defines how many observations to use as the initial training set. We would like to extend this functionality so that `initial_train_size` can also accept a date (e.g., `'2020-01-01'`). If a date is provided, the function should calculate the appropriate number of observations corresponding to the time window between the start of the data and the given date.
**Task**
1. Create an auxiliary function, `_preprocess_initial_train_size(y: pd.Series, initial_train_size)` in the `utils` module:
- `initial_train_size` can be an integer or any datetime format that pandas allows to be passed to a `pd.DatetimeIndex` (e.g., string, pandas timestamp...).
- If `y` does not have a `pd.DatetimeIndex` and `initial_train_size` is not an integer, raise a `TypeError` with the message: "If `y` does not have a pd.DatetimeIndex, `initial_train_size` must be an integer."
- If the series `y` has a `pd.DatetimeIndex`, this function will return the length of the time window between the start of the data and the given date as an integer value. The given date must be included in the window.
- If the input `initial_train_size` is an integer, return the same integer.
- Create unit tests using pytest in the `utils.tests` folder.
```python
# Expected behavior
# ==============================================================================
y = pd.Series([1, 2, 3, 4, 5], index=pd.date_range('2020-01-01', periods=5, freq='D'))
_preprocess_initial_train_size(y, '2020-01-02') # expected output: 2
y = pd.Series([1, 2, 3, 4, 5], index=pd.date_range('2020-01-01', periods=5, freq='D'))
_preprocess_initial_train_size(y, 2) # expected output: 2
y = pd.Series([1, 2, 3, 4, 5], index=pd.RangeIndex(start=0, stop=5, step=1))
_preprocess_initial_train_size(y, '2020-01-02') # expected output: TypeError
```
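A minimal sketch of how the helper for task 1 might look (illustrative only; the final code should follow skforecast conventions):
```python
import pandas as pd

def _preprocess_initial_train_size(y: pd.Series, initial_train_size):
    # Integers pass through unchanged.
    if isinstance(initial_train_size, int):
        return initial_train_size
    if not isinstance(y.index, pd.DatetimeIndex):
        raise TypeError(
            "If `y` does not have a pd.DatetimeIndex, `initial_train_size` "
            "must be an integer."
        )
    # Length of the window from the start of the data up to and
    # including the given date.
    date = pd.Timestamp(initial_train_size)
    return int((y.index <= date).sum())
```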
2. Integrate this function with `_backtesting_forecaster` and `backtesting_forecaster` in the `model_selection` module.
**Acceptance Criteria**
- [ ] The `initial_train_size` parameter accepts both integer and date formats.
- [ ] The function correctly calculates the initial training size when a date is provided.
- [ ] Existing tests continue to pass.
- [ ] New test cases are added to verify the correct behavior for both int and date inputs.
**Full Example**
The initial training set must contain 127 observations and the results must be the same if `initial_train_size = 127`.
```python
# Expected behavior
# ==============================================================================
data = fetch_dataset(name="h2o", kwargs_read_csv={"names": ["y", "datetime"], "header": 0})
initial_train_size = '2002-01-01 00:00:00'
forecaster = ForecasterRecursive(
regressor = LGBMRegressor(random_state=123, verbose=-1),
lags = 15
)
cv = TimeSeriesFold(
steps = 10,
initial_train_size = initial_train_size,
refit = False,
fixed_train_size = False,
gap = 0,
allow_incomplete_fold = True
)
metric, predictions = backtesting_forecaster(
forecaster = forecaster,
cv = cv,
y = data['y'],
metric = 'mean_squared_error',
verbose = True,
show_progress = True
)
``` | 0easy
|
Title: Add more valid tags for CAA record
Body: Hey,
I updated my CAA record (issue and iodef) and wanted to add “issuemail” for S/MIME certificates (https://www.rfc-editor.org/rfc/rfc9495.pdf), but I get a warning saying that the tag is invalid.
The record works anyway, but it would be nice to remove the warning to avoid confusion.
I also recently saw, in a test on hardenize.com, that there is an “issuevmc” tag for BIMI certificates, which could be added at the same time.
Thanks for your work! | 0easy
|
Title: Add new pseudo log level `CONSOLE` that logs to console and to log file
Body: It would be useful to allow setting the log level to `CONSOLE` for a keyword.
Currently, to print a list or a dictionary to the console you have to do it yourself using `Log To Console`.
Keywords in mind:
- Log Dictionary
- Log List
| 0easy
|
Title: [ENH] window based time series segmentation via clustering
Body: Two advanced forms of https://github.com/sktime/sktime/issues/6750:
Non-overlapping
1. run a non-overlapping sliding window across the time series to get subseries and segments
2. apply a *time series clusterer* to the pooled set of all the subseries.
3. for each point, the assignment is the cluster assignment of the segment it is in
Parameters: the clusterer; the sliding window schema
Overlapping
1. run an overlapping sliding window across the time series to get subseries and segments
2. apply a *time series clusterer* to the pooled set of all the subseries.
3. Each point will now be a part of multiple segments. We can do two things:
* 3a - aggregate the cluster assignments according to a rule, e.g., majority, or `predict_proba` to output with weights based on the number of segments.
* 3b - return the overlapping segments
Parameters: the clusterer; the sliding window schema; the aggregation strategy; whether we return overlapping segments, or a non-overlapping segmentation
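A rough sketch of the non-overlapping variant (illustrative only; sklearn's `KMeans` stands in for a proper time series clusterer):
```python
import numpy as np
from sklearn.cluster import KMeans

def segment_by_clustering(y, window, n_clusters=3):
    # 1. non-overlapping sliding window -> subseries
    #    (trailing points beyond the last full window are dropped in this sketch)
    n = len(y) // window
    subseries = np.array([y[i * window:(i + 1) * window] for i in range(n)])
    # 2. cluster the pooled set of subseries
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(subseries)
    # 3. each point inherits the cluster assignment of its segment
    return np.repeat(labels, window)
```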
| 0easy
|
Title: The latest numpy version cannot be used; suggest noting the version constraint in install.md
Body: numpy 1.24 removed usages such as np.float, which conflicts with how the code is written.
When installing, I ran pip install numpy directly, which installed the latest version and caused persistent errors.
After downgrading numpy, the code ran successfully. | 0easy
|
Title: Logging changes break the robot backgroundlogger
Body: The changes in #5255 break the robot backgroundlogger. The import of the backgroundlogger fails:
```
Traceback (most recent call last):
  File "/home/user/test/TestLib.py", line 2, in <module>
    from robotbackgroundlogger import BackgroundLogger
  File "/home/user/test/.venv/lib/python3.10/site-packages/robotbackgroundlogger.py", line 57, in <module>
    class BackgroundLogger(BaseLogger):
  File "/home/user/test/.venv/lib/python3.10/site-packages/robotbackgroundlogger.py", line 75, in BackgroundLogger
    LOGGING_THREADS = logger.librarylogger.LOGGING_THREADS
AttributeError: module 'robot.output.librarylogger' has no attribute 'LOGGING_THREADS'
``` | 0easy
|
Title: Explore Pydantic usage instead of dataclasses
Body: Piccolo uses dataclasses extensively, all over the code base.
It's worth investigating if any noticeable performance improvement can be achieved by using Pydantic models instead. | 0easy
|
Title: Cleanup ctx manager state management
Body: ### 🐛 Describe the bug
Today in https://github.com/pytorch/pytorch/blob/bc86b6c55a4f7e07548a92fe7c9b52ad2c88af35/torch/_dynamo/variables/ctx_manager.py#L58
We keep an indirect state object to workaround the previous immutability requirement of VariableTrackers. Since they can now be mutated, we can store the cleanup logic directly on the ctx manager objects.
### Error logs
_No response_
### Versions
N/A
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | 0easy
|
Title: Improve readme
Body: Make a better readme with setup instructions and examples of how to use the program. Also, an architecture design would be a plus. | 0easy
|
Title: Docker tag issue
Body: https://github.com/Kav-K/GPTDiscord/actions/runs/6901545502/job/18776545093
An issue on new releases where it says a tag is required for new pushes to the registry, but a tag exists. Not sure what's wrong here; I'm terrible with Docker.
```
Run docker/build-push-action@v5
GitHub Actions runtime token ACs
Docker info
Proxy configuration
Buildx version
/usr/bin/docker buildx build --cache-from type=gha --cache-to type=gha,mode=max --iidfile /tmp/docker-actions-toolkit-4jJHHm/iidfile --label org.opencontainers.image.created=2023-11-17T08:21:34.578Z --label org.opencontainers.image.description=A robust, all-in-one GPT interface for Discord. ChatGPT-style conversations, image generation, AI-moderation, custom indexes/knowledgebase, youtube summarizer, and more! --label org.opencontainers.image.licenses=MIT --label org.opencontainers.image.revision=bad090824ebcaf9857e3dcb909259df25e695bab --label org.opencontainers.image.source=https://github.com/Kav-K/GPTDiscord --label org.opencontainers.image.title=GPTDiscord --label org.opencontainers.image.url=https://github.com/Kav-K/GPTDiscord --label org.opencontainers.image.version= --platform linux/amd64,linux/arm64 --provenance mode=max,builder-id=https://github.com/Kav-K/GPTDiscord/actions/runs/6901545502 --metadata-file /tmp/docker-actions-toolkit-4jJHHm/metadata-file --push .
ERROR: tag is needed when pushing to registry
Error: buildx failed with: ERROR: tag is needed when pushing to registry
``` | 0easy
|
Title: Native SQLModel Support
Body: ### Discussed in https://github.com/awtkns/fastapi-crudrouter/discussions/108
<div type='discussions-op-text'>
<sup>Originally posted by **voice1** September 29, 2021</sup>
Is there any intent to support SQLModel? [https://github.com/tiangolo/sqlmodel](https://github.com/tiangolo/sqlmodel)</div> | 0easy
|
Title: Build more impressive example
Body: To really reinforce how streamlit-folium works, it'd be great to have a more robust example. Something that shows advanced functionality from Folium, integration of Streamlit widgets to control the output, etc.
https://github.com/randyzwitch/streamlit-folium/blob/master/examples/streamlit_folium_example.py | 0easy
|
Title: [BUG] DataBricks binding
Body: Error report: https://graphistry-community.slack.com/archives/C014ESCDDU0/p1597198763002700

Likely due to the generated client JS for the DataBricks environment | 0easy
|
Title: [BUG] MultiRocket does not accept singular series in `transform`
Body: ### Describe the bug
From #1696
The `MultiRocket` transform does not accept collections of size 1 when transforming. Discovered through the `Arsenal` classifier.
We should add a line to the general testing that predicts cases like this. It should be relatively easy and cheap.
### Steps/Code to reproduce the bug
```python
from aeon.classification.convolution_based import Arsenal
import numpy as np
X = np.random.random((10, 20))
y = np.array([0, 1, 2, 3, 3, 1, 0, 0, 2, 1])
afc = Arsenal(rocket_transform='multirocket')
afc.fit(X, y)
X2 = np.random.random((1, 20))
afc.predict(X2)
```
### Expected results
transformer transforms the single case without exception
### Actual results
```python-traceback
Traceback (most recent call last):
  File "D:\CMP_Machine_Learning\Repositories\aeon\local_code\local_code.py", line 9, in <module>
    afc.predict(X2)
  File "D:\CMP_Machine_Learning\Repositories\aeon\aeon\classification\base.py", line 175, in predict
    return self._predict(X)
  File "D:\CMP_Machine_Learning\Repositories\aeon\aeon\classification\convolution_based\_arsenal.py", line 195, in _predict
    for prob in self._predict_proba(X)
  File "D:\CMP_Machine_Learning\Repositories\aeon\aeon\classification\convolution_based\_arsenal.py", line 212, in _predict_proba
    y_probas = Parallel(n_jobs=self._n_jobs, prefer="threads")(
  File "D:\CMP_Machine_Learning\Repositories\aeon\.venv\lib\site-packages\joblib\parallel.py", line 1863, in __call__
    return output if self.return_generator else list(output)
  File "D:\CMP_Machine_Learning\Repositories\aeon\.venv\lib\site-packages\joblib\parallel.py", line 1792, in _get_sequential_output
    res = func(*args, **kwargs)
  File "D:\CMP_Machine_Learning\Repositories\aeon\aeon\classification\convolution_based\_arsenal.py", line 381, in _predict_proba_for_estimator
    preds = classifier.predict(X)
  File "D:\CMP_Machine_Learning\Repositories\aeon\.venv\lib\site-packages\sklearn\pipeline.py", line 602, in predict
    Xt = transform.transform(Xt)
  File "D:\CMP_Machine_Learning\Repositories\aeon\aeon\transformations\collection\base.py", line 126, in transform
    Xt = self._transform(X=X_inner, y=y_inner)
  File "D:\CMP_Machine_Learning\Repositories\aeon\aeon\transformations\collection\convolution_based\_multirocket.py", line 159, in _transform
    X = _transform(
  File "D:\CMP_Machine_Learning\Repositories\aeon\.venv\lib\site-packages\numba\core\dispatcher.py", line 703, in _explain_matching_error
    raise TypeError(msg)
TypeError: No matching definition for argument type(s) array(float32, 1d, C), array(float32, 1d, C), Tuple(array(int32, 1d, C), array(int32, 1d, C), array(float32, 1d, C)), Tuple(array(int32, 1d, C), array(int32, 1d, C), array(float32, 1d, C)), int64
```
### Versions
N/A | 0easy
|
Title: Feature: public API to add middlewares
Body: We should provide users with the ability to add middlewares to an already created broker.
I imagine it via the following interface:
```python
class Broker:
def add_outer_middleware(self, middleware: BaseMiddleware) -> None:
self._middlewares = (middleware, *self._middlewares)
for sub in self._subscribers.values():
sub.add_outer_middleware(middleware)
for pub in self._publishers.values():
pub.add_outer_middleware(middleware)
def add_inner_middleware(self, middleware: BaseMiddleware) -> None:
self._middlewares = (*self._middlewares, middleware)
for sub in self._subscribers.values():
sub.add_inner_middleware(middleware)
for pub in self._publishers.values():
pub.add_inner_middleware(middleware)
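
# Hypothetical usage (names are illustrative, not part of the proposal):
#   broker.add_outer_middleware(LoggingMiddleware())  # prepended: runs first
#   broker.add_inner_middleware(RetryMiddleware())    # appended: runs closest to the handler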
``` | 0easy
|
Title: update datalab.get_issues("class_imbalance") to include the label of each example
Body: This dataframe here should indicate the class label of each example

Also `datalab.report()` should be updated so that:

Contains an extra line:
> About this issue:
> Examples belonging to the most under-represented class in the dataset (class: <name_of_class>) | 0easy
|
Title: feat: bright theme!
Body: | 0easy
|
Title: Is the code open source?
Body: Is the code open source? This newbie would be extremely grateful.
| 0easy
|
Title: FileNotFoundError: [Errno 2] No such file or directory: 'path/to/project/exports/charts/temp_chart.png'
Body: ### System Info
OS version: macOS Sonoma 14.4.1
Python version: 3.10.12
pandasai version: 2.0.28
### 🐛 Describe the bug
Requests cannot go through when using the Agent to plot pictures; it is okay if the question's response is supposed to be text.
Below is my code:
```python
import os
from dotenv import load_dotenv
import pandas as pd
from pandasai import Agent

load_dotenv()

DEFAULT_PICTURE_FOLDER = "./exports/charts"
# create picture folder if necessary
if not os.path.isdir(DEFAULT_PICTURE_FOLDER):
    os.makedirs(DEFAULT_PICTURE_FOLDER)

df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
    "sales": [5000, 3200, 2900, 4100, 2300, 2100, 2500, 2600, 4500, 7000]
})

agent = Agent(
    df,
    config={
        "verbose": True,
        "enforce_privacy": True,
        "enable_cache": True,
        "conversational": False,
        "save_charts": False,
        "open_charts": False,
        "save_charts_path": DEFAULT_PICTURE_FOLDER,
    },
)

query = "Plot the histogram of countries in Europe showing for each the gdp, using different colors for each bar"
response = agent.chat(query)
```
full traceback is as below
```
Traceback (most recent call last):
  File "/Users/liaden/miniconda3/envs/genai/lib/python3.10/site-packages/pandasai/pipelines/chat/generate_chat_pipeline.py", line 283, in run
    output = (self.code_generation_pipeline | self.code_execution_pipeline).run(
  File "/Users/liaden/miniconda3/envs/genai/lib/python3.10/site-packages/pandasai/pipelines/pipeline.py", line 137, in run
    raise e
  File "/Users/liaden/miniconda3/envs/genai/lib/python3.10/site-packages/pandasai/pipelines/pipeline.py", line 101, in run
    step_output = logic.execute(
  File "/Users/liaden/miniconda3/envs/genai/lib/python3.10/site-packages/pandasai/pipelines/chat/code_execution.py", line 134, in execute
    {"content_type": "response", "value": ResponseSerializer.serialize(result)},
  File "/Users/liaden/miniconda3/envs/genai/lib/python3.10/site-packages/pandasai/responses/response_serializer.py", line 29, in serialize
    with open(result["value"], "rb") as image_file:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/liaden/genai_streamlitapp/exports/charts/temp_chart.png'
```
| 0easy
|
Title: Marketplace - search results -
Body:
### Describe your issue.
Change this font to the large-poppins style:
style: large-poppins
font: Poppins
weight: semi-bold
size: 18
line-height: 28
The style guide can be found here: https://www.figma.com/design/aw299myQfhiXPa4nWkXXOT/agpt-template?node-id=7-47&t=VRNjygLc04MYRQkI-1
<img width="1474" alt="Screenshot 2024-12-13 at 21 17 49" src="https://github.com/user-attachments/assets/6cbeca2a-c91a-4f34-bfdb-71fc65f5b606" />
| 0easy
|
Title: Add missing shell injection sinks
Body: We should add missing `sinks` w.r.t. "shell injection" to [`all_trigger_words.py`](https://github.com/python-security/pyt/blob/master/pyt/vulnerability_definitions/all_trigger_words.pyt). Refer to [Bandit plugin `shell_injection`](https://github.com/PyCQA/bandit/blob/master/bandit/plugins/injection_shell.py) for more info. | 0easy
|
Title: ✨Export to markdown and HTML
Body: ## Feature Proposal
We can easily export the document to markdown or html.
We could add 2 buttons, "Copy to markdown" and "Copy to html", that copy the editor content to your clipboard so you can paste it wherever you want.
See: https://www.blocknotejs.org/docs/editor-api/converting-blocks

## Dropdown code location
https://github.com/numerique-gouv/impress/blob/39d0211593511dd1030264f9a7e37e57cab1bce8/src/frontend/apps/impress/src/features/docs/doc-header/components/DocToolBox.tsx#L67-L130
| 0easy
|
Title: provide a way to evaluate multilabeled data
Body: igel should support multiclass classification. Therefore, evaluation metrics should work properly with multiclass/multilabel classification
| 0easy
|
Title: missing detail in grid documentation
Body: in the docs we have an example that looks like this:
```yaml
tasks:
  - source: random-forest.py
    name: random-forest-
    product: 'n_estimators=[[n_estimators]]/criterion=[[criterion]].html'
    grid:
      n_estimators: [5, 10, 20]
      criterion: [gini, entropy]
```
[link](https://docs.ploomber.io/en/latest/api/spec.html#tasks-grid)
the `[[placeholders]]` are replaced at runtime by the parameter values; however, to ensure each file has a different name, we also append a number at the end, but this isn't documented.
Right below the snippet above, we should add a little note saying that we number the filenames to prevent conflicting paths.
| 0easy
|
Title: tox 4 does not support factor all conditionals as 3
Body: We use factors to differentiate test setups like this:
```
[testenv]
...
commands_pre =
    py27,py35: DO X
    !py27,!py35: DO Y
...
```
This worked fine until version 4.0.13. Now the `py27` and `py35` environments match both the first and the second condition, so both `commands_pre` lines are run.
## Environment
Provide at least:
- OS: macOS
- Python: 3.8.15
- `pip list` of the host Python where `tox` is installed:
```console
$ bin/pip list
Package Version
------------------ ---------
bleach 5.0.1
build 0.9.0
cachetools 5.2.0
certifi 2022.12.7
chardet 5.1.0
charset-normalizer 2.1.1
check-manifest 0.49
colorama 0.4.6
commonmark 0.9.1
distlib 0.3.6
docutils 0.19
filelock 3.8.2
idna 3.4
importlib-metadata 5.1.0
jaraco.classes 3.2.3
keyring 23.11.0
more-itertools 9.0.0
packaging 22.0
pep517 0.13.0
pip 22.3.1
pkginfo 1.9.2
platformdirs 2.6.0
pluggy 1.0.0
py 1.11.0
Pygments 2.13.0
pyproject_api 1.2.1
readme-renderer 37.3
requests 2.28.1
requests-toolbelt 0.10.1
rfc3986 2.0.0
rich 12.6.0
setuptools 65.6.3
six 1.16.0
tomli 2.0.1
tox 4.0.13
twine 4.0.2
typing_extensions 4.4.0
urllib3 1.26.13
virtualenv 20.17.1
webencodings 0.5.1
wheel 0.38.4
zc.buildout 3.0.1
zipp 3.11.0
```
| 0easy
|
Title: Using `pnpm start` in local Docker deployment causes issues with PostHog and Supabase
Body: **Describe the bug**
This is relevant to local prod deployments. Running `pnpm start` inside of Docker causes errors initializing Supabase and PostHog: Supabase looks for its URL and key, and PostHog fails to initialize.
**To Reproduce**
Steps to reproduce the behavior:
Overwrite the docker entrypoint command for frontend/Dockerfile (either in the dockerfile or in docker-compose.prod.yaml) to use `pnpm start`, and set NODE_ENV="production"
**Expected behavior**
It should run normally and load the app
**Additional context**
The workaround is to just use `pnpm dev`, even though we're using a multi-stage build and actually building the Next.js project with `pnpm run build`.
I have a draft PR that may help with the posthog issue: #58 , feel free to use this as a starting point. | 0easy
|
Title: Get an error run `scripts/test` in a newly cloned `uvicorn`
Body: ### Discussed in https://github.com/encode/uvicorn/discussions/1755
<div type='discussions-op-text'>
<sup>Originally posted by **ys-wu** November 2, 2022</sup>
I clone my fork:
```
git clone https://github.com/MY-USERNAME/uvicorn
```
Then simply install and test:
```
cd uvicorn
scripts/install
scripts/test
```
And get the error:
```
+ '[' -z ']'
+ scripts/check
+ ./scripts/sync-version
+ venv/bin/black --check --diff --target-version=py37 uvicorn tests
All done! ✨ 🍰 ✨
63 files would be left unchanged.
+ venv/bin/flake8 uvicorn tests
+ venv/bin/mypy --show-error-codes
Success: no issues found in 53 source files
+ venv/bin/isort --check --diff --project=uvicorn uvicorn tests
+ venv/bin/python -m tools.cli_usage --check
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/wuyusheng/playground/uvicorn/tools/cli_usage.py", line 69, in <module>
rv |= _generate_cli_usage(path, check=args.check)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/wuyusheng/playground/uvicorn/tools/cli_usage.py", line 47, in _generate_cli_usage
usage_lines = _get_usage_lines()
^^^^^^^^^^^^^^^^^^
File "/Users/wuyusheng/playground/uvicorn/tools/cli_usage.py", line 13, in _get_usage_lines
res = subprocess.run(["uvicorn", "--help"], stdout=subprocess.PIPE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/wuyusheng/.pyenv/versions/3.11.0/lib/python3.11/subprocess.py", line 546, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/wuyusheng/.pyenv/versions/3.11.0/lib/python3.11/subprocess.py", line 1022, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/Users/wuyusheng/.pyenv/versions/3.11.0/lib/python3.11/subprocess.py", line 1899, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'uvicorn'
```
My environment:
```
MacBook Pro 2017
macOS Monterey Version 12.6 (21G115)
Python 3.11.0
```
</div> | 0easy
|
Title: Add examples page and list to examples of analyses that use the project
Body: Is there some way to link to a couple of projects/pipelines implementing this template? I know there are resources for every question I might have, but seeing a whole project where all the parts you suggest here come together would be really helpful.
| 0easy
|
Title: Storage upsert docs are incorrect
Body: **Describe the bug**
The docs on how to control storage upserts are out of date.
The docs say that an `upsert` header is allowed
https://supabase.com/docs/reference/python/storage-from-update
But the underlying storage3 lib accepts an `x-upsert` param, not an `upsert`:
https://github.com/supabase-community/storage-py/blob/main/storage3/constants.py#L12
cc @olirice | 0easy
|
Title: Document how to use it with django cache framework?
Body: <!--- Provide a general summary of the changes you want in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
<!-- A few sentences describing what it is. -->
Hi, [django has a very good caching framework](https://docs.djangoproject.com/en/5.1/topics/cache/) that can work in a [distributed manner](https://github.com/jazzband/django-redis). I was wondering if it was possible to cache the result of graphql output like [we can with `django-rest-framework`](https://www.django-rest-framework.org/api-guide/caching/) | 0easy
|
Title: `linear_model.LowessRegression`
Body: | 0easy
|
Title: Half trend indicator requested with code link.
Body: Hi.
I put in a request earlier as well. This is a custom indicator, similar to Supertrend but much faster and better.
The original Pine Script code is at https://www.tradingview.com/script/U1SJ8ubc-HalfTrend/
I am sending a link to an already implemented version:
https://github.com/ryu878/halftrend_python
This indicator works fine with sample data. Can it now be adapted and added to Pandas-ta?
| 0easy
|
Title: Add milliseconds to script start and error timestamps
Body: ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Which Linux distribution did you use?
_No response_
### Which AutoKey GUI did you use?
None
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
Now that the AutoKey minimum Python version is **3.7**, this **TODO** in [line 35-36 of the show_recent_errors.py file](https://github.com/autokey/autokey/blob/master/lib/autokey/qtui/dialogs/show_recent_script_errors.py#L35-L36) can be completed:
```python
# TODO: When the minimal python version is raised to >= 3.6, add millisecond display to both the script start
# timestamp and the error timestamp.
```
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
_No response_
### What should have happened?
_No response_
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | 0easy
|
Title: Migrate react-sortable tree to rc-tree
Body: react-sortable-tree appears to be [abandoned](https://github.com/frontend-collective/react-sortable-tree/issues/862). As an alternative we can move to [rc-tree](https://github.com/react-component/tree). | 0easy
|
Title: Control
Body: Just adding more to the Antminer-Monitor: I would like to add a reboot action for each Antminer, assuming the default credentials (user: root, password: root) are saved, so we can reboot the miners from that option. | 0easy
|
Title: Feature: broker syncify wrapper
Body: To suggest an idea or inquire about a new Message Broker supporting feature or any other enhancement, please follow this template:
**Is your feature request related to a problem? Please describe.**
Provide a clear and concise description of the problem you've encountered. For example: "I'm always frustrated when..."
**Describe the solution you'd like**
Clearly and concisely describe the desired outcome or solution.
**Feature code example**
To help others understand the proposed feature, illustrate it with a **FastStream** code example:
```python
from faststream import FastStream
...
```
**Describe alternatives you've considered**
Provide a clear and concise description of any alternative solutions or features you've thought about.
**Additional context**
Include any other relevant context or screenshots related to the feature request.
| 0easy
|
Title: Error displaying images with "Languages Used (By File Size)"
Body: ## Error description photo
<details>

</details>
## Device used in the photo
> Android 11; Redmi Note 8 Build/RKQ1.201004.002
## Browser version used in the photo
> Chrome 96.0.4664.92 | 0easy
|
Title: Fix documentation issues mentioned in static analysis
Body: ## Feature request
### Description of the feature
<!-- A clear and concise description of what the new feature is. -->
To increase the quality of the project we are using static analysis to find out documentation issues in the project.
A detailed list of the issues can be found [here](https://deepsource.io/gh/scanapi/scanapi/issues/?page=1&analyzer=python&category=doc)
💡 The Issue requires multiple PRs so more than one person can contribute to the issue.
| 0easy
|
Title: Module cgi is deprecated and will be removed in Python 3.13
Body: https://github.com/xonsh/xonsh/blob/38a3f7253a70f5dfb5cd0e1723057ab1a68637cc/xonsh/webconfig/main.py#L2 | 0easy
|
Title: crop_signal1d not updating plot
Body: ```python
import numpy as np
import hyperspy.api as hs
s = hs.signals.Signal1D(np.random.randint(0, 99, 1000))
s.crop_signal1D() # Select a region. Press Apply
```
The plot does not update, as seen in the image here:

Furthermore, pressing "OK" in the dialog box gives this error:
```python
File "hyperspy/hyperspy/_signals/signal1d.py", line 1258, in crop_signal1D
self.crop(axis=self.axes_manager.signal_axes[0].index_in_axes_manager,
File "hyperspy/hyperspy/signal.py", line 2865, in crop
i1, i2 = axis._get_index(start), axis._get_index(end)
File "hyperspy/hyperspy/axes.py", line 340, in _get_index
return self.value2index(value)
File "hyperspy/hyperspy/axes.py", line 544, in value2index
raise ValueError("The value is out of the axis limits")
ValueError: The value is out of the axis limits
```
Replotting the signal fixes the plotting limits.

I see this in both the most recent version of RELEASE_next_patch, the conda-forge version and the newest HyperSpy bundle. | 0easy
|
Title: Recipe needed: how to handle ForeignKey to self model?
Body: Unfortunately, we cannot do a SubFactory to the self model yet (it hits a recursion error).
| 0easy
|
Title: Incorporate code style formatters in dev workflow
Body: Update development workflow and CONTRIBUTING.md to incorporate [flake8](https://flake8.pycqa.org/en/latest/) and [black](https://github.com/psf/black) for refactoring and improving code style. | 0easy
|
Title: [BUG] Jinja templating for FugueSQL is failing with string
Body: **Minimal Code To Reproduce**
```python
from fugue_sql import FugueSQLWorkflow
data = [
["A", "2020-01-01", 10],
["A", "2020-01-02", None],
["A", "2020-01-03", 30],
["B", "2020-01-01", 20],
["B", "2020-01-02", None],
["B", "2020-01-03", 40]
]
schema = "id:str,date:date,value:double"
with FugueSQLWorkflow() as dag:
df = dag.df(data, schema)
x = "A"
dag("""
SELECT *
FROM df
WHERE id = {{x}}
PRINT
""")
```
**Describe the bug**
Inserting numeric variables works, but inserting string variables does not.
**Expected behavior**
It should work
**Environment (please complete the following information):**
- Backend: Pandas and Dask
- Backend version: Fugue 0.4.9
- Python version: 3.7
- OS: linux/windows: Linux
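A hedged guess at a workaround (my assumption, not a confirmed fix): Jinja substitutes the raw value, so a string parameter renders as an unquoted SQL identifier; quoting the placeholder in the template should work in the meantime:
```python
# Hypothetical workaround: quote the Jinja placeholder so the rendered SQL
# contains a string literal ('A') rather than a bare identifier (A).
dag("""
SELECT *
FROM df
WHERE id = '{{x}}'
PRINT
""")
```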
| 0easy
|
Title: Put all docarray method in document and da into a common namespace
Body: # context
as we allow flexible schemas, we need a way to prevent users from using a key name that is already used by docarray. The best way to do this is to have a namespace for all of the methods of `Document`.
Example:
`doc.from_protobuf -> doc.doc_from_protobuf`
or
`doc.da_from_protobuf`
then we just need to tell the user that they cannot have a key name that starts with `doc`, as it is reserved for docarray methods.
Pydantic v2 will have a similar approach : https://docs.pydantic.dev/blog/pydantic-v2/#model-namespace-cleanup | 0easy
|
Title: Fix left bar scrolling CSS
Body: When expanding the left bar in the docs, the bottom sections are hidden - even if you scroll down, you won't see the **Community** section:

| 0easy
|
Title: [cfg] We don't visit functions in while loops
Body: You can see we do it for `for` loops and not while loops https://github.com/python-security/pyt/blob/346a2d3070f70efbf81be44c261f356ba6fc0f1c/pyt/cfg/stmt_visitor.py#L513-L529 | 0easy
|
Title: [k8s] Deprecate `nodeport` networking mode
Body: We no longer maintain and support `config.kubernetes.networking: nodeport`. We should remove that field and any codepaths using it. | 0easy
|
Title: CI should run st2-self-check for every PR
Body: Apart from Unit and Integration tests, we should also run the `st2-self-check` script for every PR as part of the CI pipeline.
Something we do in e2e and manual testing.
It should be possible with the Github Actions and would help us to catch bugs before merging into `master`.
Ex #5489. | 0easy
|
Title: Add `anchor` Attribute to `rio.Slider` Component
Body: It would be beneficial to enhance the `rio.Slider` component by introducing an additional attribute called `anchor`. This attribute will accept a Literal type with the values "left" or "right", defaulting to "left". This feature will make rio.Slider more versatile and allow developers to specify the anchor position of the slider.
### Benefits:
- Increased versatility of the rio.Slider component.
- Better alignment control in various UI layouts.
- Improved developer experience by providing more customization options.
### Proposed API:
```python
class Slider(FundamentalComponent):
    def __init__(self, ..., anchor: Literal["left", "right"] = "left"):
```
`rio.Slider(..., anchor = "left")` **or** `rio.Slider(...)`:

`rio.Slider(anchor = "right")`:

### Additional Context:
- Ensure the default behavior remains unchanged to maintain backward compatibility.
- Update relevant documentation to reflect the new attribute and its usage. | 0easy
|
Title: Links in README don't render properly on PyPi project page
Body: The README has links that don't work when viewed from PyPi. An [example](https://github.com/slackapi/python-slack-sdk/blame/main/README.md#L83), which uses `/tutorial` as a relative path, resulting in a 404. The Table of Content anchors also appear to not work as intended.
Links on the [project page](https://pypi.org/project/slack-sdk/#getting-started-tutorial) should be reviewed and updated to work properly.
### The page URLs
- https://pypi.org/project/slack-sdk/#getting-started-tutorial
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: Support julia 1.1
Body: It seems that julia 1.0 is supported, but not julia 1.1. Would be great to have support for 1.1 as well. | 0easy
|
Title: Camel to snake case conversion doesn't add an underscore after a digit
Body: The `pyutils.camel_to_snake` function does not add an underscore if a capital letter occurs after a non-letter character. For example, `camel_to_snake("python2Thing")` returns `python2thing` instead of `python2_thing`. This seems counterintuitive, as it changes the case of the `T` without adding an underscore as usual.
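A hedged sketch of the expected behavior (the regex and function body are illustrative, not the library's actual implementation):
```python
import re

def camel_to_snake(name: str) -> str:
    # Insert an underscore before any capital that follows a lowercase
    # letter or a digit, then lowercase the result (sketch only).
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()

assert camel_to_snake("python2Thing") == "python2_thing"
assert camel_to_snake("camelCase") == "camel_case"
``` | 0easy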
|
Title: Network throughput eval freezes in certain network conditions
Body: For example, this happens on IPv6-only corporate networks, or when third-party throughput eval servers are down. The `timeout` argument of the `speedtest-cli` module doesn't help.
We need to run the network throughput eval in a separate process and kill it after N seconds if it doesn't yield a result.
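A minimal sketch of that approach (assuming the `speedtest` module from speedtest-cli; names like `run_speedtest` are illustrative):
```python
# Sketch: run the throughput eval in a child process with a hard timeout.
import multiprocessing as mp

def run_speedtest(result_queue):
    import speedtest  # provided by speedtest-cli
    st = speedtest.Speedtest()
    result_queue.put({"download": st.download(), "upload": st.upload()})

def throughput_eval(timeout_s=30):
    queue = mp.Queue()
    proc = mp.Process(target=run_speedtest, args=(queue,))
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():      # still running after N seconds: kill it
        proc.terminate()
        proc.join()
        return None
    return queue.get() if not queue.empty() else None
``` | 0easy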
|
Title: docker image for mac m1 ship
Body: ### Describe the bug
no matching manifest for linux/arm64/v8 in the manifest list entries
### To Reproduce
have a mac m1 with apple ship
run `docker pull ghcr.io/coqui-ai/tts-cpu`
> docker pull ghcr.io/coqui-ai/tts-cpu
>
> Using default tag: latest
> latest: Pulling from coqui-ai/tts-cpu
> no matching manifest for linux/arm64/v8 in the manifest list entries
### Expected behavior
pull the image
### Logs
_No response_
### Environment
```shell
Mac with M1 apple ship
```
### Additional context
_No response_ | 0easy
|
Title: Trend Regularity Adaptive Moving Average [LUX]
Body: Would be awesome to see this indicator on pandas_ta! It's a moving average that adapts to the current trend by taking into account the average of high/lows during a selected period. Link: https://www.tradingview.com/v/p8wGCPi6/
Pinescript code:
```
// This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
// © LuxAlgo
//@version=4
study("Trend Regularity Adaptive Moving Average","TRAMA",overlay=true)
length=input(99),src = input(close)
//----
ama = 0.
hh = max(sign(change(highest(length))),0)
ll = max(sign(change(lowest(length))*-1),0)
tc = pow(sma(hh or ll ? 1 : 0,length),2)
ama := nz(ama[1]+tc*(src-ama[1]),src)
plot(ama,"Plot",#ff1100,2)
```
My attempt: I'm not so good at Pine Script, and I'm having a lot of trouble understanding how to translate the lines `tc = pow(sma(hh or ll ? 1 : 0,length),2)` and `ama := nz(ama[1]+tc*(src-ama[1]),src)` to Python.
So far I've managed to convert the first lines to Python, but I'm not sure if I'm doing it right:
```python
#for each row's close, get the highest value in the last n rows
ohlcv['hh'] = ohlcv['close'].rolling(window=99).max().shift(1).fillna(0)
ohlcv['ll'] = ohlcv['close'].rolling(window=99).min().shift(1).fillna(0)
ohlcv['ll'] = ohlcv['ll']*-1
#diff between current hh row and previous row
ohlcv['hh'] = ohlcv['hh'].diff()
ohlcv['ll'] = ohlcv['ll'].diff()
#set ohlcv['hh'] to 1 if it's greater than 0, 0 if equal to zero else -1
ohlcv['hh'] = np.where(ohlcv['hh'] > 0, 1, np.where(ohlcv['hh'] == 0, 0, -1))
ohlcv['ll'] = np.where(ohlcv['ll'] > 0, 1, np.where(ohlcv['ll'] == 0, 0, -1))
#set ohlcv['hh'] to ohlcv['hh'] if higher than 0, else 0
ohlcv['hh'] = np.where(ohlcv['hh'] > 0, ohlcv['hh'], 0)
ohlcv['ll'] = np.where(ohlcv['ll'] > 0, ohlcv['ll'], 0)
```
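For the two remaining lines, here is a hedged, unverified reading of the Pine semantics: `sma(hh or ll ? 1 : 0, length)` is a rolling mean of a 0/1 flag, and the `ama` recursion must be computed iteratively because each value depends on the previous one:
```python
import numpy as np

length = 99
src = ohlcv['close']

# hh/ll as 0/1 flags, as computed above
flag = ((ohlcv['hh'] > 0) | (ohlcv['ll'] > 0)).astype(float)

# tc = pow(sma(hh or ll ? 1 : 0, length), 2)
tc = flag.rolling(window=length).mean() ** 2

# ama := nz(ama[1] + tc*(src - ama[1]), src)
ama = np.empty(len(src))
ama[0] = src.iloc[0]
for i in range(1, len(src)):
    if np.isnan(tc.iloc[i]):
        ama[i] = src.iloc[i]  # nz(...) falls back to src
    else:
        ama[i] = ama[i - 1] + tc.iloc[i] * (src.iloc[i] - ama[i - 1])
ohlcv['trama'] = ama
```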
| 0easy
|
Title: [MNT] Migration to newer skforecast interface
Body: _Originally posted by @JavierEscobarOrtiz in https://github.com/sktime/sktime/issues/6531#issuecomment-2499989752_
> Hello @fkiraly @Abhay-Lejith @yarnabrina
>
> I’m Javier, one of the co-authors of skforecast. First of all, thank you so much for including our library as a wrapper in sktime. It’s amazing to see how our work helps other teams!
>
> I wanted to let you know that in skforecast 0.14.0, we’ve implemented a major refactoring aimed at improving performance and usability. For instance, the class ForecasterAutoreg has been renamed to ForecasterRecursive:
>
> ```python
> # Before Skforecast 0.14
> from skforecast.ForecasterAutoreg import ForecasterAutoreg
>
> # From skforecast 0.14.0 onward
> from skforecast.recursive import ForecasterRecursive
> ```
>
> Same Forecaster but new name and new features.
>
> If you need help with the integration, just ping us! 😄 We have also created a [migration guide](https://skforecast.org/0.14.0/user_guides/migration-guide) section for anyone who wants to update to the de version.
>
> Best,
>
> Javi, @JoaquinAmatRodrigo
| 0easy
|
Title: Have st2 pack install checkout git submodules when cloning a pack repo
Body: When a pack is installed on StackStorm using `pack install`, StackStorm performs a clone of the repo ([code](https://github.com/StackStorm/st2/blob/dfab25aab0468f330a635db55f01286523217135/st2common/st2common/util/pack_management.py#L205)); however, this results in any submodules present in the repo not being checked out, leaving only an empty directory.
It would be useful if the `clone_repo` function also checked out any submodules present in the repo, as it would be helpful to add dependencies maintained in separate repos to the pack as submodules, while still being maintained externally.
In my use case, a series of Ansible playbooks is maintained in a separate repo and I would like to also make them available as a pack to st2; as I see it, the best way to do that is via git submodules.
|
Title: Bad error message when function is annotated with an empty tuple `()`
Body: ```py
def bar(a: ()): ...
```
```robot
*** Settings ***
Library    asdf

*** Test Cases ***
Asdf
    bar    1    # ValueError: Argument 'a' got value '1' that cannot be converted to .
``` | 0easy
|
Title: Expose unix_time_to_utc to Jinja
Body: I have a need for `unix_time_to_utc` in the Jinja template but it isn't currently exposed. Ultimately it's to convert between an Epoch timestamp to ISO 8601. | 0easy
|
Title: docs: little updates
Body: Add to [this section](https://faststream.airt.ai/latest/getting-started/subscription/annotation/#json-basic-serialization)
* [ ] partial body consuming example (https://github.com/airtai/faststream/pull/890#issuecomment-1835856313)
* [ ] detail serialization rule notice (#1152)
* [ ] edit [FastAPI broker section](https://faststream.airt.ai/latest/getting-started/integrations/fastapi/#accessing-the-broker-object) to use context (also test it with multiple routers) | 0easy
|
Title: Restructure the functions in anomaly data output view API
Body: Right now, the controller method and the API endpoint both exist [in the same file](https://github.com/chaos-genius/chaos_genius/blob/main/chaos_genius/views/anomaly_data_view.py).
We need to restructure and refactor some functions.
- [ ] Create a new file `anomaly_data_controller.py` inside the controller folder
- [ ] Move the functions which are interacting with DB in the controller file and import from there in the `anomaly_data_view.py` | 0easy
|
Title: ValueError for backtesting_forecaster when interval is provided
Body: Hi!
I'm trying to use your ``backtesting_forecaster``, and when I ask for intervals it leads to a ``ValueError``.
When no intervals are requested, everything works perfectly:
**[in]:**
```python
if __name__ == "__main__":
    import pandas as pd
    from skforecast.ForecasterAutoreg import ForecasterAutoreg
    from skforecast.model_selection import backtesting_forecaster
    from sklearn.ensemble import RandomForestRegressor

    y_train = pd.Series([479.157, 478.475, 481.205, 492.467, 490.42, 508.166, 523.182,
                         499.634, 495.88, 494.174, 494.174, 490.078, 490.078, 495.539,
                         488.713, 485.3, 493.491, 492.126, 493.832, 485.983, 481.887,
                         474.379, 433.084, 456.633, 477.451, 468.919, 484.959, 471.99,
                         486.324, 498.61, 517.381, 485.3, 480.864, 485.983, 484.276,
                         490.761, 490.078, 494.515, 495.88, 493.15, 491.443, 490.42,
                         485.3, 485.3, 486.665, 467.895, 441.616, 469.601, 477.11,
                         486.324, 485.3, 489.054, 494.856, 513.968, 544.683, 557.31,
                         574.374, 603.383, 617.034, 621.812, 627.273, 612.598, 598.605,
                         610.891, 598.605, 563.112, 542.635, 536.492, 499.634, 456.633,
                         431.037, 453.903, 464.141, 454.244, 456.633, 476.768, 495.88,
                         523.524, 537.516, 577.787, 600.994, 616.693, 631.71, 636.487,
                         621.471, 635.805, 625.908, 616.011, 581.2, 565.842, 553.556,
                         570.279, 514.992, 483.253, 460.046, 469.26, 475.745, 478.816,
                         482.57, 506.801, 510.896])
    backtesting_forecaster(
        forecaster=ForecasterAutoreg(regressor=RandomForestRegressor(random_state=42), lags=10),
        y=y_train,
        steps=24,
        metric="mean_absolute_percentage_error",
        initial_train_size=14,
        n_boot=50,
    )
```
**[out]:**
```python
(array([0.07647964]),
           pred
 14   493.40924
 15   493.17717
 16   492.99968
 17   492.98603
 18   492.69932
 ..         ...
 96   492.98603
 97   492.98603
 98   492.98603
 99   492.98603
 100  492.98603
 [87 rows x 1 columns])
```
However, asking for intervals leads to a ``ValueError``:
**[in]:**
```python
if __name__ == "__main__":
    import pandas as pd
    from skforecast.ForecasterAutoreg import ForecasterAutoreg
    from skforecast.model_selection import backtesting_forecaster
    from sklearn.ensemble import RandomForestRegressor

    y_train = pd.Series([479.157, 478.475, 481.205, 492.467, 490.42, 508.166, 523.182,
                         499.634, 495.88, 494.174, 494.174, 490.078, 490.078, 495.539,
                         488.713, 485.3, 493.491, 492.126, 493.832, 485.983, 481.887,
                         474.379, 433.084, 456.633, 477.451, 468.919, 484.959, 471.99,
                         486.324, 498.61, 517.381, 485.3, 480.864, 485.983, 484.276,
                         490.761, 490.078, 494.515, 495.88, 493.15, 491.443, 490.42,
                         485.3, 485.3, 486.665, 467.895, 441.616, 469.601, 477.11,
                         486.324, 485.3, 489.054, 494.856, 513.968, 544.683, 557.31,
                         574.374, 603.383, 617.034, 621.812, 627.273, 612.598, 598.605,
                         610.891, 598.605, 563.112, 542.635, 536.492, 499.634, 456.633,
                         431.037, 453.903, 464.141, 454.244, 456.633, 476.768, 495.88,
                         523.524, 537.516, 577.787, 600.994, 616.693, 631.71, 636.487,
                         621.471, 635.805, 625.908, 616.011, 581.2, 565.842, 553.556,
                         570.279, 514.992, 483.253, 460.046, 469.26, 475.745, 478.816,
                         482.57, 506.801, 510.896])
    backtesting_forecaster(
        forecaster=ForecasterAutoreg(regressor=RandomForestRegressor(random_state=42), lags=10),
        y=y_train,
        steps=24,
        metric="mean_absolute_percentage_error",
        initial_train_size=14,
        interval=[95],
        n_boot=50,
    )
```
**[out]:**
```python
Traceback (most recent call last):
  File "...\lib\site-packages\IPython\core\interactiveshell.py", line 3398, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-6-9cd2f471a2e7>", line 22, in <cell line: 22>
    backtesting_forecaster(
  File "...\lib\site-packages\skforecast\model_selection\model_selection.py", line 925, in backtesting_forecaster
    metric_value, backtest_predictions = _backtesting_forecaster_no_refit(
  File "...\lib\site-packages\skforecast\model_selection\model_selection.py", line 705, in _backtesting_forecaster_no_refit
    pred = forecaster.predict_interval(
  File "...\lib\site-packages\skforecast\ForecasterAutoreg\ForecasterAutoreg.py", line 757, in predict_interval
    predictions = pd.DataFrame(
  File "...\lib\site-packages\pandas\core\frame.py", line 694, in __init__
    mgr = ndarray_to_mgr(
  File "...\lib\site-packages\pandas\core\internals\construction.py", line 351, in ndarray_to_mgr
    _check_values_indices_shape_match(values, index, columns)
  File "...\dev\lib\site-packages\pandas\core\internals\construction.py", line 422, in _check_values_indices_shape_match
    raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (24, 2), indices imply (24, 3)
```
| 0easy
|
Title: Clustering example
Body: *Note: working on this requires some familiarity with ploomber*
We have a [classification example](https://github.com/ploomber/projects/tree/master/ml-basic) but we're missing a clustering one. It'd be great to have one with the following tasks:
1. loading data (can be some [sample dataset](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets))
2. cleaning data (sample datasets are in good shape but adding some cleaning steps is good for the sake of the example)
3. visualize clean data
4. train model
5. evaluate model
The structure is:
```
load -> clean -> viz
clean -> train -> evaluate
```
Please make each task a script.
Whatever metric/plot you use to evaluate the model, please add a short explanation and a link to learn more.
Check out the [contributing guide](https://github.com/ploomber/projects/blob/master/CONTRIBUTING.md) for details.
| 0easy
|
Title: Captcha in ActivateAccountActionHandler hangs on wrong response
Body: See https://github.com/desec-io/desec-stack/pull/754#issuecomment-1887977612 | 0easy
|
Title: implement more models
Body: | 0easy
|
Title: KElbowVisualizer should take a list of k instead of a tuple
Body: This is just a suggestion: KElbowVisualizer should let the user choose the values of k rather than forcing a continuous range.
This is especially useful when you suspect you have a large number of clusters and want to explore values of k in steps of 10 or 100.
Anyway, thanks for your work. | 0easy
|
Title: could you add SMIIO?
Body: Hi twopirllc,
I see that you maintain pandas-ta very diligently. Thank you for your work.
I found a useful indicator: SMIIO, the SMI Ergodic Indicator/Oscillator.
https://www.tradingview.com/chart/LOCO/wsKsWzWo-Momentum-based-SMIIO-Indicator/
Yes, I found Stochastic in pandas-ta, but I think Stochastic is too sensitive.
I tried to code SMIIO but without success.
Could you add SMIIO to pandas-ta? | 0easy
|
Title: Variables are not resolved in keyword name in WUKS error message
Body: RF version: 6.1.1
```
*** Settings ***
Documentation    scratch

*** Test Cases ***
Test
    KW    Fail

*** Keywords ***
KW
    [Arguments]    ${name}
    Wait Until Keyword Succeeds    3x    1s    ${name}
```

| 0easy
|
Title: Japanese: Apply #479 changes to the documents
Body: Refer to https://github.com/slackapi/bolt-python/pull/479 for details
### The page URLs
* https://slack.dev/bolt-python/ja-jp/concepts#view_submissions
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: Removing a thread from awaiting a response should be safer
Body: When a thread has already been removed, or is not in the list for some reason, the removal should just silently pass.
```python
Traceback (most recent call last):
  File "/home/bots/discord_bots/GPT3Discord/models/index_model.py", line 699, in index_webpage
    raise ValueError(
ValueError: Invalid URL or could not connect to the provided URL.
The summary is None
Ignoring exception in on_message
Traceback (most recent call last):
  File "/home/bots/.local/lib/python3.9/site-packages/discord/client.py", line 378, in _run_event
    await coro(*args, **kwargs)
  File "/home/bots/discord_bots/GPT3Discord/cogs/index_service_cog.py", line 168, in on_message
    self.thread_awaiting_responses.remove(message.channel.id)
ValueError: list.remove(x): x not in list
```
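A minimal sketch of the suggested guard (illustrative):
```python
# Silently ignore threads that were already removed from the awaiting list.
try:
    self.thread_awaiting_responses.remove(message.channel.id)
except ValueError:
    pass
```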
| 0easy
|
Title: Gallery examples use old import convention.
Body: ### Description:
We should `import skimage as ski` in all gallery examples!
This issue may (should) be addressed in small batches of examples.
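For illustration, a gallery snippet before and after the convention change (example code, assuming a recent scikit-image where submodules are importable through the top-level package):
```python
# Old convention seen in gallery examples:
from skimage import data, filters
edges = filters.sobel(data.camera())

# New convention to apply everywhere:
import skimage as ski
edges = ski.filters.sobel(ski.data.camera())
``` | 0easy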
|
Title: Support generic types for `union` and `union_all`
Body: ### Describe the use case
For `union` and `union_all`, which both have signatures accepting multiple `Select`s, the generic type information is lost in the union query. Ideally, the return type of these functions should be the most specific supertype of each of the input `Select`s.
### Databases / Backends / Drivers targeted
I don't think this covers any specific drivers or backends.
### Example Use
Here's an example that shows how this addition to typing could help to catch a programming error:
```python
from typing import Any, TypeVar
from sqlalchemy import select
from sqlalchemy.orm import Session
from sqlalchemy.sql.elements import ColumnElement
from sqlalchemy.sql.expression import union
from sqlalchemy.sql.selectable import Select
sess: Session
str_c: ColumnElement[str]
int_c: ColumnElement[int]
q1 = select(str_c, int_c)
cursor1 = sess.execute(q1).tuples()
reveal_type(cursor1) # example.py:15: note: Revealed type is "sqlalchemy.engine.result.TupleResult[tuple[builtins.str, builtins.int]]"
for row1 in cursor1:
    x1: int
    y1: str
    x1, y1 = row1  # example.py:19: error: Incompatible types in assignment (expression has type "str", variable has type "int") [assignment]  example.py:19: error: Incompatible types in assignment (expression has type "int", variable has type "str") [assignment]
q2 = select(str_c, int_c)
union_query = union(q1, q2)
cursor2 = sess.execute(union_query).tuples()
reveal_type(cursor2) # example.py:25: note: Revealed type is "sqlalchemy.engine.result.TupleResult[Any]"
for row2 in cursor2:
    x2: int
    y2: str
    x2, y2 = row2  # this should be a typing error
T = TypeVar("T", bound=tuple[Any, ...])
# define a union function that takes selects with generic types
def example_union(*selects: Select[T]) -> Select[T]: ... # type: ignore[empty-body]
example_union_query = example_union(q1, q2)
cursor3 = sess.execute(example_union_query).tuples()
reveal_type(cursor3) # example.py:40: note: Revealed type is "sqlalchemy.engine.result.TupleResult[tuple[builtins.str, builtins.int]]"
for row3 in cursor3:
    x3: int
    y3: str
    x3, y3 = row3  # example.py:44: error: Incompatible types in assignment (expression has type "str", variable has type "int") [assignment]  example.py:44: error: Incompatible types in assignment (expression has type "int", variable has type "str") [assignment]
```
### Additional context
_No response_ | 0easy
|
Title: Add new section about style to User Guide
Body: As discussed, it would be useful to have a chapter about style in the User Guide, focusing on the most basic things. We could then refer to it in the certificate syllabus. | 0easy
|
Title: gsheets error: Unknown pattern "d/m/yyyy".
Body: It is getting parsed properly in Google Sheets, even showing a calendar dropdown there, but I get this error when importing into Superset. @betodealmeida | 0easy
|
Title: Implement more "reply to webhook" classes
Body: After reviewing #804, I've found that some methods aren't implemented as reply to webhook classes.
Missing classes I've discovered:
- [ ] `SendAnimation`
- [ ] `SendPoll`
- [ ] `SendDice`
- [ ] I'm sure there are more to find
_Originally posted by @evgfilim1 in https://github.com/aiogram/aiogram/issues/804#issuecomment-1025686058_ | 0easy
|
Title: Indicator: Fisher Transform (FISHT)
Body: The indicator in its current form does not return the signal line. Could this be updated to return a DataFrame with the FT and FT signal values?
https://library.tradingtechnologies.com/trade/chrt-ti-ehler-fisher-transformation.html | 0easy
|
Title: Participate in `FigureFriday` and create a Vizro app (for multiple participants)
Body: ## Good choice! 👋
### You will learn 📚:
- Navigating through technical documentation
- Working with Plotly charts and customizing them via post update calls
- Utilizing PyCafe for development and building your own portfolio
- Creating your Vizro app from scratch
---
## Instructions
1. **Explore [Plotly's FigureFriday](https://community.plotly.com/tag/figure-friday) and choose any week you would like to participate in.** Reviewing previous weeks can inspire your potential submissions. Your task is to take the relevant data set and enhance the existing Plotly visualization. This could involve creating a new chart or improving the current chart code.
2. **Create an account on pyCafe if not already done:** https://py.cafe/ Fork the [Vizro project template](https://py.cafe/snippet/vizro/v1#c=H4sIAAGu-2YAA61VTW_jNhD9K6xyqAPIcpzttoUBt2jTdvfQw2IPuwc7CGhpZBGhSJak7ChB_nvfUHYaOwkSoCsdRA7n481w3uguK21F2Sw7EV_UrbdCBSGNsI7MONjOlySitfpaRVFbL0pPMiqzFq2tOi29qGSUYqNCJ7W6xZE1QjqnVZnWoViaE1E2VF4L20XRxOjCbDJZq9h0q6K07aQtr5UJ1E82KTwHaa0noUxthVyxVQLGjqSpBmePfSW7Ariq2FBly1AoOyEzCVGuNE2SR4i7lkxMoIBpaVTrrI9AzsZO26j7gm6cp4D8g3A3S1N72w4KYqe9A3Jgi0KQTjablh1XtZjDvODCFMqrMDplsZNrwsGmLT5hNVoagSeqqGm-zIbKo3Sf-gtZ0zLLh3Mte2Q6h9HfaTVae1XNF4uzXEwvc7E4z8X54--7XLx7-GLh7faqVeaqIbVuIgJNfzhDZtnpzj_q76xBXcJ8MUj4QbgL6asdxv0T6YY9pPfggJ-TkxPxtZGRuycl8-tTnd--aVcde38q-SydqnQvAul6HMhvELMhXFSgdgW5rUXZhWhbdUsVIoZmZZF2QOcJFK2LFMRYbNGp3G1sagiKjFdWG2lKbMAdhs0dRkGtjUALkVeEQ9hGO2RGotZ0o9CNqYNDKTW3Zo7rhnMzJsNbxvBK1kfV31_j_7i0D4mI3KDHpxd7nqX7_D6w6sduJVIjcxHqzqMoPhHVtzu4SM-TJhlQLYsKFuLCmujVqkspCAlmS72VfRBb0ug_-m6A9ygVfhpPNYC_Oi8ODI_q8cFL14xAmGWGksdI_qpspI-wEbVad57m4OnuaFTVubhhVXJSX2ky69iwZg-ZowjZVmHCsKi02npWdVQqCuDTy5EbFeJzYVlu1162R4HfEOTygb2orNXgLkL-pTRnAauuNQdmOSM6Ph4y2mf5vM4BoFOOmibZA1OGcfbHfjvizgAY_lxCM_XN6LRYdUpXower08J3hmdilmee_umUJ57MAb-gdKO_zM-KaTH9mef9irTdYi6Crcw9UCQSz17pFUSgFzoBeejETg2iBUxl8oFbEfQeZjfWri9KnqvGd3U9R4Afiyn6H-OmvE5AYu_4H5gMsAXrvijaZrNa6kB5RpWKfyaSZrPoO0hcYi5MXI8RUNF4c1acwyuMh6Gdze6yHZRsdo5UrY2fLVze7YNhOEO7bFAdT1BaPJzgxxUo4jDVPptN35_lGUbS12H7bth9TEN92KoKZrXS9Du8YtYx6aQy5F-IwKrj1aALFSfZb5bdX97nT1G8APE_O5SrcP2h9Ruyez2jHcLXstkn8jz84eeHKv70_i0xQQGIpX4t6F6Po_J7n6eqoo8Xl_f_AnooWrNWCQAA) to your account. This will allow you to customize the Vizro app.
3. **Get familiar with Vizro's app configuration and add some changes.** Refer to [Vizro's documentation](https://vizro.readthedocs.io/en/latest/pages/user-guides/install/) to understand how to configure the models. A minimal starter app is sketched after this list. Here are some ideas for modifications:
- [ ] Add a Graph to your Page and [insert a Plotly chart](https://vizro.readthedocs.io/en/latest/pages/user-guides/graph/#insert-plotly-chart)
- [ ] Add a Graph to your Page and [insert a custom Plotly chart](https://vizro.readthedocs.io/en/latest/pages/user-guides/custom-charts/)
- [ ] Add your Page to the Dashboard
- [ ] Add a title to your Dashboard
- [ ] Add a logo to your Dashboard, e.g. take the vizro logo [here](https://github.com/mckinsey/vizro/blob/main/vizro-core/examples/dev/assets/images/logo.svg)
- [ ] Add controls to your Dashboard (Filter or Parameter)
- [ ] Add a second Page with an AgGrid to your Dashboard
- [ ] Experiment with the Navigation e.g. create a NavBar
- [ ] (Optional) Upload your app on py.Cafe and submit your app to the Figure Friday challenge
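For orientation, a minimal self-contained Vizro app covering several items from the list above (a Plotly chart in a Graph, a page, a dashboard title, a filter); the dataset and titles are illustrative:
```python
import vizro.models as vm
import vizro.plotly.express as px
from vizro import Vizro

df = px.data.iris()

page = vm.Page(
    title="Iris explorer",
    components=[
        vm.Graph(figure=px.scatter(df, x="sepal_length", y="petal_width", color="species")),
    ],
    controls=[vm.Filter(column="species")],  # adds a species filter to the page
)

dashboard = vm.Dashboard(pages=[page], title="My FigureFriday dashboard")
Vizro().build(dashboard).run()
```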
**Useful resources:**
- Vizro documentation: https://vizro.readthedocs.io/en/latest/pages/user-guides/components/
- Existing Figure Friday submissions using Vizro: https://py.cafe/huong-li-nguyen
- First tutorial on dashboards: https://vizro.readthedocs.io/en/stable/pages/tutorials/explore-components/
- Plotly documentation: https://plotly.com/python/plotly-express/
- Plotly layout: https://plotly.com/python/reference/layout/ | 0easy
|
Title: BUG: pd.Series.rename(..., inplace=True) returns a pd.Series and not None
Body: ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
In [1]: import pandas as pd
In [2]: pd.Series([1, 2, 3])
Out[2]:
0 1
1 2
2 3
dtype: int64
In [3]: pd.Series([1, 2, 3]).rename('A')
Out[3]:
0 1
1 2
2 3
Name: A, dtype: int64
In [4]: pd.Series([1, 2, 3]).rename('A', inplace=True)  # should return None
Out[4]:
0 1
1 2
2 3
Name: A, dtype: int64
```
### Issue Description
According to the [documentation](https://pandas.pydata.org/docs/dev/reference/api/pandas.Series.rename.html), inplace operations should return None and only modify the object in place without making a deep copy.
However, when running the rename operation on a `pd.Series` with `inplace=True`, the return type is `pd.Series` and not None.
From a quick look, this seems to originate in the `_set_name` operation, which returns an object regardless of the value of `inplace` (cf. https://github.com/pandas-dev/pandas/blob/6bcd30397d67c3887288c7a82c2c235ce8bc3c7f/pandas/core/series.py#L1835-L1850).
The `rename` operation then simply returns the result of `_set_name` without distinguishing on the `inplace` value.
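For illustration, a minimal sketch of the shape of a possible fix (simplified, not the actual pandas implementation), making the return depend on `inplace`:
```python
# simplified sketch of Series._set_name -- not the actual pandas code
def _set_name(self, name, inplace: bool = False):
    ser = self if inplace else self.copy()
    ser.name = name
    return None if inplace else ser
```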
### Expected Behavior
The docs suggest that the return of the `rename(..., inplace=True)` should be None.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.1
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:06 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T8103
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.1.3
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.29.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : 3.9.2
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| 0easy
|
Title: Caching value_equality_values?
Body: **Is your feature request related to a use case or problem? Please describe.**
Are we "allowed" to cache the `_value_equality_values`? If so, is there a standard way to do this?
I'm in particular looking at PhasedXZGate: https://github.com/quantumlib/Cirq/blob/b28bfce0b91437cc151c0a6b6f0fc9f0f8fe5942/cirq-core/cirq/ops/phased_x_z_gate.py#L123 and computing the hash here is relatively expensive.
**Describe the solution you'd like**
I suppose decorating with `@_compat.cached_method` would be the easiest thing.
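For illustration, the suggested decoration might look like this (a sketch assuming caching is safe here because the gate is immutable; attribute names taken from the linked source):
```python
from cirq import _compat

class PhasedXZGate:
    ...

    @_compat.cached_method
    def _value_equality_values_(self):
        # computed once per instance instead of on every hash/equality check
        return self._x_exponent, self._z_exponent, self._axis_phase_exponent
```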
**[optional] Describe alternatives/workarounds you've considered**
**[optional] Additional context (e.g. screenshots)**
**What is the urgency from your perspective for this issue? Is it blocking important work?**
P1 - I need this no later than the next release (end of quarter)
| 0easy
|
Title: [Feature request] Add apply_to_images to PixelDistributionAdaptation
Body: | 0easy
|
Title: Dash Cytoscape: control the scroll wheel sensitivity for zooming
Body: Hello,
I need to control the scroll wheel sensitivity to zoom on canvas and change the scale of graphs. I noticed that cytoscape.js and react-cytoscapejs have supported the wheel-sensitivity property in their last updates to control the scroll to zoom in/out on graphs. I was wondering if there is a way to limit the fast scroll behavior in dash-cytoscape?
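For illustration, the usage I'm after might look like this (`wheelSensitivity` is the cytoscape.js option name; exposing it as a dash-cytoscape prop is exactly what's being requested, so treat the prop as hypothetical):
```python
import dash_cytoscape as cyto

graph = cyto.Cytoscape(
    id="cytoscape",
    elements=[{"data": {"id": "a", "label": "A"}}, {"data": {"id": "b", "label": "B"}}],
    layout={"name": "grid"},
    # hypothetical prop: values below 1 slow down scroll-wheel zooming, as in cytoscape.js
    wheelSensitivity=0.1,
)
```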
Thanks,
| 0easy
|
Title: RareLabelEncoder allow user to decide maximum number of returned categories per variable
Body: Add a parameter with which the user can specify the maximum number of categories that they want for the variables | 0easy
|
Title: Error in Golden Features: Please provide the true labels explicitly through the labels argument.
Body: Error when training with Golden Features:
```py
y_true contains only one label (1). Please provide the true labels explicitly through the labels argument.
```
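This error message comes from `sklearn.metrics.log_loss` when the evaluated sample contains only one class (my reading of the message, not stated explicitly in the report); the usual fix is to pass the label set explicitly:
```python
from sklearn.metrics import log_loss

y_true = [1, 1, 1]                 # only one class present in the sample
y_pred = [0.9, 0.8, 0.95]          # predicted probability of class 1
# log_loss(y_true, y_pred)         # raises the error quoted above
print(log_loss(y_true, y_pred, labels=[0, 1]))  # works once the labels are given
```
| 0easy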
|
Title: Fit time timer utility
Body: Add timer utility functionality to the library to adapt any learned `fit_time_` parameters:
```python
import time
from functools import wraps
from dateutil.relativedelta import relativedelta
def humanizedelta(*args, **kwargs):
    """
    Wrapper around dateutil.relativedelta (same constructor args); returns
    a humanized string representing the delta in a meaningful way.
    """
    delta = relativedelta(*args, **kwargs)
    attrs = ('years', 'months', 'days', 'hours', 'minutes', 'seconds')
    parts = [
        '%d %s' % (getattr(delta, attr), attr if getattr(delta, attr) > 1 else attr[:-1])
        for attr in attrs if getattr(delta, attr)
    ]
    return " ".join(parts)


class Timer(object):
    """
    A context object timer. Usage:
        >>> with Timer() as timer:
        ...     do_something()
        >>> print(timer.interval)
    """

    def __init__(self, wall_clock=True):
        """
        If wall_clock is True then use time.time() to get the number of
        actually elapsed seconds. If wall_clock is False, use
        time.process_time() to get the process time instead (time.clock
        was removed in Python 3.8).
        """
        self.wall_clock = wall_clock
        self.time = time.time if wall_clock else time.process_time

    def __enter__(self):
        self.start = self.time()
        return self

    def __exit__(self, type, value, tb):
        self.finish = self.time()
        self.interval = self.finish - self.start

    def __str__(self):
        return humanizedelta(seconds=self.interval)


def timeit(func, wall_clock=True):
    """
    Times a function; returns the result along with the Timer.
    """
    @wraps(func)
    def timer_wrapper(*args, **kwargs):
        """
        Inner function that uses the Timer context object.
        """
        with Timer(wall_clock) as timer:
            result = func(*args, **kwargs)
        return result, timer
    return timer_wrapper
``` | 0easy
|
Title: provide a way to use hyperparameter-tuning
Body:
### Description
It would be great to provide a way to use hyperparameter tuning in igel. The user should be able to state in the yaml file that they want hyperparameter tuning, and igel would trigger it automatically before fitting a model.
This can be achieved using grid search or random search; sklearn already has APIs for this (see the sketch below). It just needs to be implemented.
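For illustration, a minimal sketch of what igel could wrap internally, using scikit-learn's existing search API (the data and grid here are placeholders):
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)  # stand-in for the user's dataset
param_grid = {"n_estimators": [100, 200], "max_depth": [5, 10]}  # would come from the yaml file
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```
Random search would swap in `RandomizedSearchCV` with the same interface.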
| 0easy
|
Title: Parser fails with numerical and logical AND
Body: ```xsh
xonfig
# xonsh 0.14.3
mkdir -p /tmp/abc /tmp/123 /tmp/a123
cd /tmp/abc && ls
# Result: empty
# Expected: empty
cd /tmp/a123 && ls
# Result: empty
# Expected: empty
# But:
cd /tmp/123 && ls
# Result: NameError: name 'cd' is not defined
# Expected: empty
echo / 123 && ls
# Result: NameError: name 'echo' is not defined
# Expected: `/ 123 ...`
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Add sigmoid_kernel Function
Body: The sigmoid kernel maps data into a new space via tanh. This is an easy-difficulty task, but it requires significant benchmarking to find when the scikit-learn-intelex implementation provides better performance. This project will focus on the public API and on including the benchmarking results for a seamless, high-performance user experience. Combined with the other kernel projects, this is a medium time commitment.
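For reference, a minimal usage of the scikit-learn function that the accelerated version would mirror, where `K[i, j] = tanh(gamma * <X[i], X[j]> + coef0)`:
```python
import numpy as np
from sklearn.metrics.pairwise import sigmoid_kernel

X = np.random.default_rng(0).random((5, 3))
K = sigmoid_kernel(X, gamma=0.5, coef0=1.0)  # pairwise tanh(0.5 * <x, y> + 1.0)
print(K.shape)  # (5, 5)
```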
Scikit-learn definition can be found at:
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.sigmoid_kernel.html
The onedal interface can be found at:
https://github.com/uxlfoundation/scikit-learn-intelex/blob/main/onedal/primitives/kernel_functions.py#L127 | 0easy
|
Title: [Feature request] Add apply_to_images to ToFloat
Body: | 0easy
|
Title: Add a button which displays the stdout of the job which executed the notebook
Body: | 0easy
|
Title: Condense hyper parameter example
Body: Can we shorten the example about [tuning hyper-parameters](https://scikit-optimize.github.io/notebooks/hyperparameter-optimization.html) while retaining the pipeline example?
One approach would be to shorten the explanatory text and generally condense things. It is good to show off the pipeline feature to make people aware of it, especially now that you can swap whole steps! At the same time, we want to avoid the wall-of-text problem.
| 0easy
|
Title: [BUG] support for `frame`s missing
Body: Hi,
I got this error, and when I tried to read your code, I think `switch_to.frame` is still not implemented correctly, because it treats a `frame` like an `iframe`, while the two have different element behaviour:
```javascript
top.window.frame2.document.querySelector('input')
```
**selenium_driverless**: 1.9.3.1
https://github.com/kaliiiiiiiiii/Selenium-Driverless/blob/348b95dbb0025d1343fc3c6ade3540129269ae37/src/selenium_driverless/scripts/switch_to.py#L119-L121
**frameset.html**
```html
<html>
<head>
<title>Frame tag</title>
</head>
<frameset cols="50%,50%">
<frame name="frame1" framespacing="0" border="0" frameborder="NO" src="input1.html">
<frame name="frame2" framespacing="0" border="0" frameborder="NO" src="input2.html">
</frameset>
</html>
```
**input1.html**
```html
<html>
<head></head>
<body>
<label>input1</label>
<input type="text" name="input1">
</body>
</html>
```
**input2.html**
```html
<html>
<head></head>
<body>
<label>input2</label>
<input type="text" name="input2">
</body>
</html>
```
`test.py`
```python
import asyncio
import time

from selenium_driverless import webdriver

async def main2():
    options = webdriver.ChromeOptions()
    async with webdriver.Chrome(options=options) as driver:
        await driver.get("http://localhost/frameset.html", wait_load=True)
        time.sleep(1)
        frame = await driver.switch_to.frame('frame2')

asyncio.run(main2())
```
**error**
```
Traceback (most recent call last):
File "C:\xampp\htdocs\other\mutasi-mudah\debug\test.py", line 60, in <module>
asyncio.run(main2())
File "C:\Python310\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
return future.result()
File "C:\xampp\htdocs\other\mutasi-mudah\debug\test.py", line 58, in main2
frame = await driver.switch_to.frame('frame2')
File "C:\Python310\lib\site-packages\selenium_driverless\scripts\switch_to.py", line 121, in frame
target = await self._context.current_target.get_target_for_iframe(frame_reference)
File "C:\Python310\lib\site-packages\selenium_driverless\types\target.py", line 267, in get_target_for_iframe
raise NoSuchIframe(iframe, "no target for iframe found")
selenium_driverless.types.target.NoSuchIframe: no target for iframe found
``` | 0easy
|
Title: Add `RadioButtons`
Body: ### Description
RadioButtons are a type of user interface element that allows users to select one option from a predefined set of options. Unlike checkboxes, which allow for multiple selections, RadioButtons ensure that only one option can be selected at a time. This feature is essential for forms where a single choice is required, such as selecting a payment method, choosing a subscription plan, or specifying a preferred contact method.
### Design Guideline
https://m3.material.io/components/radio-button/overview
### Suggested Solution
**Basic Functionality:**
- Selecting a different `RadioButton` deselects the currently selected one.
- Ensure only one `RadioButton` can be selected at a time within a group.
- Ensure `RadioButtons` are keyboard navigable.
- Support both horizontal and vertical alignment of `RadioButtons`.
- Include labels next to each `RadioButton` for clarity.
- Allow developers to group `RadioButtons` easily.
```python
RadioButton(
options: Mapping[str, T] | Sequence[T],
header: str = '', # or something similar?
    selected_value: T | None = None,
orientation: Literal["horizontal", "vertical"] = "horizontal",
is_sensitive: bool = True,
is_valid: bool = True,
...
)
```
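A hypothetical usage of this proposed signature (nothing here exists yet):
```python
# hypothetical: mirrors the proposed signature sketched above
RadioButton(
    options={"Credit card": "cc", "PayPal": "paypal", "Invoice": "invoice"},
    header="Payment method",
    selected_value="cc",
    orientation="vertical",
)
```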
**Essentially, it functions like a `Dropdown`?**
### Alternatives
_No response_
### Additional Context
_No response_
### Related Issues/Pull Requests
_No response_ | 0easy
|
Title: is_element_visible missing from API docs?
Body: [`is_element_visible`](https://github.com/cobrateam/splinter/blob/04a6da307f76d872ef1c227c47509fc983e5315c/splinter/driver/webdriver/__init__.py#L201) is a very helpful function, but haven't figured out why / what would be the clearest way to make it available in the docs (Should we make all `Window` available?) | 0easy
|
Title: Automate Katib Releases
Body: Currently, to make Katib releases we have to follow this manual process: https://github.com/kubeflow/katib/tree/master/docs/release
We run the `make release` command, build and publish the release Docker images locally, and publish the Katib SDK version.
Since we build the Docker images locally, our release images don't support multiple OS architectures: https://hub.docker.com/layers/kubeflowkatib/katib-controller/v0.14.0/images/sha256-51ca80d6005010ff08853a5f7231158cb695ea899b623200076cbc01509fc0b5?context=repo.
The release process should be automated. For example, we can utilise GitHub Actions to make Katib releases.
cc @tenzen-y @johnugeorge
---
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| 0easy
|
Title: Yanked dependency, good first issue
Body: ### Describe the bug
During install I got a warning:
Warning: The file chosen for install of jupyter-core 5.6.0 (jupyter_core-5.6.0-py3-none-any.whl) is yanked. Reason for being yanked: breaking change in loop handling
Crash:
```
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/core.py", line 137, in chat
for _ in self._streaming_chat(message=message, display=display):
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/core.py", line 166, in _streaming_chat
yield from terminal_interface(self, message)
File "/home/anton/Documents/dev/open-interpreter/interpreter/terminal_interface/terminal_interface.py", line 135, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/core.py", line 205, in _streaming_chat
yield from self._respond_and_store()
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/core.py", line 251, in _respond_and_store
for chunk in respond(self):
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/respond.py", line 35, in respond
rendered_system_message = render_message(interpreter, system_message)
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/render_message.py", line 17, in render_message
output = interpreter.computer.run(
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/computer/computer.py", line 44, in run
return self.terminal.run(*args, **kwargs)
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/computer/terminal/terminal.py", line 41, in run
for chunk in self._streaming_run(language, code, display=display):
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/computer/terminal/terminal.py", line 64, in _streaming_run
self._active_languages[language] = lang_class(self.computer)
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/computer/terminal/languages/jupyter_language.py", line 50, in __init__
for _ in self.run(code):
File "/home/anton/Documents/dev/open-interpreter/interpreter/core/computer/terminal/languages/jupyter_language.py", line 73, in run
with open(f"{skill_library_path}/{filename}.py", "w") as file:
FileNotFoundError: [Errno 2] No such file or directory: '/home/anton/.config/Open Interpreter Terminal/skills/function_name.py'
```
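The crash itself looks like a missing parent directory; a minimal sketch of the usual fix, with names taken from the traceback (my reading, not a confirmed patch):
```python
import os

# ensure the skills directory exists before writing the generated file;
# skill_library_path and filename are the names visible in the traceback
os.makedirs(skill_library_path, exist_ok=True)
with open(os.path.join(skill_library_path, f"{filename}.py"), "w") as file:
    ...  # write the generated skill as before
```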
### Reproduce
```
git clone https://github.com/KillianLucas/open-interpreter.git
poetry install
poetry run interpreter
```
### Expected behavior
Install jupyter core and run
### Screenshots
_No response_
### Open Interpreter version
latest git
### Python version
3.10
### Operating System name and version
Ubuntu 22.04
### Additional context
_No response_ | 0easy
|
Title: README license is incorrect
Body: ### Description:
The BSD license listed in our README is *not* the skimage license. The full license is:
https://github.com/scikit-image/scikit-image/blob/main/LICENSE.txt
This is the one that should be linked to, instead of inlining it in the README.
### Way to reproduce:
_No response_
### Traceback or output:
_No response_
### Version information:
_No response_ | 0easy
|