text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---
Title: Add option to run the changed files from the current branch
Body: Let's say you are working on a feature branch and would like to run all the tests that were changed when compared with the master. To check it, you can run: `git diff --name-only master`.
It would be nice to have this option in _picked_. I'm wondering if something like `pytest --picked=branch` would be the best way to go. | 0easy
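A rough sketch of how the plugin might collect the changed files for such an option; the helper name and filtering rule are assumptions, not the actual pytest-picked implementation:
```python
# Hypothetical sketch only: diff the working tree against master, as suggested
# above, and keep the Python test files. Not the actual pytest-picked code.
import subprocess

def changed_test_files(base_branch="master"):
    out = subprocess.run(
        ["git", "diff", "--name-only", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        path for path in out.splitlines()
        if path.endswith(".py") and ("test_" in path or path.endswith("_test.py"))
    ]

print(changed_test_files())
```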
|
Title: Add to FAQ: How to set cookies in simulate request for testing
Body: Hi, Is there any way to set cookies in `simulate_request` for the testing client?
I didn't find a method for setting cookies in the documentation. Setting `{'Set-Cookie': 'xxx=yyy'}` also does not work.
| 0easy
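One possible workaround that might be worth adding to the FAQ: since `Set-Cookie` is a response header, a request cookie has to be sent via the `Cookie` request header instead. A minimal sketch, assuming a recent Falcon version where `falcon.App` is available; the resource and route are placeholders:
```python
# Sketch only: send a cookie on a simulated request via the Cookie header.
import falcon
import falcon.testing as testing

class EchoCookieResource:
    def on_get(self, req, resp):
        # req.cookies exposes the parsed request cookies as a dict.
        resp.media = {"cookies": req.cookies}

app = falcon.App()
app.add_route("/echo", EchoCookieResource())

client = testing.TestClient(app)
result = client.simulate_get("/echo", headers={"Cookie": "xxx=yyy"})
print(result.json)  # {'cookies': {'xxx': 'yyy'}}
```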
|
Title: Duplicated event handlers in Graph
Body: In attempting to create an offshoot of Graph with custom click handling, I found that the JS event handlers are being re-appended every time the Graph is updated by a Dash callback. Here's a minimal example:
```python
import dash
from dash.dependencies import Input, Output
import dash_core_components as dcc
import dash_html_components as html
import plotly.graph_objs as go
app = dash.Dash('')
app.layout = html.Div([
dcc.Input(id='input'),
dcc.Graph(id='graph')
])
@app.callback(Output('graph', 'figure'),
[Input('input', 'value')])
def update(value):
return go.Figure(data=[go.Scatter(x=[1,2,3], y=[2, 1, 3])])
if __name__ == '__main__':
app.run_server()
```
If you log the ```plotly_click``` handler in Graph.react.js, you'll see what I mean. Without the Dash callback, the handler runs once every time you click on a point. Add the callback, and now the handler runs twice, initially. Then type away in the input, and twice becomes thrice. And so on.
Maybe I'm abusing the framework by playing with the plotly_click handler directly, but even so, this behavior seems mildly wasteful at best. And for adding customized click handling, it's a nightmare. I don't know enough React to prescribe a solution, but my guess is that the problem stems from the fact that ```plot``` always queues up ```bindEvents``` to run after ```Plotly.react``` completes. | 0easy
|
Title: [DOCS] Bayesian Kernel Density
Body: We might want to combine the GMM stuff with this under a `probabilistic` banner. | 0easy
|
Title: [Feature request] Add apply_to_images to RingOvershoot
Body: | 0easy
|
Title: Loading checkpoints behaviour not as expected
Body: I have a deep neural network model in a file called param_optimizer.py and I import the dimensions and the neural network's name to the file where I run gp_minimize():
````py
from skopt import gp_minimize, callbacks
from skopt.plots import plot_convergence, plot_objective, plot_gaussian_process
from matplotlib import pyplot as plt
from skopt.utils import dump, load
from param_optimizer import dims, search_space
import os
dimensions, network = dims()
output_folder = f"E:/PhD/hyperparam_opt/{network}/output/"
if os.path.exists(output_folder) is False:
os.makedirs(output_folder)
checkpoint = f"{output_folder}/checkpoint.pkl"
try:
res = load(checkpoint)
x0 = res.x_iters
y0 = res.func_vals
print(x0)
search_results = gp_minimize(search_space, dimensions, x0=x0, y0=y0, acq_func='EI', n_calls=15, random_state=3, n_jobs=-1, callback=[callbacks.CheckpointSaver(checkpoint)])
except FileNotFoundError:
print("Testing whether we got here")
search_results = gp_minimize(search_space, dimensions, acq_func='EI', n_calls=15, random_state=3, n_jobs=-1, callback=[callbacks.CheckpointSaver(checkpoint)])
````
Now, my model hits OOM every now and then and I'm using the checkpoint.pkl to restart the Bayesian optimization. Unfortunately, the checkpointing mechanism doesn't work properly. While gp_minimize initializes fine and starts working on the optimization problem, it goes through the same hyperparameters that I've already used, meaning that it's wasting resources. Instead, I think it should evaluate a new set of hyperparameters. Here's the output from the print function in the above code on the third try of finding the estimated hyperparameters:
```
[[1, 0.08399650488680278, 'Adagrad'],
[3, 0.043711823614689324, 'Adagrad'],
[1, 0.024796350839631304, 'Adagrad'],
[1, 0.08399650488680278, 'Adagrad'],
[3, 0.043711823614689324, 'Adagrad'],
[1, 0.024796350839631304, 'Adagrad']]
```
As you can see, the hyperparameters used in the first and the second (after loading checkpoint) attempts are identical and the checkpointing has not taken effect. | 0easy
|
Title: Custom Strategy Based on My existing TV indicator
Body: Hi,
First of all thanks for creating a great library, this is a game changer.
I have been predominantly using TradingView; I have built many indicators on TV and use them in my daily trading. You can consider them proprietary algos, which are combinations of basic building blocks, all of which are already present in the pandas-ta lib.
So my indicator uses a combination of Previous Day High/Low values, Premarket High/Low values, EMAs, candle colors, volume pressure, ADR/ATR, curve, pivot points, etc.
Combined, these result in a buy/sell signal along with take-profit and exit signals.
I have also been able to create most of it in my Python code using this lib as individual columns in the DF.
What I don't understand is how to now transfer this piece of logic into the Strategy class to take advantage of the watch function etc.
Is there an example where I can pass in a function which creates a result column in the DF?
Regards,
Idris | 0easy
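A minimal sketch of one way to derive a custom result column from pandas-ta building blocks; the column names and the signal rule below are placeholders, not part of pandas-ta itself:
```python
# Sketch only: compute building-block indicators with pandas-ta, then derive a
# custom signal column from them. The rule below is a placeholder.
import pandas as pd
import pandas_ta as ta

df = pd.DataFrame({"close": [10, 11, 12, 11, 13, 14, 13, 15, 16, 15]})

df.ta.ema(length=3, append=True)   # adds column EMA_3
df.ta.ema(length=5, append=True)   # adds column EMA_5

def my_signal(frame: pd.DataFrame) -> pd.Series:
    # Placeholder rule: fast EMA above slow EMA means "buy" (1), else 0.
    return (frame["EMA_3"] > frame["EMA_5"]).astype(int)

df["signal"] = my_signal(df)
print(df.tail())
```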
|
Title: Conversation mode will create a new thread within channel
Body: **Is your feature request related to a problem? Please describe.**
Yes, `!g converse` mode is not working as I expected. Having multiple people run multiple conversations in a single room gets a bit confusing.
**Describe the solution you'd like**
I would appreciate it if you could create a new thread for each conversation started by a user, so that everyone who joins the thread can take part in the discussion, just like sharing a single ChatGPT session in the room.
| 0easy
|
Title: receiving a key error when using depends
Body: ## Issue
receiving a key error when using `depends`
## Environment
Provide at least:
- OS: windows
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
Package Version Editable project location
--------------------- ------- ----------------------------------------
cachetools 5.5.2
chardet 5.2.0
click 8.1.8
colorama 0.4.6
distlib 0.3.9
filelock 3.17.0
iniconfig 2.0.0
nodeenv 1.9.1
nodejs-wheel-binaries 22.14.0
packaging 24.2
platformdirs 4.3.6
pluggy 1.5.0
pyproject-api 1.9.0
pyright 1.1.393
pytest 8.3.5
pytest-mock 3.14.0
ruff 0.9.6
scapy 2.6.1
tox 4.24.1
typing-extensions 4.12.2
virtualenv 20.29.2
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
```console
.pkg: 540 I find interpreter for spec PythonSpec(major=3, minor=11, free_threaded=False) [virtualenv\discovery\builtin.py:76]
.pkg: 564 D got python info of %s from (WindowsPath('C:/Users/<redacted>/AppData/Roaming/uv/python/cpython-3.11.6-windows-x86_64-none/python.exe'), WindowsPath('C:/Users/<redacted>/AppData/Local/pypa/virtualenv/py_info/2/9c37dba83290a6bc7ba9bf3e32702404ded634937d38bd1937b6ad242b6786b0.json')) [virtualenv\app_data\via_disk_folder.py:133]
.pkg: 566 D filesystem is not case-sensitive [virtualenv\info.py:26]
.pkg: 567 I proposed PythonInfo(spec=CPython3.11.6.final.0-64, system=C:\Users\<redacted>\AppData\Roaming\uv\python\cpython-3.11.6-windows-x86_64-none\python.exe, exe=C:\path\to\\.venv\Scripts\python.exe, platform=win32, version='3.11.6 (main, Oct 2 2023, 23:36:41) [MSC v.1929 64 bit (AMD64)]', encoding_fs_io=utf-8-cp1252) [virtualenv\discovery\builtin.py:83]
.pkg: 567 D accepted PythonInfo(spec=CPython3.11.6.final.0-64, system=C:\Users\<redacted>\AppData\Roaming\uv\python\cpython-3.11.6-windows-x86_64-none\python.exe, exe=C:\path\to\\.venv\Scripts\python.exe, platform=win32, version='3.11.6 (main, Oct 2 2023, 23:36:41) [MSC v.1929 64 bit (AMD64)]', encoding_fs_io=utf-8-cp1252) [virtualenv\discovery\builtin.py:85]
.pkg: 571 D symlink on filesystem does work [virtualenv\info.py:45]
.pkg: 658 I find interpreter for spec PythonSpec(path=C:\path\to\\.venv\Scripts\python.exe) [virtualenv\discovery\builtin.py:76]
.pkg: 659 I proposed PythonInfo(spec=CPython3.11.6.final.0-64, system=C:\Users\<redacted>\AppData\Roaming\uv\python\cpython-3.11.6-windows-x86_64-none\python.exe, exe=C:\path\to\\.venv\Scripts\python.exe, platform=win32, version='3.11.6 (main, Oct 2 2023, 23:36:41) [MSC v.1929 64 bit (AMD64)]', encoding_fs_io=utf-8-cp1252) [virtualenv\discovery\builtin.py:83]
.pkg: 659 D accepted PythonInfo(spec=CPython3.11.6.final.0-64, system=C:\Users\<redacted>\AppData\Roaming\uv\python\cpython-3.11.6-windows-x86_64-none\python.exe, exe=C:\path\to\\.venv\Scripts\python.exe, platform=win32, version='3.11.6 (main, Oct 2 2023, 23:36:41) [MSC v.1929 64 bit (AMD64)]', encoding_fs_io=utf-8-cp1252) [virtualenv\discovery\builtin.py:85]
.pkg: 669 I find interpreter for spec PythonSpec(major=3, minor=12, free_threaded=False) [virtualenv\discovery\builtin.py:76]
.pkg: 669 I proposed PythonInfo(spec=CPython3.11.6.final.0-64, system=C:\Users\<redacted>\AppData\Roaming\uv\python\cpython-3.11.6-windows-x86_64-none\python.exe, exe=C:\path\to\\.venv\Scripts\python.exe, platform=win32, version='3.11.6 (main, Oct 2 2023, 23:36:41) [MSC v.1929 64 bit (AMD64)]', encoding_fs_io=utf-8-cp1252) [virtualenv\discovery\builtin.py:83]
.pkg: 694 D got python info of %s from (WindowsPath('C:/Users/<redacted>/AppData/Local/Programs/Python/Python312/python.exe'), WindowsPath('C:/Users/<redacted>/AppData/Local/pypa/virtualenv/py_info/2/dd6c737d4978187c6f29e7d126668861b98bda13ce88e41754904974690814f2.json')) [virtualenv\app_data\via_disk_folder.py:133]
.pkg: 695 I proposed Pep514PythonInfo(spec=CPython3.12.3.final.0-64, exe=C:\Users\<redacted>\AppData\Local\Programs\Python\Python312\python.exe, platform=win32, version='3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]', encoding_fs_io=utf-8-utf-8) [virtualenv\discovery\builtin.py:83]
.pkg: 695 D accepted Pep514PythonInfo(spec=CPython3.12.3.final.0-64, exe=C:\Users\<redacted>\AppData\Local\Programs\Python\Python312\python.exe, platform=win32, version='3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]', encoding_fs_io=utf-8-utf-8) [virtualenv\discovery\builtin.py:85]
.pkg: 707 I find interpreter for spec PythonSpec(major=3, minor=13, free_threaded=False) [virtualenv\discovery\builtin.py:76]
.pkg: 708 I proposed PythonInfo(spec=CPython3.11.6.final.0-64, system=C:\Users\<redacted>\AppData\Roaming\uv\python\cpython-3.11.6-windows-x86_64-none\python.exe, exe=C:\path\to\\.venv\Scripts\python.exe, platform=win32, version='3.11.6 (main, Oct 2 2023, 23:36:41) [MSC v.1929 64 bit (AMD64)]', encoding_fs_io=utf-8-cp1252) [virtualenv\discovery\builtin.py:83]
.pkg: 709 D discover PATH[0]=C:\path\to\\.venv\Scripts [virtualenv\discovery\builtin.py:152]
.pkg: 709 D discover PATH[1]=C:\Program Files\PowerShell\7 [virtualenv\discovery\builtin.py:152]
.pkg: 711 D discover PATH[2]=C:\Program Files\Eclipse Adoptium\jdk-17.0.11.9-hotspot\bin [virtualenv\discovery\builtin.py:152]
.pkg: 711 D discover PATH[3]=C:\WINDOWS\system32 [virtualenv\discovery\builtin.py:152]
.pkg: 719 D discover PATH[4]=C:\WINDOWS [virtualenv\discovery\builtin.py:152]
.pkg: 719 D discover PATH[5]=C:\WINDOWS\System32\Wbem [virtualenv\discovery\builtin.py:152]
.pkg: 721 D discover PATH[6]=C:\WINDOWS\System32\WindowsPowerShell\v1.0 [virtualenv\discovery\builtin.py:152]
.pkg: 721 D discover PATH[7]=C:\WINDOWS\System32\OpenSSH [virtualenv\discovery\builtin.py:152]
.pkg: 721 D discover PATH[8]=C:\Program Files\ServiceNow [virtualenv\discovery\builtin.py:152]
.pkg: 721 D discover PATH[9]=C:\Program Files (x86)\Plantronics\Spokes3G [virtualenv\discovery\builtin.py:152]
.pkg: 721 D discover PATH[10]=C:\Program Files\PuTTY [virtualenv\discovery\builtin.py:152]
.pkg: 721 D discover PATH[11]=C:\Program Files\dotnet [virtualenv\discovery\builtin.py:152]
.pkg: 721 D discover PATH[12]=C:\MT-DS [virtualenv\discovery\builtin.py:152]
.pkg: 723 D discover PATH[13]=C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL [virtualenv\discovery\builtin.py:152]
.pkg: 724 D discover PATH[14]=C:\Program Files\Intel\Intel(R) Management Engine Components\DAL [virtualenv\discovery\builtin.py:152]
.pkg: 725 D discover PATH[15]=C:\Program Files\GitHub CLI [virtualenv\discovery\builtin.py:152]
.pkg: 725 D discover PATH[16]=C:\Program Files\Git\cmd [virtualenv\discovery\builtin.py:152]
.pkg: 725 D discover PATH[17]=C:\Program Files\PowerShell\7 [virtualenv\discovery\builtin.py:152]
.pkg: 728 D discover PATH[18]=C:\Users\<redacted>\.cargo\bin [virtualenv\discovery\builtin.py:152]
.pkg: 728 D discover PATH[19]=C:\Users\<redacted>\AppData\Local\Programs\Python\Launcher [virtualenv\discovery\builtin.py:152]
.pkg: 728 D discover PATH[20]=C:\Users\<redacted>\AppData\Local\Microsoft\WindowsApps [virtualenv\discovery\builtin.py:152]
.pkg: 731 D get interpreter info via cmd: 'C:\Users\<redacted>\AppData\Local\Microsoft\WindowsApps\python.exe' 'C:\path\to\\.venv\Lib\site-packages\virtualenv\discovery\py_info.py' q2fG6xERK5ePmizbgJqnKjWDDqp6FyaH QbJ4CRHth6W167Gy3uWk9LT1Ug0lY6Tf [virtualenv\discovery\cached_py_info.py:117]
.pkg: 1030 I failed to query C:\Users\<redacted>\AppData\Local\Microsoft\WindowsApps\python.exe with code 9009 err: 'Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Apps > Advanced app settings > App execution aliases.\n' [virtualenv\discovery\cached_py_info.py:35]
.pkg: 1037 D get interpreter info via cmd: 'C:\Users\<redacted>\AppData\Local\Microsoft\WindowsApps\python3.exe' 'C:\path\to\\.venv\Lib\site-packages\virtualenv\discovery\py_info.py' Vrz4LiHDmryaR4nOMVtDii56mDFbhbDB J1owRm9LjChO8MmlcEosTHlbj4Ms7Kiw [virtualenv\discovery\cached_py_info.py:117]
.pkg: 1315 I failed to query C:\Users\<redacted>\AppData\Local\Microsoft\WindowsApps\python3.exe with code 9009 err: 'Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Apps > Advanced app settings > App execution aliases.\n' [virtualenv\discovery\cached_py_info.py:35]
.pkg: 1317 D discover PATH[21]=C:\Users\<redacted>\AppData\Local\JetBrains\Toolbox\scripts [virtualenv\discovery\builtin.py:152]
.pkg: 1318 D discover PATH[22]=C:\Program Files\Graphviz\bin [virtualenv\discovery\builtin.py:152]
.pkg: 1318 D discover PATH[23]=C:\Users\<redacted>\AppData\Local\Programs\VSCodium\bin [virtualenv\discovery\builtin.py:152]
.pkg: 1320 D discover PATH[24]=C:\Users\<redacted>\AppData\Local\Microsoft\WinGet\Links [virtualenv\discovery\builtin.py:152]
.pkg: 1320 D discover PATH[25]=C:\Users\<redacted>\AppData\Local\Microsoft\WinGet\Packages\Gyan.FFmpeg_Microsoft.Winget.Source_8wekyb3d8bbwe\ffmpeg-6.0-full_build\bin [virtualenv\discovery\builtin.py:152]
.pkg: 1321 D discover PATH[26]=C:\Program Files\Git\usr\bin [virtualenv\discovery\builtin.py:152]
.pkg: 1323 D discover PATH[27]=C:\<redacted>\Apps\protoc-27.2-win64\bin [virtualenv\discovery\builtin.py:152]
.pkg: 1324 D discover PATH[28]=C:\Users\<redacted>\.local\bin [virtualenv\discovery\builtin.py:152]
ROOT: 1333 E Internal Error [tox\session\cmd\run\common.py:304]
Traceback (most recent call last):
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\run\common.py", line 293, in _queue_and_wait
env_list = next(envs_to_run_generator, [])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\run\common.py", line 348, in ready_to_run_envs
order, todo = run_order(state, to_run)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\run\common.py", line 368, in run_order
order = stable_topological_sort(todo)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\util\graph.py", line 46, in stable_topological_sort
identify_cycle(graph)
File "C:\path\to\\.venv\Lib\site-packages\tox\util\graph.py", line 71, in identify_cycle
raise ValueError(msg)
ValueError: lint
Exception in thread tox-interrupt:
Traceback (most recent call last):
File "C:\Users\<redacted>\AppData\Roaming\uv\python\cpython-3.11.6-windows-x86_64-none\Lib\threading.py", line 1045, in _bootstrap_inner
self.run()
File "C:\Users\<redacted>\AppData\Roaming\uv\python\cpython-3.11.6-windows-x86_64-none\Lib\threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\run\common.py", line 293, in _queue_and_wait
env_list = next(envs_to_run_generator, [])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\run\common.py", line 348, in ready_to_run_envs
order, todo = run_order(state, to_run)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\run\common.py", line 368, in run_order
order = stable_topological_sort(todo)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\util\graph.py", line 46, in stable_topological_sort
identify_cycle(graph)
File "C:\path\to\\.venv\Lib\site-packages\tox\util\graph.py", line 71, in identify_cycle
raise ValueError(msg)
ValueError: lint
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\path\to\\.venv\Scripts\tox.exe\__main__.py", line 10, in <module>
File "C:\path\to\\.venv\Lib\site-packages\tox\run.py", line 20, in run
result = main(sys.argv[1:] if args is None else args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\run.py", line 46, in main
return handler(state)
^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\legacy.py", line 115, in legacy
return run_sequential(state)
^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\run\sequential.py", line 25, in run_sequential
return execute(state, max_workers=1, has_spinner=False, live=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\run\common.py", line 203, in execute
ordered_results: list[ToxEnvRunResult] = [name_to_run[env] for env in to_run_list]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\path\to\\.venv\Lib\site-packages\tox\session\cmd\run\common.py", line 203, in <listcomp>
ordered_results: list[ToxEnvRunResult] = [name_to_run[env] for env in to_run_list]
~~~~~~~~~~~^^^^^
KeyError: 'lint'
```
</details>
## Minimal example
```toml
[tool.tox]
env_list = ["lint", "py311", "py312", "py313"]
[tool.tox.env.lint]
skip_install = true
commands = [
["ruff", "format"],
["ruff", "check", "--fix", "--show-fixes"],
["pyright"],
]
dependency_groups = ["dev"]
[tool.tox.env_run_base]
depends = ["lint"]
package = "wheel"
commands = [["pytest"]]
dependency_groups = ["test"]
```
| 0easy
|
Title: `decomposition.pca_reconstruction.PCAOutlierDetection`
Body: | 0easy
|
Title: Link to instructions for setting up the exchange on the installation page
Body: It's not obvious from the installation page that there are manual steps needed to setup the exchange, so we should link to this documentation from that page.
See #820
cc @Yenthe666 | 0easy
|
Title: inject cell error after pairing notebooks
Body: When a script is in a subdirectory (say `scripts/fit.py`) and we pair notebooks in a certain folder (say `ploomber nb --pair notebooks`), the notebooks are created relative to the scripts (`scripts/notebooks/fit.ipynb`), and the metadata is stored using a relative path (`notebooks`).
However, if we run the `ploomber nb --inject` command, ploomber will read the relative path in the metadata (`notebooks`) and fail, since it won't find the notebooks (which are stored in `scripts/notebooks`).
The solution is to read the paired notebooks relative to the scripts.
To reproduce:
```
ploomber examples -n templates/ml-basic -o ml
# move fit.py to scripts/fit.py and update the pipeline.yaml
ploomber nb --pair notebooks
ploomber nb --inject
```
| 0easy
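A rough sketch of the proposed fix, resolving the paired-notebook directory relative to the script's location; the function and variable names are illustrative, not ploomber's actual internals:
```python
# Sketch only: the pairing root ("notebooks") is stored relative to the
# script, so resolve it against the script's parent before looking it up.
from pathlib import Path

def resolve_paired_notebook(script_path: str, paired_root: str) -> Path:
    script = Path(script_path)
    # e.g. scripts/fit.py + "notebooks" -> scripts/notebooks/fit.ipynb
    return script.parent / paired_root / script.with_suffix(".ipynb").name

print(resolve_paired_notebook("scripts/fit.py", "notebooks"))
```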
|
Title: Volatility indicator returns 0 values
Body:
**Describe the bug**
I tried to use the volatility indicator but it just returns 0 values. I checked my code if the close column has no value but it has and it's in datatype float so I don't understand why the indicator is giving me 0.0 values.
**To Reproduce**
I implemented it like this: `new_d['volatility'] = ta.volatility(new_d['close'], tf='years')`
**Expected behavior**
this is what I expect to return
https://imgur.com/lopMKFp
**Screenshots**
If applicable, add screenshots to help explain your problem.
But All I got was this
https://imgur.com/hnAuyrT
https://imgur.com/OBeBLSL
https://imgur.com/ghGb3ea
Thanks in advance for addressing my issue | 0easy
|
Title: woe: re-write formula to ln( P(0) / P(1) )
Body: In the current implementation I think I have P(1) / P(0); the ratio should be inverted to agree with the most widespread documentation. | 0easy
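A minimal sketch of the proposed formula, assuming a binary target encoded as 0/1; the function and column handling are placeholders, not the library's implementation:
```python
# Sketch only: weight of evidence per category as ln(P(x | y=0) / P(x | y=1)).
import numpy as np
import pandas as pd

def weight_of_evidence(x: pd.Series, y: pd.Series, eps: float = 1e-9) -> pd.Series:
    # Distribution of each category among negatives (y == 0) and positives (y == 1).
    p0 = x[y == 0].value_counts(normalize=True)
    p1 = x[y == 1].value_counts(normalize=True)
    categories = x.unique()
    p0 = p0.reindex(categories, fill_value=0) + eps
    p1 = p1.reindex(categories, fill_value=0) + eps
    return np.log(p0 / p1)

x = pd.Series(["a", "a", "b", "b", "b", "c"])
y = pd.Series([0, 1, 0, 0, 1, 1])
print(weight_of_evidence(x, y))
```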
|
Title: It fails when using all severity types in the "--allure-severities" option
Body: [//]: # (
. Note: for support questions, please use Stackoverflow or Gitter**.
. This repository's issues are reserved for feature requests and bug reports.
.
. In case of any problems with Allure Jenkins plugin** please use the following repository
. to create an issue: https://github.com/jenkinsci/allure-plugin/issues
.
. Make sure you have a clear name for your issue. The name should start with a capital
. letter and no dot is required in the end of the sentence. An example of good issue names:
.
. - The report is broken in IE11
. - Add an ability to disable default plugins
. - Support emoji in test descriptions
)
#### I'm submitting a ...
- [ ] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
#### What is the expected behavior?
#### What is the motivation / use case for changing the behavior?
#### Please tell us about your environment:
- Allure version: 2.1.0
- Test framework: [email protected]
- Allure adaptor: [email protected]
#### Other information
| 0easy
|
Title: KVO formula looks a bit incorrect
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
Version: 0.3.2b0
```
**Describe the bug**
The formula used in kvo doesn't match the general formula. For example, on Investopedia (https://www.investopedia.com/terms/k/klingeroscillator.asp) and other sources, the formula for VF is VF = V×[2×((dm/cm)−1)]×T×100, but you have used (2 * dm / cm - 1) in line 35 of kvo.py, which should be (2 * (dm / cm - 1)) to give correct values.
**To Reproduce**
just run kvo.py
Thank you
| 0easy
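A quick illustration of why the two expressions differ (operator precedence), using made-up numbers rather than real kvo.py data:
```python
# Sketch only: 2 * dm / cm - 1 is (2*dm)/cm - 1, not 2 * ((dm/cm) - 1).
dm, cm = 3.0, 4.0

current = 2 * dm / cm - 1      # evaluates to 0.5
proposed = 2 * (dm / cm - 1)   # evaluates to -0.5, matching 2*((dm/cm) - 1)

print(current, proposed)
```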
|
Title: Feature: Transformed captured output `$func()`
Body: After merging #5377 I remembered an old proposal. I don't remember the author, but it's an interesting idea.
It would be cool to have the ability to create a transformer of captured output:
```xsh
json = $json(curl https://myjsonapi) # python mode, returns json object
def mytransformer(stdout):
# my transformations
return stdout
my = $mytransformer(echo 123) # python mode
echo $mytransformer(echo 123) | grep 1 # subprocess mode
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Ability to disable zoom on scroll
Body: By default gmaps uses scrolling to zoom. On a small device or screen this can be quite annoying. It would be good to have an option to disable this. | 0easy
|
Title: Nicer API for setting keyword call arguments programmatically
Body: `robot.running.Keyword.args` contains arguments used in a keyword call as a list. Arguments originating from normal Robot Framework data are always strings and they are in the exact same format as in the data. This means that arguments can contain variables and escape characters, and that named arguments are represented using the `name=value` syntax.
If arguments are modified programmatically, it is possible to use also other objects than strings. This doesn't, however, work with named arguments because in the `name=value` syntax the value is always a string. Automatic argument conversion handles conversion in common cases, but especially with more complex objects being able to use them directly would be convenient. Another pretty common and unexpected annoyance is that arguments that are strings need to follow the same escaping rules as normal data. Most importantly, `\` needs to be doubled and, because it's an escape character also in Python, we need to use data like `'c:\\\\temp\\\\new'` or `r'c:\\temp\\new'`.
I propose we enhance setting arguments as follows:
1. Support specifying named arguments as two-item tuples like `('name', 'value')` to allow using also non-strings as values. To avoid ambiguity with arguments possibly containing a literal `=`, we should also support positional arguments as one-item tuples like `('value',)`. In this usage we should still resolve variables, which requires users handling escaping themselves.
Example: `[('positional',), ('name', 'value'), ('path', r'c:\\temp\\new')]`
We support tuples like this also with the dynamic library API with arguments having default values. The approach thus has precedence and it has worked well.
2. Support giving arguments directly as a list of positional arguments and a dictionary of named arguments. In this usage we should use arguments directly without handling escapes. That then means that variables aren't resolved either, but automatic argument conversion and validation will still be done.
Example: `[['positional'], {'name': 'value', 'path': 'c:\\temp\\new'}]`
The above is pretty easy to implement. During execution we just need to handle different ways arguments can be specified when resolving arguments. No code changes are needed with `robot.running.Keyword`, but this new functionality needs to be documented and typing needs to be set accordingly. We also need to enhance `robot.result.Keyword`. With it it's better to always store arguments as strings, which means that we need to convert other arguments when they are set.
It's a bit questionable whether this is a good idea this late in the RF 7.0 release cycle, as we already have a release candidate out, but this makes some usages of the new `start/end_keyword` listener v3 methods (#3296) so much more convenient that we decided to still implement it. The changes are pretty small and ought to be safe, but big enough to warrant a second release candidate. | 0easy
|
Title: Weaviate: Simplify auth process
Body: We should simplify how users set up weaviate with docarray by doing the following:
1. Support embedded Weaviate (no auth)
2. Support weaviate with docker compose (no auth)
3. Support weaviate with WCS (auth with API Key) | 0easy
|
Title: feature request: GMM Naive Bayes
Body: nuff said. | 0easy
|
Title: `trace` without arguments raises exception
Body: ```xsh
trace
Exception in {'cls': 'ProcProxy', 'name': None, 'func': 'trace', 'alias': 'trace', 'pid': 68079}
xonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>
Traceback (most recent call last):
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/procs/proxies.py", line 842, in wait
r = self.f(self.args, stdin, stdout, stderr, spec, spec.stack)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/procs/proxies.py", line 319, in proxy_five
return f(args, stdin, stdout, stderr, spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/aliases.py", line 786, in trace
return tracermain(args, stdin=stdin, stdout=stdout, stderr=stderr, spec=spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/tracer.py", line 231, in __call__
return super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/cli_utils.py", line 649, in __call__
result = dispatch(
^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/cli_utils.py", line 413, in dispatch
func = ns[_FUNC_NAME]
~~^^^^^^^^^^^^
KeyError: '_func_'
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Better error message when a DAG contains cycles
Body: By definition, a DAG can't contain cycles but users might inadvertently create them.
When a user creates a cycle, topological sorting doesn't make sense, then networkx throws an error.
Topological sorting happens here: https://github.com/ploomber/ploomber/blob/51f432c565361b294080190b1f2aba245e21160f/src/ploomber/dag/DAG.py#L889
It'd be better to catch that exception and raise another more informative error like "failed to process DAG, it contains cycles"
Add this test to test_dag.py https://github.com/ploomber/ploomber/blob/master/tests/dag/test_dag.py
To test it, add a dag with a cycle e.g.,:
```
task_a >> task_b >> task_a
```
Catch the exception to test the error message with pytest raises: https://docs.pytest.org/en/6.2.x/assert.html | 0easy
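A rough sketch of what such a test might look like in test_dag.py; the task helper, the exact exception type, and the message matched are all placeholders, since the improved error isn't implemented yet:
```python
# Sketch only: build a two-task cycle and assert that rendering fails with an
# informative message. Exception type and message are placeholders.
import pytest
from ploomber import DAG
from ploomber.products import File
from ploomber.tasks import PythonCallable

def _touch(product, upstream=None):
    open(str(product), "w").close()

def test_error_message_when_dag_has_cycle():
    dag = DAG()
    task_a = PythonCallable(_touch, File("a.txt"), dag, name="a")
    task_b = PythonCallable(_touch, File("b.txt"), dag, name="b")
    task_a >> task_b >> task_a  # creates the cycle

    with pytest.raises(Exception, match="cycle"):
        dag.render()
```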
|
Title: Support Oracle 23ai BOOLEAN datatype
Body: ### Describe the bug
Body: The latest Oracle DB release as of 7/22/24 - DB23AI - supports native "BOOLEAN" types, whereas the type used to be captured by a SMALLINT. As a result, the documentation and behavior around the CamelCase type "Boolean" is now out of date for Oracle DB:
https://support.oracle.com/knowledge/Oracle%20Database%20Products/3002488_1.html
"""
Unlike [String](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.String), which represents a string datatype that all databases have, not every backend has a real “boolean” datatype; some make use of integers or BIT values 0 and 1, some have boolean literal constants true and false while others dont. For this datatype, [Boolean](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Boolean) may render BOOLEAN on a backend such as PostgreSQL, BIT on the MySQL backend and _SMALLINT on Oracle_
"""
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.31
### DBAPI (i.e. the database driver)
cx_oracle
### Database Vendor and Major Version
OracleDB 23
### Python Version
3.10
### Operating system
Linux
### To Reproduce
```python
# create table w/ sqlalchemy
op.create_table('test', sa.Column('col1', sa.Boolean()))
# query table in oracle
select col1 from test;
COL1
----
0
# create table using Boolean in oracle just fine
create table test2 (col2 boolean);
insert into test2 values (TRUE);
Col2
----
TRUE
```
### Error
The result of the behavior mismatch is that we get some nasty error when trying to run queries using the `.is_` flag - `.is_(False)` works just fine on Postgres, and then fails when using Oracle. The fix, for now, I believe, is to use the BOOLEAN type instead of Boolean.
### Additional context
Oracle 23 only just came out recently, so I don't expect this to be done right away. See also: https://github.com/sqlalchemy/sqlalchemy/issues/10375 | 0easy
|
Title: Make challenge submission attempt rate limiting configurable
Body: Sometimes people end up hitting this rate limit too soon so we should make it configurable in the config panel.
https://github.com/CTFd/CTFd/blob/7fc05bd4e3bf18606871a0ba6ad11f70e2be77e0/CTFd/api/v1/challenges.py#L624-L648
Instead of a hard limit of 10 we should have an option to set the number and perhaps disable it. | 0easy
|
Title: "Publish Agent" window should sort Agents in the same order as the Library
Body: [https://github.com/Significant-Gravitas/AutoGPT/issues/9141](https://github.com/Significant-Gravitas/AutoGPT/issues/9141) is going to fix the order of Agents in the library, but the "Publish Agent" window also shows Agents in a different, random order. This window should instead also sort Agents based on the most recently edited Agent.
Ideally we reuse the same code for both of these. | 0easy
|
Title: Use another augmented assignment statement
Body: ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
:eyes: Some source code analysis tools can help to find opportunities for improving software components.
:thought_balloon: I propose to [increase the usage of augmented assignment statements](https://docs.python.org/3/reference/simple_stmts.html#augmented-assignment-statements "Augmented assignment statements") accordingly.
### Describe the solution you'd like.
```diff
diff --git a/examples/martingale.py b/examples/martingale.py
index dcbcf7a..f4ffcfa 100644
--- a/examples/martingale.py
+++ b/examples/martingale.py
@@ -176,7 +176,7 @@ class MartingaleTrader(object):
target_value = total_buying_power - self.last_price
target_qty = int(target_value / self.last_price)
if self.streak_increasing:
- target_qty = target_qty * -1
+ target_qty *= -1
self.send_order(target_qty)
# Update our account balance
self.equity = float(self.api.get_account().equity)
```
### Describe an alternate solution.
_No response_
### Anything else? (Additional Context)
_No response_ | 0easy
|
Title: Add a name on every route
Body: As discussed in #761. | 0easy
|
Title: Add `--no-env` argument
Body: xonsh [has `--no-rc`](https://github.com/xonsh/xonsh/blob/6fa83f830579d2bad39b1842911951f55a9da22c/xonsh/main.py#L180-L186) to avoid loading RC files. But if we want to run completely clean xonsh we need to avoid inheritance environment.
For example, I have sometimes faced issues when running xonsh from xonsh: `$XONSH_HISTORY_FILE` goes to the new xonsh instance and locks the file on the file system. I believe that similar logic could produce issues in other use cases as well. For example, we had issues around multiple running `xonsh -c ...` instances because the history backend uses the same file (fixed in #4178).
I suggest adding a `--no-env` argument, in addition to [`--no-rc`](https://github.com/xonsh/xonsh/blob/6fa83f830579d2bad39b1842911951f55a9da22c/xonsh/main.py#L180-L186), that will run xonsh with `default_env` only:
https://github.com/xonsh/xonsh/blob/6fa83f830579d2bad39b1842911951f55a9da22c/xonsh/environ.py#L2499-L2500
So to get clean xonsh instance we can run `xonsh --no-rc --no-env`.
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Add Python 3.12 to testing harness
Body: Since Python 3.12 has been out for a while, it's reasonable to expect some users are using it. Add it to the testing matrix. | 0easy
|
Title: Helm Chart / Kubernetes Manifests
Body: **Describe the feature you'd like**
I would like to have pre-configured Kubernetes resources
**Describe alternatives you've considered**
Create my own YAMLs | 0easy
|
Title: Collections: `Get From Dictionary` should accept a default value
Body: [Pop From Dictionary](https://robotframework.org/robotframework/latest/libraries/Collections.html#Pop%20From%20Dictionary) has support for returning a default value if the key isn't found in the dictionary. e.g.
```robot
Pop From Dictionary ${myDict} myKey myDefaultValue
```
I think it'd be great if `Get From Dictionary` also support having a default value passed to it. 🙂
e.g.
```robot
Get From Dictionary ${myDict} myKey myDefaultValue
```
| 0easy
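A minimal sketch (in plain Python, not the actual Collections source) of how the keyword could accept an optional default while keeping the current failure behaviour when no default is given:
```python
# Sketch only: mirrors dict.get semantics but still fails when no default is
# provided, which is how the keyword behaves today.
NO_DEFAULT = object()

def get_from_dictionary(dictionary, key, default=NO_DEFAULT):
    try:
        return dictionary[key]
    except KeyError:
        if default is NO_DEFAULT:
            raise RuntimeError(f"Dictionary does not contain key '{key}'.")
        return default

print(get_from_dictionary({"a": 1}, "a"))                    # 1
print(get_from_dictionary({"a": 1}, "b", "myDefaultValue"))  # myDefaultValue
```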
|
Title: Issue while running the colab notebook on ploomber examples
Body: ```
File "/usr/local/lib/python3.7/dist-packages/ploomber/__init__.py", line 2, in <module>
from ploomber.dag.dag import DAG
File "/usr/local/lib/python3.7/dist-packages/ploomber/dag/dag.py", line 93, in <module>
from ploomber.dag.dagclients import DAGClients
File "/usr/local/lib/python3.7/dist-packages/ploomber/dag/dagclients.py", line 4, in <module>
from ploomber.tasks.abc import Task
File "/usr/local/lib/python3.7/dist-packages/ploomber/tasks/__init__.py", line 6, in <module>
from ploomber.tasks.notebook import NotebookRunner, ScriptRunner
File "/usr/local/lib/python3.7/dist-packages/ploomber/tasks/notebook.py", line 15, in <module>
from nbconvert.exporters.webpdf import WebPDFExporter
ModuleNotFoundError: No module named 'nbconvert.exporters.webpdf'
[Errno 2] No such file or directory: 'first-pipeline'
/content
total 4
drwxr-xr-x 1 root root 4096 May 3 13:42 sample_data
```
ploomber, version 0.18.1
| 0easy
|
Title: Allow filtering ingredients in the sync-ingredients command
Body: It would be useful to be able to filter ingredients by language when calling the `sync-ingredients` command. Something like `sync-ingredients --languages en,fr` | 0easy
|
Title: Add numpy doc strings
Body: Add numpy-style docstrings to all functions, apart from the main.py file.
<details>
<summary>Checklist</summary>
- [X] `gpt_engineer/ai.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
- [X] `gpt_engineer/chat_to_files.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
- [X] `gpt_engineer/collect.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
- [X] `gpt_engineer/db.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
- [X] `gpt_engineer/learning.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
</details>
| 0easy
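For reference, a small example of the numpy-style docstring format being asked for here; the function itself is just a placeholder:
```python
def add_numbers(a: int, b: int) -> int:
    """Add two integers.

    Parameters
    ----------
    a : int
        The first number.
    b : int
        The second number.

    Returns
    -------
    int
        The sum of ``a`` and ``b``.
    """
    return a + b
```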
|
Title: Implement a way to identify `CircuitOperations` (operations with loops)
Body: **Is your feature request related to a use case or problem? Please describe.**
I've been doing a lot of transformations on `Circuit`s containing `CircuitOperation`s. Sometimes I need to manipulate `CircuitOperation`s. The way I nominally identify them is using `isinstance(..., cirq.CircuitOperation)`. But it turns out this doesn't work if the `CircuitOperation` is contained in a `cirq.TaggedOperation`.
**Describe the solution you'd like**
Implement a function like `cirq.measurement_key_objs` that I can trust to always tell me if an operation is a `CircuitOperation` (or effectively one under the hood).
**[optional] Describe alternatives/workarounds you've considered**
Writing my own custom identifier that handles TaggedOperations. But am I missing something?
**[optional] Additional context (e.g. screenshots)**
**What is the urgency from your perspective for this issue? Is it blocking important work?**
P3 - I'm not really blocked by it, it is an idea I'd like to discuss / suggestion based on principle
| 0easy
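A sketch of the kind of workaround mentioned above, assuming the `untagged` property is the right way to unwrap tags; this is not an official cirq helper:
```python
# Sketch only: unwrap tags before checking the operation type.
import cirq

def is_circuit_operation(op: cirq.Operation) -> bool:
    # `untagged` returns the wrapped operation for a TaggedOperation and the
    # operation itself otherwise.
    return isinstance(op.untagged, cirq.CircuitOperation)

q = cirq.LineQubit.range(1)
sub = cirq.FrozenCircuit(cirq.X(q[0]))
op = cirq.CircuitOperation(sub).with_tags("my_tag")
print(is_circuit_operation(op))  # True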
|
Title: Double check links to docs in CTFd
Body: Since moving CTFd's docs to docusaurus, we may have broken some links in CTFd itself. We should double check that none of them are broken. | 0easy
|
Title: Render Plotly graphs in Sphinx documentation
Body: Currently the Sphinx docs (https://ridgeplot.readthedocs.io/) do not render live plotly graphs. Instead, they contain static screenshots for all available examples.
We could make use of [Sphinx-Gallery](https://github.com/sphinx-gallery/sphinx-gallery) to help us use real Plotly graphs. See [**this example**](https://sphinx-gallery.github.io/stable/auto_examples/plot_9_plotly.html) from their docs. | 0easy
|
Title: `start/end_body_item` listener v3 methods missing from documentation in User Guide
Body: These methods are called with keywords and control structures if more specific methods aren't defined. Implementing only them is the same as implementing `start/end_keyword` with listener v2. The differences how control structures are handled in different listener versions should also be documented better. | 0easy
|
Title: ValueError with 4-hour data, but not 1-hour data
Body: Using the most up-to-date versions of everything. As the title states, I'm getting a ValueError when I run my code with 4-hour data, but not with 1-hour data. I'm using CSVs. 1-hour data runs perfectly as expected. However, when I export data in exactly the same way, just changing the time interval to 4-hour, I get this warning: "FutureWarning: reindexing with a non-unique Index is deprecated and will raise in a future version", which I don't get when I run the code with the 1-hour data, and then I get this error: "ValueError: cannot reindex on an axis with duplicate labels". The error originates from `df.ta.macd(append=True)`; however, when I comment that line out, it just throws the same error on a new line, in my case `df.ta.msi(append=True)`.
Any ideas what's causing the error to occur with 4-hour data, but not 1-hour? | 0easy
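One thing that may be worth checking, since the ValueError mentions duplicate labels: whether the 4-hour CSV ends up with duplicate timestamps in its index. A quick check/workaround in plain pandas; the file and column names are assumptions:
```python
# Sketch only: find and drop duplicated index entries before running df.ta.*.
import pandas as pd

df = pd.read_csv("ohlcv_4h.csv", index_col="date", parse_dates=True)

dupes = df.index.duplicated()
print(f"{dupes.sum()} duplicated timestamps")

df = df[~dupes]  # keep the first occurrence of each timestamp
```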
|
Title: `add_marker` not working while plot is not active
Body: I am having this issue with the `add_marker()` method of `Signal1D` while working on Jupyter notebooks:
```python
import hyperspy.api as hs
import numpy as np
s = hs.signals.Signal1D(np.arange(500).reshape([5,10,10]))
marker_array = np.tile(np.arange(10), (5,1))
m = hs.plot.markers.vertical_line(x=marker_array)
s.add_marker(m)
```
The above code works perfectly fine if it is run in a single cell; the signal will be created and plotted with the markers in place. However if I split the code in different cells like below:
```python
# cell no. 1
import hyperspy.api as hs
import numpy as np
s = hs.signals.Signal1D(np.arange(500).reshape([5,10,10]))
#cell no. 2
marker_array = np.tile(np.arange(10), (5,1))
m = hs.plot.markers.vertical_line(x=marker_array)
s.add_marker(m)
```
and follow these steps:
1. Create the signal `s` by running cell no. 1.
2. Run cell no. 2 to create `marker_array` and marker object `m` and add to `s` using `add_marker()`.
3. Close the plot that appears.
4. Try to run cell no. 2 again.
I get this error:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-36-7302ff075bd5> in <module>
1 marker_array = np.tile(np.arange(10), (5,1))
2 m = hs.plot.markers.vertical_line(x=marker_array)
----> 3 s.add_marker(m)
c:\users\geri\documents\dev\hyperspy\hyperspy\signal.py in add_marker(self, marker, plot_on_signal, plot_marker, permanent, plot_signal, render_figure)
5791 self.plot()
5792 if m._plot_on_signal:
-> 5793 self._plot.signal_plot.add_marker(m)
5794 else:
5795 if self._plot.navigator_plot is None:
AttributeError: 'NoneType' object has no attribute 'add_marker'
```
Now if I run `s.plot()` and then run cell no. 2 again this error disappears and the markers are put on the plot, but it will always give me this error if the plot isn't active.
Any ideas? | 0easy
|
Title: Adding additional colorbar functionality
Body: Hello,
So I have never made edits to any package before, but I was frustrated by how few colorbar parameters could be changed in the original code, or at least from what I could figure out. Thus I decided to add a couple of extra lines to the radardisplay.py script under plot_colorbar:
```python
def plot_colorbar(self, label_size=22, ticklabel_size=22, fraction=0.046,
                  mappable=None, field=None, label=None, orient='vertical',
                  cax=None, ax=None, fig=None, ticks=None, ticklabs=None):
    """
    Plot a colorbar.

    Parameters
    ----------
    mappable : Image, ContourSet, etc.
        Image, ContourSet, etc. to which the colorbar is applied. If None the
        last mappable object will be used.
    field : str
        Field to label colorbar with.
    label_size : int
        Font size of the colorbar label. Default is 22. (added)
    ticklabel_size : int
        Font size for the tick labels on the colorbar. Default is 22. (added)
    fraction : float
        Fraction of original axes to use for colorbar. Default of 0.046 seems
        to work well from my testing. (added)
    label : str
        Colorbar label. None will use a default value from the last field
        plotted.
    orient : str
        Colorbar orientation, either 'vertical' [default] or 'horizontal'.
    cax : Axis
        Axis onto which the colorbar will be drawn. None is also valid.
    ax : Axes
        Axis onto which the colorbar will be drawn. None is also valid.
    fig : Figure
        Figure to place colorbar on. None will use the current figure.
    ticks : array
        Colorbar custom tick label locations.
    ticklabs : array
        Colorbar custom tick labels.
    """
    if fig is None:
        fig = plt.gcf()
    if mappable is None:
        mappable = self.plots[-1]
    if label is None:
        if field is None:
            field = self.plot_vars[-1]
        label = self._get_colorbar_label(field)
    # added: pass fraction through and set the tick label size
    cb = fig.colorbar(mappable, orientation=orient, ax=ax, cax=cax,
                      fraction=fraction)
    cb.ax.tick_params(labelsize=ticklabel_size)
    if ticks is not None:
        cb.set_ticks(ticks)
    if ticklabs:
        cb.set_ticklabels(ticklabs)
    # added: set the label font size
    cb.set_label(label, fontsize=label_size)
    self.cbs.append(cb)
```
I'm not sure how to contribute this to the source code or even if this is the best way to do it but it works well for what I needed. So I thought I'd share it here so maybe it could be added in the future. | 0easy
|
Title: Deprecate `compile_uri_template()`
Body: The `compile_uri_template()` function is not consistent with compiled router regarding tailing slash.
> I don't know either what the plan for `compile_uri_template()` is. I think it is left as a utility method for building custom routers, and I suppose, may continue to exist as such.
> This PR does a good job at explaining the _status quo_, so maybe let's just merge it, and create a new issue for making this method slightly more consistent with `CompiledRouter`.
_Originally posted by @vytas7 in https://github.com/falconry/falcon/pull/1961#pullrequestreview-777018325_
The options may be
- to leave it as is, with the warning provided by the above referenced pr
- add a flag to select what behaviour to have
- deprecate and eventually remove it, since it's not used internally by falcon. | 0easy
|
Title: Consecutive duplicate word check should be disabled for Toki Pona
Body: ### Describe the problem
Word repetition happens a lot in toki pona (as shown in examples below), and is very likely to be on purpose rather than on accident.
Examples where duplicate words happen:
- Numbers: toki pona only has 6 words for numbers (0, 1, 2, 5, 20, and 100), so in order to make larger numbers like 3 or 74, it just combines existing number words (3 is "tu wan", lit. "2 1"). Because of this, a lot of numbers end up having to duplicate words. Ex:
- 4 is "tu tu" (2 2)
- 11 is "luka luka wan" (5 5 1)
- any number 40-99 is "mute mute [stuff]" (20 20 stuff)
- Double meanings: toki pona has some words that have multiple meanings. A good example is "tawa", which means "to/towards" (preposition) and also "movement" (content word). So, a sentence like "o pana e sitelen tawa tawa mi", lit. "Give the 'moving image' (video, animation) to me", would trigger the check. Non-prepositions can have similar problems, although it's a bit less common
- Emphasis: You can reduplicate some words to strengthen their meaning. Ex:
- mama = parent, mama mama = grandparent
- mute = a lot, >3, or 20; mute mute = very very much, 40
With all of these, the check is way more likely to trigger when duplicate word use is intentional than when it's not. It's quite annoying to dismiss the check every time, and possibly nonsensical to put a separator (like a comma) in between the duplicated words
### Describe the solution you would like
Disabling the Consecutive Word Check for toki pona [tok]
### Describe alternatives you have considered
There could be a whitelist for some toki pona words, but this would exclude cases of emphasis, and would be harder than just disabling it for this language
### Screenshots
_No response_
### Additional context
_No response_ | 0easy
|
Title: updating a dropdown with `options=None` causes an exception
Body: see https://community.plot.ly/t/dash-core-components-callback-error/5655/5 for context | 0easy
|
Title: Dictionaries are not accepted as-is in argument conversion with unions containing TypedDicts
Body: When executing my library, arguments get cast to the first matching type instead of the actual type of the input. I believe this is to do with how Robot handles annotations. Rather than investigating too much into this, I have created an example to show the unexpected behaviour.
Please let me know if this is intentional.
Issues:
- arguments that do not match the type become `str`, see the `float` variable below
- arguments that match the first type are cast to that type, you can see a `dict` being cast to `str`. Order A fails, Order B works as expected
# Output from terminal
When executing the robot file:
```
robot TestSuite.robot
==============================================================================
TestSuite
==============================================================================
Foo .......expected | None | <class 'NoneType'>
order_a | None | <class 'NoneType'>
order_b | None | <class 'NoneType'>
expected | {'prop': 'bar'} | <class 'robot.utils.dotdict.DotDict'>
order_a | {'prop': 'bar'} | <class 'str'> ### This should not be str
order_b | {'prop': 'bar'} | <class 'robot.utils.dotdict.DotDict'>
expected | foo | <class 'str'>
order_a | foo | <class 'str'>
order_b | foo | <class 'str'>
expected | 4 | <class 'int'>
order_a | 4 | <class 'int'>
order_b | 4 | <class 'int'>
expected | 2.0 | <class 'float'>
order_a | 2.0 | <class 'str'> ### This should be float
order_b | 2.0 | <class 'str'> ### This should be float
expected | {'prop': 'bar'} | <class 'dict'>
order_a | {'prop': 'bar'} | <class 'str'> ### This should not be str
order_b | {'prop': 'bar'} | <class 'dict'>
Foo | PASS |
------------------------------------------------------------------------------
TestSuite | PASS |
1 test, 1 passed, 0 failed
==============================================================================
```
When executing the python file
```
python3 test.py
def types_order_a(self, var: str | MyDictType | None | int):
def types_order_b(self, var: None | MyDictType | str | int):
order_a | None | <class 'NoneType'>
order_b | None | <class 'NoneType'>
expected | None | <class 'NoneType'>
order_a | {'prop': 'bar'} | <class 'dict'>
order_b | {'prop': 'bar'} | <class 'dict'>
expected | {'prop': 'bar'} | <class 'dict'>
order_a | foo | <class 'str'>
order_b | foo | <class 'str'>
expected | foo | <class 'str'>
order_a | 4 | <class 'int'>
order_b | 4 | <class 'int'>
expected | 4 | <class 'int'>
order_a | 2.0 | <class 'float'>
order_b | 2.0 | <class 'float'>
expected | 2.0 | <class 'float'>
```
# Files
## Library: CustomLibrary.py
```py
from typing import TypedDict
from robot.api.logger import console
class MyDictType(TypedDict):
prop: str
class CustomLibrary:
def print_signatures(self):
console("def types_order_a(self, var: str | MyDictType | None | int):")
console("def types_order_b(self, var: None | MyDictType | str | int):")
def types_order_a(self, var: str | MyDictType | None | int):
console(f"order_a | {var} | {type(var)}")
def types_order_b(self, var: MyDictType | str | None | int):
console(f"order_b | {var} | {type(var)}")
def no_types(self, var):
console(f"expected | {var} | {type(var)}")
```
## Robot: TestSuite.robot
```robot
*** Settings ***
Library ./CustomLibrary.py
*** Test Cases ***
Foo
${var1}= Evaluate None
# Gets cast to a string for order A but not order B
${var2}= Create Dictionary prop=bar
${var3}= Set Variable foo
${var4}= Set Variable ${4}
# Gets cast to a string because there is no float type in the function
${var5}= Set Variable ${2.0}
# Dictionary that matches MyDictType
${var6}= Evaluate {"prop":"bar"}
${list}= Create List ${var1} ${var2} ${var3} ${var4} ${var5} ${var6}
FOR ${item} IN @{list}
CustomLibrary.No Types ${item}
CustomLibrary.Types Order A ${item}
CustomLibrary.Types Order B ${item}
END
```
## Python Runner: test.py
```py
from CustomLibrary import CustomLibrary
lib = CustomLibrary()
var1 = None
var2 = {"prop": "bar"}
var3 = "foo"
var4 = 4
var5 = 2.0
lib.print_signatures()
inputs = [var1, var2, var3, var4, var5]
for input in inputs:
lib.types_order_a(input)
lib.types_order_b(input)
lib.no_types(input)
```
| 0easy
|
Title: Project Recommendability metric API
Body: The canonical definition is here: https://chaoss.community/?p=3574 | 0easy
|
Title: Marketplace - agent page - change appearance of images
Body:
### Describe your issue.
<img width="972" alt="Screenshot 2024-12-17 at 19 02 13" src="https://github.com/user-attachments/assets/4beb5c75-26ed-4368-aea3-2440c2c4a730" />
Change border radius of all images to 26px
Reduce margin between images to 16px
| 0easy
|
Title: s3select should read credentials from the environment
Body: See https://github.com/betodealmeida/shillelagh/issues/328. | 0easy
|
Title: Webapp Shows Unnecessary DS Intructions for Local Public Suffixes
Body: This dialog should be much easier for local public suffixes. The web app has a list of such suffixes so it could easily conclude that it is unnecessary to show this:

@Rotzbua would you be interested in looking into this? | 0easy
|
Title: Running "interpreter --model i raises a LiteLLM:ERROR
Body: ### Describe the bug
After running
```interpreter --model i ```
without any previous configuration, the following error is raised (after some input is sent):
```shell
> wheres my chrime_analysis directory
16:16:08 - LiteLLM:ERROR: utils.py:1953 - Model not found or error in checking vision support. You passed model=i, custom_llm_provider=openai. Error: This model isn't mapped yet. model=i, custom_llm_provider=openai. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json.
```
### Reproduce
1. `pip install open-interpreter`
2. Input at cmd: `interpreter --model i`
3. Input some question, like where a random directory is
### Expected behavior
It should work successfully, without any errors and without the need for configuration (as it would run with your hosted LLM).
### Screenshots

### Open Interpreter version
0.4.3
### Python version
3.11.9
### Operating System name and version
Windows 11
### Additional context
_No response_ | 0easy
|
Title: Document `robot_running` and `dry_run_active` properties of the BuiltIn library in the User Guide
Body: In issue #4666 we introduced 2 new properties `robot_running` and `dry_run_active` to better support library developers, but these properties were not documented anywhere.
For RobotCode I regularly get problems where the libdoc cannot be generated (e.g. #d-biehl/robotcode/issues/180), and there are some suggestions for improvement here as well (e.g. #4906).
I think we should introduce a small chapter under chapter 4 in the user manual on this topic, with some examples like here https://github.com/d-biehl/robotcode/issues/180#issuecomment-1816141730 | 0easy
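A small sketch of the kind of example such a chapter could contain; the library and variable names are illustrative:
```python
# Sketch only: use `robot_running` so the library also imports cleanly when
# Libdoc (or an IDE) loads it outside of an execution context.
from robot.libraries.BuiltIn import BuiltIn

class MyLibrary:
    def __init__(self):
        builtin = BuiltIn()
        if builtin.robot_running:
            # Safe to access the execution context, variables, etc.
            self.output_dir = builtin.get_variable_value("${OUTPUTDIR}")
        else:
            # Called by Libdoc, RobotCode, etc. outside of a test run.
            self.output_dir = None
```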
|
Title: What is intended way to implement Union?
Body: Here is the schema that i am trying to implement:
```gql
type Text {
id: ID
content: String
}
type Image {
id: ID
image_url: String
thumb_url: String
}
union Post = Text | Image
type User {
id: ID
posts: [Post]
}
type Query {
users: [User]
}
```
and execute this query:
```gql
{
users {
posts {
... on Image {
image_url
}
... on Text {
content
}
}
}
}
```
What I have tried:
1. _Mongoengine models_: make PostModel as separate Document, inherit TextModel and ImageModel from it and reference PostModel from UserModel.
_Graphene objects_: make types for Text, Image, User and Post as MongoengineObjectType.
(full code [here](https://github.com/graphql-python/graphene-mongo/files/3951781/option1.py.txt))
This returns errors `Unknown type "Image".` and `Unknown type "Text".`
In this case Post type has only fields `Cls: String, id: ID`.
2. _Mongoengine models_: same.
_Graphene objects_: inherit Post from graphene.Union and set Image and Text as types of that union.
(full code [here](https://github.com/graphql-python/graphene-mongo/files/3951811/option2.py.txt))
This returns `Cannot query field "posts" on type "User".`.
I suppose this happens because graphene_mongo does not declare that `User.posts` references `Post`.
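For reference, the union type from option 2 looks roughly like this (a trimmed sketch; the full attempt is in the attached file, the `resolve_type` part is my own addition, and I assume the Mongoengine documents are imported as `TextModel` and `ImageModel`):
```python
import graphene


class Post(graphene.Union):
    class Meta:
        types = (Text, Image)

    @classmethod
    def resolve_type(cls, instance, info):
        # Decide which concrete GraphQL type a given document maps to.
        return Image if isinstance(instance, ImageModel) else Text
```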
So how should such schema be implemented?
| 0easy
|
Title: [ENH] Count the cumulative number of unique elements in a column
Body: # Brief Description
I would like to propose a function that counts the cumulative number of unique items in a column.
# Example implementation
```python
def cumulative_unique(df, column_name, new_column_name):
"""
Compute the cumulative number of unique items in a column that have been seen.
This function can be used to limit the number of source plates
that we use to seed colonies.
"""
df = df.copy()
unique_elements = set()
cumulative_unique = []
for val in df[column_name].values:
unique_elements.add(val)
cumulative_unique.append(len(unique_elements))
df[new_column_name] = cumulative_unique
return df
```
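For comparison, a vectorized sketch of the same idea (same assumed signature, using `duplicated()` for the "first time seen" check):
```python
import pandas as pd


def cumulative_unique(df: pd.DataFrame, column_name: str, new_column_name: str) -> pd.DataFrame:
    """Count the cumulative number of unique items seen in ``column_name``."""
    df = df.copy()
    # A value is "new" the first time it appears; the cumulative sum of that
    # flag gives the running count of unique elements seen so far.
    df[new_column_name] = (~df[column_name].duplicated()).cumsum()
    return df
```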
I'm happy for anybody else to run with the implementation above and improve upon it. | 0easy
|
Title: Marketplace - creator page - change font of avg rating and number of runs
Body:
### Describe your issue.
Please update the font of these two items to the "large-Geist" spec using the typography specs laid out here: https://www.figma.com/design/Ll8EOTAVIlNlbfOCqa1fG9/Agent-Store-V2?node-id=2759-9596&t=2JI1c3X9fIXeTTbE-1
the following specs:
Font: Geist
weight: semi bold
size: 18
line height: 28
<img width="502" alt="Screenshot 2024-12-16 at 20 43 16" src="https://github.com/user-attachments/assets/c18080e8-0df6-4514-9929-c5b16fbcb00d" />
### Upload Activity Log Content
_No response_
### Upload Error Log Content
_No response_ | 0easy
|
Title: 'NoneType' object has no attribute 'translate'
Body: Hey, I just noticed a bug in the translation to different languages if I use one word:

| 0easy
|
Title: Add a `pretty` option when dumping fixtures
Body: When we dump fixtures:
```
piccolo fixtures dump my_app
```
It prints out the JSON, but it isn't formatted nicely. It would be good to have a `pretty` option:
```
piccolo fixtures dump my_app --pretty
```
Here's the relevant code:
https://github.com/piccolo-orm/piccolo/blob/74ea10d427464b05d4a0caba05ce317582d74203/piccolo/apps/fixtures/commands/dump.py#L111-L128
The simplest solution is to replace this line:
https://github.com/piccolo-orm/piccolo/blob/74ea10d427464b05d4a0caba05ce317582d74203/piccolo/apps/fixtures/commands/dump.py#L128
With this:
```python
if pretty:
print(json.dumps(json.loads(json_string), indent=4))
else:
print(json_string)
```
Though there may be a better way of doing it. We use Pydantic to create the JSON string, and there may be a way of getting it to do a pretty output, but I'm not sure. | 0easy
|
Title: Update Models For LLM Block
Body: | 0easy
|
Title: [ENH] Warn the user of non-existent column names passed to .select_columns()
Body: It would be nice if the user was given a warning when non-existent column names are passed to `.select_columns()`. For example, it would be helpful if the following chunk
```python
>>> test_df = pd.DataFrame(data=np.random.randn(20, 5), columns=["A", "B", "C", "col_D", "col_E"])
>>> test_df.select_columns(["A", "B", "G", "col_*"]).rename_column("A", "a")
a B col_D col_E
0 -0.307637 -0.057561 -0.355314 0.050262
1 1.263703 0.050407 -0.300649 1.057278
2 1.133523 -0.742104 -0.866652 -1.503412
3 1.776966 -0.106066 1.221996 -0.901457
4 0.841195 0.503199 0.064804 0.181194
```
also produced a warning (without raising an exception), à la
```bash
/Users/smu095/Documents/github/pyjanitor/janitor/functions.py:2780: UserWarning: Column 'G' not found.
```
Occasionally I find myself selecting columns, only to realise later on that I have misspelled the column name and it hasn't been selected at all. I would like to be warned ASAP if this is the case.
My back of the envelope suggestion is something like this:
```python
@pf.register_dataframe_method
@deprecated_alias(search_cols="search_column_names")
def select_columns(
df: pd.DataFrame, search_column_names: Iterable, invert: bool = False
) -> pd.DataFrame:
"""
Method-chainable selection of columns.
This method does not mutate the original DataFrame.
Optional ability to invert selection of columns available as well.
Method-chaining example:
.. code-block:: python
df = pd.DataFrame(...).select_columns(['a', 'b', 'col_*'], invert=True)
:param df: A pandas DataFrame.
:param search_column_names: A list of column names or search strings to be
used to select. Valid inputs include:
1) an exact column name to look for
2) a shell-style glob string (e.g., `*_thing_*`)
:param invert: Whether or not to invert the selection.
This will result in selection of the complement of the columns
provided.
:returns: A pandas DataFrame with the specified columns selected.
"""
wildcards = [col for col in search_column_names if "*" in col]
non_wildcards = set(search_column_names) - set(wildcards)
for col in non_wildcards:
if col not in df:
warnings.warn(f"Column '{col}' not found.")
full_column_list = []
for col in search_column_names:
search_string = translate(col)
columns = [col for col in df if re.match(search_string, col)]
full_column_list.extend(columns)
return (
df.drop(columns=full_column_list) if invert else df[full_column_list]
)
```
This is my first foray into contributing to open-source projects, any feedback is welcome. Thanks! | 0easy
|
Title: Missing exchange configuration between South Africa (ZA) and Lesotho (LS)
Body: ## Description
Lesotho has an interconnector with South Africa and relies on it for a large part of its electricity. It would be nice to get the config added as a first step in getting this exchange on the map. | 0easy
|
Title: How can I fix the broken Hupijiao (虎皮椒) payment integration myself?
Body:
Regarding the broken Hupijiao (虎皮椒) integration: I saw that the source code was changed to fix it, so how do I apply the change to my own site? | 0easy
|
Title: Page privacy modal renders 2 headers and icons
Body:
### Issue Summary
The page privacy modal renders 2 headers and icons when a parent page has been made private.

### Steps to Reproduce
1. Set a page to private.
2. Visit a child page and click change privacy
Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?
I expect only the second heading and icon to be displayed.
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.12
- Django version: 5.0.6
- Wagtail version: 6.2 (latest) and 6.1
- Browser version: You can use https://www.whatsmybrowser.org/ to find this out.
### Working on this
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| 0easy
|
Title: CSV for teams+members+fields
Body: We should have this CSV format to export but for some reason we don't. Should be an easy implementation. | 0easy
|
Title: Kinto returns HTTP 500 on get_records endpoint (unbalanced parenthesis)
Body: **Steps to reproduce**
_docker run -p 8888:8888 kinto/kinto-server_
Running kinto 14.0.1.dev0.
Request
```
GET /v1/buckets/)EFg9=)%5E(M~%2037/collections/M*D;1Z/records HTTP/1.1
Host: 127.0.0.1:8888
```
Response
```
{
"code": 500,
"errno": 999,
"error": "Internal Server Error",
"message": "A programmatic error occured, developers have been informed.",
"info": "https://github.com/Kinto/kinto/issues/"
}
```
Log:
```
"GET /v1/buckets/)EFg9=)%5E(M~%2037/collections/M*D;1Z/records?" ? (? ms) unbalanced parenthesis at position 10 errno=999
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/pyramid/tweens.py", line 41, in excview_tween
response = handler(request)
File "/app/kinto/core/events.py", line 157, in tween
File "/usr/local/lib/python3.7/site-packages/pyramid/router.py", line 148, in handle_request
registry, request, context, context_iface, view_name
File "/usr/local/lib/python3.7/site-packages/pyramid/view.py", line 683, in _call_view
response = view_callable(context, request)
File "/usr/local/lib/python3.7/site-packages/pyramid/config/views.py", line 169, in __call__
return view(context, request)
File "/usr/local/lib/python3.7/site-packages/pyramid/config/views.py", line 188, in attr_view
File "/usr/local/lib/python3.7/site-packages/pyramid/config/views.py", line 214, in predicate_wrapper
File "/usr/local/lib/python3.7/site-packages/pyramid/viewderivers.py", line 323, in secured_view
result = permitted(context, request)
File "/usr/local/lib/python3.7/site-packages/pyramid/viewderivers.py", line 320, in permitted
return authz_policy.permits(context, principals, permission)
File "/app/kinto/core/authorization.py", line 94, in permits
context.fetch_shared_objects(permission, principals, self.get_bound_permissions)
File "/app/kinto/core/authorization.py", line 229, in fetch_shared_objects
by_obj_id = self._get_accessible_objects(principals, bound_perms, with_children=False)
File "/app/kinto/core/decorators.py", line 45, in decorated
re.error: unbalanced parenthesis at position 10
"GET /v1/buckets/)EFg9=)%5E(M~%2037/collections/M*D;1Z/records?" 500 (5 ms) agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36 errno=999 lang=en-US,en;q=0.9 time=2020-12-14T13:18:58.482000
``` | 0easy
|
Title: DotDict behaves inconsistent on equality checks. `x == y` != `not x != y` and not `x != y` == `not x == y`
Body: As found in #4955 `==` and `!=` do behave differently on DotDict.
While `==` behaves like `dict` the `!=` does behave like OrderedDict.
Quote:
This `not a == b` vs `a != b` difference was the reason the unit tests failed in the branch.
And I would consider this an actual bug in the DotDict implementation.
```python
from collections import OrderedDict
from robot.utils import DotDict
# Normal dict: the order isn't relevant
norm_dict = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6, 'g': 7}
norm_dict_reverse = {'g': 7, 'f': 6, 'e': 5, 'd': 4, 'c': 3, 'b': 2, 'a': 1}
norm_dict == norm_dict_reverse # True
norm_dict != norm_dict_reverse # False
not norm_dict == norm_dict_reverse # False
not norm_dict != norm_dict_reverse # True
# OrderedDict: the order is important
order_dict = OrderedDict([('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', 5), ('f', 6), ('g', 7)])
order_dict_reverse = OrderedDict([('g', 7), ('f', 6), ('e', 5), ('d', 4), ('c', 3), ('b', 2), ('a', 1)])
order_dict == order_dict_reverse # False
order_dict != order_dict_reverse # True
not order_dict == order_dict_reverse # True
not order_dict != order_dict_reverse # False
#DotDict : the order isn't relevant with == but is relevant with !=
dot_dict = DotDict({'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6, 'g': 7})
dot_dict_reverse = DotDict({'g': 7, 'f': 6, 'e': 5, 'd': 4, 'c': 3, 'b': 2, 'a': 1})
dot_dict == dot_dict_reverse # True because order is not relevant with ==
dot_dict != dot_dict_reverse # True because order is relevant with !=
not dot_dict == dot_dict_reverse # False
not dot_dict != dot_dict_reverse # False
```
This also appears in Robot Framework Usage:
DotDict

OrderedDict

Normal python dict

Fix should just be adding this to DotDict:
```python
def __ne__(self, other):
return not self == other
```
I think it is easiest if you quickly add this and a test for it yourself. | 0easy
|
Title: Not compatible with Cython 3.1 due to Py2 code
Body: Cython 3.1 removes Py2 support, but gevent still has Py2 code in its code base, e.g.
https://github.com/gevent/gevent/blob/07d2b7e4a3c5881e0a3637062b3c7c7c0027971c/src/gevent/libev/corecext.pyx#L64-L69
This leads to build failures for users who have Cython 3.1 installed, as described in
https://github.com/cython/cython/issues/6490
A quick fix would be to exclude Cython 3.1 from the build dependencies:
https://github.com/gevent/gevent/blob/07d2b7e4a3c5881e0a3637062b3c7c7c0027971c/pyproject.toml#L21
However, since Python 2.x support seems no longer intended for gevent, removing the Py2 left-overs seems the best path forward. | 0easy
|
Title: About nonstandard output
Body: @medvedev1088
Hi,
I found that there are two kinds of output in transactions.json.
One is nonstandard, and the other is pubkeyhash.
I thought the nonstandard output was the op_return output, but I found that the outputs of many (not all) coinbase txs are also nonstandard, and the miner's address looks like "nonstandard3318537dfb3135df9f3d950dbdf8a7ae68dd7c7d".
Here is an example:
transaction: {
"hash": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b", "block_number": 0,
"block_hash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f", "is_coinbase": true,
"outputs": [{"index": 0, "script_asm": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f OP_CHECKSIG", "script_hex": "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5fac", "required_signatures": null, **"type": "nonstandard", "addresses": ["nonstandard3318537dfb3135df9f3d950dbdf8a7ae68dd7c7d"**], "value": 5000000000}], "input_count": 0, "output_count": 1, "input_value": 0, "output_value": 5000000000, "fee": 0}
The address above is 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa according to [https://www.blockchain.com/btc/tx/4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b]
I'm confused. Perhaps you know why?
Thank you very much!!!! | 0easy
|
Title: [UX] GPU name not canonicalized when launch on kubernetes
Body: When a kubernetes cluster have GPU: `GH200-480GB`
```
sky show-gpus
Kubernetes GPUs
GPU REQUESTABLE_QTY_PER_NODE TOTAL_GPUS TOTAL_FREE_GPUS
GH200-480GB 1 2 2
```
None of the following succeed:
1. `sky launch --gpus gh200 --cloud kubernetes`
2. `sky launch --gpus gh200-480gb --cloud kubernetes`
Only working one:
`sky launch --gpus GH200-480GB --cloud kubernetes`
We should make the two listed above work | 0easy
|
Title: Failed keywords inside skipped tests are not expanded
Body: We are just upgrading from 3.x to 6.0.2. We are moving our test framework to use the "skipped"-concept instead of the deprecated "critical/non-critical"-concept. We noticed that in log.html, the failed KWs are not automatically expanded for a test skipped using skipOnFailure.
Reproducing:
- click a failed skipped test in report.html.
- Failed nested KWs are not automatically expanded in log.html
For a failed non-skipped test, the failed KWs are automatically recursively expanded. The same happens for failed non-critical test in 3.x. We would expect and like the same behaviour for the failed skipped tests. Especially for nested KWs, it's irritating to manually expand the failed nested KWs to get to the lowest level failed KW. | 0easy
|
Title: ENH: Support encoding in pd.read_csv
Body: ### Is your feature request related to a problem? Please describe
`encoding="GBK"` not work in `xorbits.pandas` `pd.read_csv()`, but worked for `pandas`
### Describe the solution you'd like
Adding `**kwargs` in `https://github.com/xprobe-inc/xorbits/blob/039503be8f51bdf4977a0af4e7a1a8de346ee152/python/xorbits/_mars/dataframe/datasource/read_csv.py#L721` can fix this problem
### Describe alternatives you've considered
It is because `**kwargs` were not transferred to pandas.
### Additional context
I tried to add it and it works.
| 0easy
|
Title: Add unfurl_links / unfurl_media to WebhookClient#send method
Body: The `WebhookClient#send` method does not support the following parameters.
* unfurl_links: bool
* unfurl_media: bool
As a workaround, developers can use `WebhookClient#send_dict` at the moment. We can improve the `#send` method to have these parameters for both `WebhookClient` and `AsyncWebhookClient`.
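For example, the `send_dict` workaround looks roughly like this (a sketch with a placeholder webhook URL):
```python
from slack_sdk.webhook import WebhookClient

webhook = WebhookClient("https://hooks.slack.com/services/T000/B000/XXXXXXXX")
# send_dict passes the payload through as-is, so the unfurl flags can already be set today.
response = webhook.send_dict(
    {
        "text": "Check out https://example.com",
        "unfurl_links": False,
        "unfurl_media": False,
    }
)
```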
References:
* [The API document](https://api.slack.com/reference/messaging/link-unfurling#no_unfurling_please)
* We noticed this issue thanks to [Sam Schlinkert](https://twitter.com/sts10/status/1405550215306825728)
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [x] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [ ] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: Add installed capacities for Menorca (ES-IB-ME)
Body: Hi, I have noticed some missing data for the Spanish islands and as a local from Menorca and user of this service I would like to make a contribution.
# Installed capacities for ES-IB-ME
- Biomass: 0 W
- Geothermal: 0 W
- Hydro: 0 W
- Solar: 50.8 MW
- Wind: 0 W
- Nuclear: 0 W
- Hydro storage: 0 W
- Coal: 0 W
- Gas: 0 W
- Oil: 271.6 MW
# Sources
## IME
My main source is a study made by the IME (Institut Menorquí d'Estudis, Menorcan Institute of Studies) in 2018, which can be found in Spanish [here](https://www.ime.cat/WebEditor/Pagines/file/La%20primera%20transici%C3%B3n%20energ%C3%A9tica%20de%20Menorca.pdf).
Installed capacities for all electricity sources for the entire island are detailed on section 3.2 'Electrical System', namely 3:
1. **Central térmica (Oil)**
Corresponds to the only power plant in the island, comprised of 3 diesel motors and 5 gas turbines (using diesel as fuel). These account for a total of 271.6 MW of installed capacity.
2. **Parque eólico (Wind)**
Sadly, this wind farm reached its end-of-life in June 2024 and has already been closed down ([source1](https://www.menorca.info/menorca/local/2023/10/12/2029209/consell-dice-reparar-molinos-cerrar-parque-eolico-mila.html), [source2](https://www.eldiario.es/illes-balears/sociedad/adios-mila-primer-parque-eolico-balears_1_11496248.html)). This can also be observed in the historic production data for Menorca. Despite there being talks about possibly renewing the wind farm or even constructing a new one, that leaves the wind installed capacity at 0 W for the foreseeable future.
3. **Parques fotovoltaicos (Solar)**
There are two photovoltaic farms in operation, of which one was planned to be expanded at the time (2018) as stated in section 3.4.3. That amplification was finalized in summer of 2023 and has now an installed capacity of 49.8 MW ([source1](https://www.cime.es/WebEditor/Pagines/file/PTI2020/01_Mem%C3%B2ria/0681-01_MEMORIA_APROBACION%20INICIAL_f%20-%2001_MEMORIA_APROBACION%20INICIAL_f.pdf) (page 108), [source2](https://www.menorca.info/menorca/local/2023/10/18/2032849/menorca-pone-marcha-son-salomo-mayor-parque-energia-renovable-balears.html)). Combined with the capacity of the second photovoltaic farm (1 MW), this gives a total solar installed capacity of 50.8 MW.
## REE
I also attach a report by REE (Red Electrica) about the electrical systems of the Balearic Islands as of the end of 2023, also in Spanish [here](https://www.ree.es/sites/default/files/publication/2024/07/downloadable/Baleares.pdf), since this one seems to be the main source when it comes to Spanish installed capacity data in the rest of the country. However, REE does not provide exact numbers on installed capacities per island, only the total for the entire archipelago.
Page 2 depicts the contribution percentage of every energy source over the total energy production of each Balearic island. Only 3 sources can be found for Menorca:
- 'Solar fotovoltaica' (photovoltaic solar) corresponds to category Solar
- 'Turbina de gas' (gas turbine) and 'Motores diesel' (diesel motors) correspond to category Oil, as demonstrated above
- Notably, wind energy sources are absent even though the stated wind farm was still operational. This can be explained by the fact the decision to close it down was set before the end of that year and production was already very poor due to constant breakdowns at the farm.
Even though exact numbers for installed capacities are not provided, the left graph on the same page can be used to calculate an approximation of the current capacity, which roughly matches the ones provided above. Moreover, I want to use this report as a proof that other sources of energy are not present in Menorca currently, and they can therefore be set as 0 W of installed capacity.
# Note
I don't know whether there is a preference to set installed capacities to ? W instead of 0 W when they are not provided explicitly by [ree.es](https://www.ree.es/). If that is the case, I apologize. I hope you can consider my contribution, given my personal insight as a local and the official and reliable sources provided.
Also, while researching the matter, I have found information about the rest of the islands, but I am not as acquainted with them at the moment. If contributions like this are appreciated I can consider researching them too. | 0easy
|
Title: Marketplace - for the card navigation arrows, increase the size of the circular buttons from 38x38 to 52x52px
Body: ### Describe your issue.
For the circular navigation arrows, increase the size of the circular buttons from 38x38 to 52x52px
<img width="1402" alt="Screenshot 2024-12-13 at 17 08 20" src="https://github.com/user-attachments/assets/c3586fb6-9af5-41d6-8ae7-08f75940fa97" />
| 0easy
|
Title: Messages in test body cause crash when using templates and some iterations are skipped
Body: This is a regression caused by #4426. The root cause is this code that handles the situation where all iterations have been skipped:
```python
if all(item.skipped for item in result.body):
raise ExecutionFailed('All iterations skipped.', skip=True)
```
This code works fine normally, but if `result.body` contains messages, it fails with this error:
```
AttributeError: 'Message' object has no attribute 'skipped'
```
Luckily tests having messages directly in their body is rare. I think the only way that could happen is when using a listener that logs something in `start_test` or if results are modified programmatically. Crashes are nevertheless always bad, and based on a thread on our Slack someone had encountered this issue, so this requires a pretty high severity.
| 0easy
|
Title: Summary of updated snapshots
Body: **Is your feature request related to a problem? Please describe.**
When I run snapshot update and only a few snapshots have changed or a few new ones get created it currently gives a summary saying "x snapshots have been updated".
**Describe the solution you'd like**
It would be wonderful if it could print a complete list of the names of each of the snapshots so that you can feel more confident that only the ones you expected to be updated / created were changed. This would give me more confidence when updating snapshots and a clear list of the ones I should go look at manually to validate their correctness. Ideally it would have a separate section where it prints the names of any snapshots that were updated and then another section for printing the names of newly created snapshots.
Thank you very much for this awesome library it saves me so much time and effort and has beautiful syntax.
| 0easy
|
Title: Querying based on value of ReferenceField
Body: I have a model defined like so:
```
from mongoengine import Document, StringField, ReferenceField
class Bar(Document):
pass
class Foo(Document):
parent = ReferenceField('Bar')
name = StringField(default='')
```
If I set up my schema like this:
```
from graphene_mongo import MongoengineConnectionField, MongoengineObjectType
from .models import (Foo as FooModel, Bar as BarModel)
class Bar(MongoengineObjectType):
class Meta:
model = BarModel
interfaces = (Node,)
class Foo(MongoengineObjectType):
class Meta:
model = FooModel
interfaces = (Node,)
class Query(graphene.ObjectType):
node = Node.Field()
foos = MongoengineConnectionField(Foo)
bars = MongoengineConnectionField(Bar)
schema = graphene.Schema(query=Query, types=[Foo, Bar])
```
Should I be able to query for `Foo` with `parent` equal to the id of a `Bar`? Example query:
```
{
foos(parent: "some_valid_id_in_string_form") {
edges {
node {
name
}
}
}
}
```
| 0easy
|
Title: GraphicsScene.mouseMoveEvent duplicates events
Body: The current implementation of GraphicsScene.mouseMoveEvent calls the base class implementation twice, when QMouseEvent.buttons() is something else than Qt.MouseButton.NoButton. The duplicate call happens here:
https://github.com/pyqtgraph/pyqtgraph/blob/88c3127ddf021407bdfedbbfff898036dd84df79/pyqtgraph/GraphicsScene/GraphicsScene.py#L182
I don’t see any reason at this point to call the base class handler again. In general, I don’t think the events should be duplicated and handled twice. | 0easy
|
Title: Add Documentation and Tests for ExplainedVariance
Body: We need to add documentation and tests for `ExplainedVariance`
@DistrictDataLabs/team-oz-maintainers
| 0easy
|
Title: Using serverless-offline with Lazy Listeners
Body: ### Reproducible in:
```Editable install with no version control (slack-bolt==1.11.0)
slack-sdk==3.13.0
Python 3.8.8
ProductName: macOS
ProductVersion: 12.0
BuildVersion: 21A5506j
Darwin Kernel Version 21.1.0: Thu Aug 19 02:54:44 PDT 2021; root:xnu-8019.40.29~26/RELEASE_ARM64_T8101
```
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
1. Execute the source code documented below, and try to run with `sls offline`
2. When hitting the handler via a slack invocation, an error is thrown when running the lazy listener (see stack trace below)
```
Failed to run a middleware middleware (error: An error occurred (ResourceNotFoundException) when calling the Invoke operation: Function not found: arn:aws:lambda:eu-west-1:673319237159:function:Fake)
Traceback (most recent call last):
File "/Users/jordangibson/Source/python/slack-message-handler/venv/lib/python3.8/site-packages/slack_bolt/app/app.py", line 545, in dispatch
] = self._listener_runner.run(
File "/Users/jordangibson/Source/python/slack-message-handler/venv/lib/python3.8/site-packages/slack_bolt/listener/thread_runner.py", line 102, in run
self._start_lazy_function(lazy_func, request)
File "/Users/jordangibson/Source/python/slack-message-handler/venv/lib/python3.8/site-packages/slack_bolt/listener/thread_runner.py", line 194, in _start_lazy_function
self.lazy_listener_runner.start(function=lazy_func, request=copied_request)
File "/Users/jordangibson/Source/python/slack-message-handler/venv/lib/python3.8/site-packages/slack_bolt/adapter/aws_lambda/lazy_listener_runner.py", line 27, in start
invocation = self.lambda_client.invoke(
File "/Users/jordangibson/Source/python/slack-message-handler/venv/lib/python3.8/site-packages/botocore/client.py", line 391, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/jordangibson/Source/python/slack-message-handler/venv/lib/python3.8/site-packages/botocore/client.py", line 719, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the Invoke operation: Function not found: arn:aws:lambda:eu-west-1:673319237159:function:Fake
```
### Expected result:
The lazy listener function to be invoked as expected
### Actual result:
The application crashes and errors out, responding with a 500 Internal Error
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
## Observations
This is reproducible locally using serverless-offline, however when I deploy it to aws lambda, I receive a similar (although different) error. Looks to be permissions related, but also seems to be failing on the same issue so I thought I'd include it here in case it's possibly related
```
2021-12-27 04:49:28,775 Failed to run a middleware middleware (error: An error occurred (AccessDeniedException) when calling the Invoke operation: User: arn:aws:sts::673319237159:assumed-role/slack-message-handler-dev-eu-west-1-lambdaRole/slack-message-handler-dev-hello is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:eu-west-1:673319237159:function:slack-message-handler-dev-hello because no identity-based policy allows the lambda:InvokeFunction action)
Traceback (most recent call last):
File "/var/task/slack_bolt/app/app.py", line 545, in dispatch
] = self._listener_runner.run(
File "/var/task/slack_bolt/listener/thread_runner.py", line 102, in run
self._start_lazy_function(lazy_func, request)
File "/var/task/slack_bolt/listener/thread_runner.py", line 194, in _start_lazy_function
self.lazy_listener_runner.start(function=lazy_func, request=copied_request)
File "/var/task/slack_bolt/adapter/aws_lambda/lazy_listener_runner.py", line 27, in start
invocation = self.lambda_client.invoke(
File "/var/task/botocore/client.py", line 391, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/task/serverless_sdk/vendor/wrapt/wrappers.py", line 602, in __call__
return self._self_wrapper(self.__wrapped__, self._self_instance,
File "/var/task/serverless_sdk/__init__.py", line 499, in wrapper
raise error
File "/var/task/serverless_sdk/__init__.py", line 495, in wrapper
response = wrapped(*args, **kwargs)
File "/var/task/botocore/client.py", line 719, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the Invoke operation: User: arn:aws:sts::673319237159:assumed-role/slack-message-handler-dev-eu-west-1-lambdaRole/slack-message-handler-dev-hello is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:eu-west-1:673319237159:function:slack-message-handler-dev-hello because no identity-based policy allows the lambda:InvokeFunction action
```
## handler.py
```python
import logging
import time
from slack_bolt import App
from slack_bolt.adapter.aws_lambda import SlackRequestHandler
# process_before_response must be True when running on FaaS
app = App(process_before_response=True)
@app.middleware # or app.use(log_request)
def log_request(logger, body, next):
logger.debug(body)
return next()
command = "/hello-bolt-python-lambda"
def respond_to_slack_within_3_seconds(body, ack):
if body.get("text") is None:
ack(f":x: Usage: {command} (description here)")
else:
title = body["text"]
ack(f"Accepted! (task: {title})")
def process_request(respond, body):
time.sleep(5)
title = body["text"]
respond(f"Completed! (task: {title})")
app.command(command)(ack=respond_to_slack_within_3_seconds, lazy=[process_request])
SlackRequestHandler.clear_all_log_handlers()
logging.basicConfig(format="%(asctime)s %(message)s", level=logging.DEBUG)
def main(event, context):
slack_handler = SlackRequestHandler(app=app)
return slack_handler.handle(event, context)
```
## serverless.yml
```yaml
service: slack-message-handler
app: slack-message-handler
org: jgibson37
frameworkVersion: '2'
provider:
name: aws
runtime: python3.8
lambdaHashingVersion: 20201221
region: eu-west-1
environment:
SLACK_BOT_TOKEN: ${env:SLACK_BOT_TOKEN}
SLACK_SIGNING_SECRET: ${env:SLACK_SIGNING_SECRET}
functions:
slack-message-handler:
handler: handler.main
events:
- http:
path: slack/events
method: post
plugins:
- serverless-python-requirements
- serverless-offline
custom:
pythonRequirements:
dockerizePip: non-linux
serverless-offline:
noPrependStageInUrl: true
```
| 0easy
|
Title: Marketplace - Increase the margin between the arrow buttons and the separator line to 60px
Body:
### Describe your issue.
Increase the margin between the arrow buttons and the separator line to 60px; there's currently no margin between them.
<img width="1522" alt="Screenshot 2024-12-13 at 17 25 28" src="https://github.com/user-attachments/assets/91e02ace-920a-4e69-9a12-2c55d9a63ff0" />
| 0easy
|
Title: TRIMA calculation - with and without TALIB
Body: **Describe the bug**
ta.trima without TALIB installed returns (occasionally) different values compared to TRIMA with TALIB.
TALIB uses two calculation paths, depending on even/odd state of the _length_.
Pandas-TA uses only one calculation path regardless of length evenness/oddness.
Discrepancy in calculation appears when:
- no TALIB is used
- period is even
**Expected behavior**
ta.trima should return the same results as TALIB TRIMA for all lengths (even or odd)
**Additional context**
Results get consistent for even lengths when we change the code in [trima.py](https://github.com/twopirllc/pandas-ta/blob/main/pandas_ta/overlap/trima.py):
```python
half_length = round(0.5 * (length + 1))
sma1 = sma(close, length=half_length)
trima = sma(sma1, length=half_length)
```
into this code to capture even/odd variations of length)
```python
sma1 = sma(close, length = round(0.5 * (length + 0.5)))
trima = sma(sma1, length = round(0.5 * (length + 1.5)))
``` | 0easy
|
Title: Support prefix in environ.Env
Body: Sometimes it is desirable to be able to prefix all environment variables. pydantic offers this in this manner:
```python
class Config:
env_prefix = 'my_prefix_' # defaults to no prefix, i.e. ""
```
Ideally, something like this can be implemented:
```python
import environ
env = environ.Env()
env.prefix = "PREFIX_"
``` | 0easy
|
Title: Job Opportunities metric API
Body: The canonical definition is here: https://chaoss.community/?p=3567 This one may be more difficult to resolve with trace data alone. | 0easy
|
Title: Subsequent asserts seem to ignore the matcher
Body: Hello! I've been recently exploring this tool and I wanted to create a fixture that I can re-use with my specific matcher, but it seems that subsequent calls in the same test ignore this matcher. I'm not sure if this is intended or not. I wasn't able to find a definitive answer. Apologies if this has been asked before!
**To reproduce**:
1. Run the following test with pytest:
```python
@pytest.fixture
def my_snapshot(snapshot):
return snapshot(matcher=path_type({"field": (str,)}))
def test_snapshotting(my_snapshot):
assert {"field": "string"} == my_snapshot
assert {"field": "string"} == my_snapshot
```
This results in the following snapshot:
```
# serializer version: 1
# name: test_snapshotting
dict({
'field': str,
})
# ---
# name: test_snapshotting.1
dict({
'field': 'string',
})
# ---
```
**Expected behavior**
I would've expected that both snapshots would compare the type; instead, only the first one is saved with the type and the second one saves the value as if the matcher wasn't there at all.
Providing the matcher to both asserts with `my_snapshot` solves this issue, but I don't find this copy-paste feasible too much in a codebase with over a thousand tests. A generic fixture that I define once and re-use would be ideal.
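For clarity, the copy-paste workaround I mean looks roughly like this:
```python
from syrupy.matchers import path_type


def test_snapshotting(snapshot):
    matcher = path_type({"field": (str,)})
    # Passing the matcher on every assertion works, but has to be repeated.
    assert {"field": "string"} == snapshot(matcher=matcher)
    assert {"field": "string"} == snapshot(matcher=matcher)
```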
**Screenshots**
**Environment (please complete the following information):**
- OS: MacOS 13.4.1
- Syrupy Version: 4.4.0
- Python Version: 3.11
**Additional context**
| 0easy
|
Title: Error thrown when following doc example for persisting the token through the consumer’s session property
Body: **Describe the bug**
I was following the doc examples for auth to save my access token for a session. However, the code throws an error saying the consumer has no attribute '_Consumer__session':
```
AttributeError at /api/v1/jira/issues/
'Jira' object has no attribute '_Consumer__session'
```
**To Reproduce**
Use the following snippet from the documentation to make a consumer:
```python
from uplink import Consumer, get

class Jira(Consumer):
def __init__(self, access_token):
self.session.params["access_token"] = access_token
@get("/rest/api/3/project/search")
def get_projects(self):
"""Get paginated list of projects accessible by current user"""
pass
...
jira_client = JiraConsumer(access_token=access_token, base_url=base_url)
data = jira_client.get_projects()
```
**Expected behavior**
My custom consumer should get authenticated and I should be able to get data from the endpoint.
**Additional context**
I am using Uplink with Django. Also, if I supply the access token while creating a Jira client like this it works perfectly.
```
jira_client = JiraConsumer(base_url=base_url, auth=BearerToken(access_token))
``` | 0easy
|
Title: Feature: add NatsMessage ack_sync method
Body: We should add an extra `ack_sync` proxy method the same way with regular `ack` we did - https://github.com/airtai/faststream/blob/main/faststream/nats/message.py#L10 | 0easy
|
Title: Option to hide action menu (e.g. "View Source") in Altair plots
Body: **Is your feature request related to a problem? Please describe.**
Altair plots always get rendered with the action menu (e.g. "View Source")
**Describe the solution you'd like**
Include option to hide this part
**Describe alternatives you've considered**
The alternative is to not use Datapane and rather rely on Altair's embed options (see [here](https://github.com/altair-viz/altair/issues/673))
**Additional context**
n/a | 0easy
|
Title: Saved snapshot fails to match if repr ends with newline
Body: **Describe the bug**
It appears that if you snapshot test against an object whose repr ends in a newline, then the test will fail even after running `--snapshot-update`
**To reproduce**
```python
# test_syrupy.py
class X:
def __repr__(self):
return "X\n"
def test_syrupy(snapshot):
assert X() == snapshot
```
Then run `pytest test_syrupy.py --snapshot-update` then `pytest teest_syrupy.py`
The test fails with the following output
```
test_syrupy.py F [100%]
=================================================================================================== FAILURES ===================================================================================================
_________________________________________________________________________________________________ test_syrupy __________________________________________________________________________________________________
snapshot = X
def test_syrupy(snapshot):
> assert X() == snapshot
E assert [+ received] == [- snapshot]
E - X
E + X
```
**Expected behavior**
I expect the test to pass
**Environment (please complete the following information):**
- OS: MacOS 14.7
- Syrupy Version: 4.8.0
- Python Version: 3.11.4
| 0easy
|
Title: Person's title translations giving confusing choices in other languages
Body: The same translations of the English terms in address book's dropdown list of titles are causing confusion in other languages.
The original choices in English:
https://github.com/django-oscar/django-oscar/blob/1a3372e1728d7b464693da6c78525ef4f13176a1/src/oscar/apps/address/abstract_models.py#L22
are being translated to the same term: Miss/Mrs/Ms all become Frau/Frau/Frau in German, Mlle/Mme/Mme in French, or Panna/Pani/Pani in Polish. It's impossible to differentiate which title was chosen.
https://github.com/django-oscar/django-oscar/blob/1a3372e1728d7b464693da6c78525ef4f13176a1/src/oscar/locale/de/LC_MESSAGES/django.po#L45-L55
<img width="622" alt="Screenshot 2021-04-28 at 11 02 43" src="https://user-images.githubusercontent.com/872730/116380855-a5d5da00-a814-11eb-9065-9cdfc0df3801.png">
One can try it out with different languages in the address book in the [sandbox](https://latest.oscarcommerce.com/en-gb/accounts/addresses/add/).
I'd propose, instead of searching for direct translations, that could be difficult or awkward to use ("panna" in Polish or "Fräulein" in German for Miss are rather old-fashioned and not being used often), we leave just one term for a woman, hopefully it's sensible and inclusive to all in question.
| 0easy
|
Title: Dataset Import Doesn't Validate Against Model Constraints
Body: **Describe the bug**
When importing a dataset from a file, the dataset is not validated against the model's constraints. This allows invalid data to be imported into the database.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a model with a field having constraints, such as `max_length`.
```python
class Book(models.Model):
name = models.CharField(max_length=10)
```
2. Prepare a dataset file with data that violates the constraints, such as in this case, having a `name` field value longer than 10 characters.
3. Use django-import-export to import the dataset into the model.
4. Observe that the dataset is imported without validation against the model constraints.
**Expected behavior**
The import process should validate the dataset against the model's constraints and raise errors for any violations, such as exceeding the `max_length`.
**Additional context**
This lack of validation can lead to data integrity issues and runtime errors. Any guidance on where to start implementing this validation in the codebase would be greatly appreciated.
| 0easy
|
Title: Error making request when request has no body and there is a `report::hide_request::body` configuration.
Body: ## Bug report
### Environment
- Operating System: Mac OS
- ScanAPI version: 2.3.0
### Description of the bug
Error making request when request has no body and there is a `report::hide_request::body` configuration.
```shell
▶ scanapi run
Loading file ./scanapi.conf
Loading file scanapi.yaml
Writing documentation
Making request GET http://demo.scanapi.dev/api/v1/health/
Error to make request `http://demo.scanapi.dev/api/v1/health/`.
the JSON object must be str, bytes or bytearray, not 'NoneType'
Making request POST http://demo.scanapi.dev/api/v1/rest-auth/login/
The documentation was generated successfully.
It is available at /Users/camilamaia/workspace/scanapi-org/test-tutorial/scanapi-report.html
```

### Expected behavior ?
Health request to appear in the report
### How to reproduce the the bug ?
specification file `scanapi.yaml`:
```yaml
endpoints:
- name: snippets-api
path: http://demo.scanapi.dev/api/v1/
headers:
Content-Type: application/json
requests:
- name: health
method: get
path: /health/
tests:
- name: status_code_is_200
assert: ${{ response.status_code == 200 }}
- name: body_equals_ok
assert: ${{ response.json() == "OK!" }}
- name: get_token
path: /rest-auth/login/
method: post
body:
username: ${USER}
password: ${PASSWORD}
```
Configuration file `scanapi.conf`:
```yaml
report:
hide_request:
body:
- password
hide_response:
body:
- key
``` | 0easy
|
Title: Order of Agents in the Library has become random
Body: In the Library Screen, Agents used to be sorted by date of last run. However, a bug has been introduced that is causing agents to appear in a random order.
## Steps to Reproduce:
1. Have multiple agents in your library that have been ran
2. Go to the Library tab and observe the order is random.
## Desired Behaviour:
1. Agents should be ordered by date of last SAVE/EDIT, with the **most recently edited** agents at the **TOP** of the list.
Sorting Agents by date of last edit/save means that agents that have never been run are still easy to find in the library. However, if sorting by save date is for whatever reason not technically feasible (i.e. if we don't store this date yet), a good compromise would be sorting by date of last run. | 0easy
|
Title: clean up tests and better error message with R notebooks
Body: we need to address a few things from this PR: https://github.com/ploomber/ploomber/pull/1101
1. refactor unit tests: https://github.com/ploomber/ploomber/pull/1101#discussion_r1212466632
2. better error message when trying to run an R notebook with ploomber-engine https://github.com/ploomber/ploomber/pull/1101#discussion_r1211999094
| 0easy
|
Title: Clarify error output when color format is incorrect
Body: I noticed that this gets triggered if `edge_color` is the wrong type, but the message only says `face_color`.
I'm not 100% sure this is the line that triggered the error because the message seems to be hard coded in a couple of places.
https://github.com/napari/napari/blob/19fb8923790eada0ad9cea752d26a31e5c2140ea/napari/layers/shapes/shapes.py#L1594 | 0easy
|
Title: decorated functions are not properly typechecked
Body:
We will not catch typecheck errors on functions which have e.g. `@timeline.event` or `@usage_lib.entrypoint` applied.
See https://mypy.readthedocs.io/en/stable/generics.html#declaring-decorators
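For reference, the signature-preserving pattern from those docs looks roughly like this (the `event` name below just mirrors `timeline.event` for illustration; it is not the actual implementation):
```python
import functools
from typing import Callable, TypeVar

from typing_extensions import ParamSpec

P = ParamSpec("P")
R = TypeVar("R")


def event(func: Callable[P, R]) -> Callable[P, R]:
    """Decorator whose wrapped function keeps its signature visible to mypy."""

    @functools.wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return func(*args, **kwargs)

    return wrapper
```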
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
| 0easy
|
Title: TD Sequential + Charting
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
0.2.23b0
**Is your feature request related to a problem? Please describe.**
The TD Sequential is a very popular indicator yet there are not many libraries available for it in Python.
Existing libraries such as in JavaScript are very bulky and do not scale very well.
The essence of TD Sequential is chart analysis and so we really need a way to embed the sequences on the chart above the candles.
**Describe the solution you'd like**
A Pandas TD Sequential indicator that addresses issues of scalability. (e.g. capable of creating a TD Sequential column for a dataset of 1 million samples).
**Describe alternatives you've considered**
There are TD Sequential indicators on TradingView however I would like one in Python.
**Additional context**
I've created a Python TD Sequential that you may be able to work off.
I originally ported a JavaScript implementation but then added a simpler TD Sequential that could scale much better.
I've also heavily commented my simple implementation TDSequentialSimples() and added a Toy Dataset.
```python
#! /usr/bin/env python
import argparse
import os
import sys
import logging
import pandas
"""
TD Sequential
A Python3 implementation of DeMark's TD Sequential.
Input: Pandas DataFrame with 'high','low','open','close' header
Output: detailed DataFrame of Sequential.
Originally ported from JavaScript code.
Toy Dataset example:
python3 TDSequential.py
Add function to create sequences of sequentials:
shift = pandas.DataFrame(
[td.shift(-x).transpose().values.tolist()[0] for x in range(1,context.SEQWINDOW + 1)]).transpose().dropna().values.tolist()
"""
root = logging.getLogger()
root.setLevel(logging.INFO)
#TODO Add function to convert table of sequences to list of sequences k numbers long.
class TDSequential(object):
"""
Object class
"""
def __init__(self):
self.resetSetupCounterAfterCountdownHit13 = True
self.resetCountdownOnTDST = True
def TDSequentialSimples(self,ohlc):
'''Efficient Sequential Programmed by Troy.'''
# Return empty list if data is too small
if len(ohlc.values.tolist()) < 1: return []
# Convert all column labels to lower case
ohlc.columns = [x.lower() for x in ohlc.columns]
# Create buy and sell columns to count sequentials
ohlc['buy'] = ohlc['close'] * 0 ; ohlc['sell'] = ohlc['close'] * 0
# Convert buy and sell columns to int to save memory
ohlc.buy = ohlc.buy.astype(int) ; ohlc.sell = ohlc.sell.astype(int)
#Loop over ohlc data with reference to position (i)
for i, item in ohlc.iterrows():
# Only analyse data if there is enough trailing data
# (sequential looks back 4 and 5 canldes)
if i >= 5:
# Core part of sequential : < or > 4 bars earlier
closeLessThanCloseOf4BarsEarlier = ohlc.loc[i,'close'] < ohlc.loc[i-4,'close']
closeGreaterThanCloseOf4BarsEarlier = ohlc.loc[i,'close'] > ohlc.loc[i-4,'close']
# Logic for sequential : last 2 bars < or > 4 bars earlier
bear = ohlc.loc[i-1,'close'] > ohlc.loc[i-5,'close'] and closeLessThanCloseOf4BarsEarlier
bull = ohlc.loc[i-1,'close'] < ohlc.loc[i-5,'close'] and closeGreaterThanCloseOf4BarsEarlier
# Seuquential can either be last 2 bar pattern OR 4 bars earlier + current green count
if bear or (ohlc.loc[i-1,'buy'] > 0 and closeLessThanCloseOf4BarsEarlier):
# Add number to buy counter
ohlc.loc[i,'buy'] = (ohlc.loc[i-1,'buy']+1-1) % 9 + 1
# Same logic for red counter
elif bull or (ohlc.loc[i-1,'sell'] > 0 and closeGreaterThanCloseOf4BarsEarlier):
# Minus number from sell counter
ohlc.loc[i,'sell'] = (ohlc.loc[i-1,'sell']+1-1)%9+1
# Combine sell and buy columns into one sequential column
ohlc['sequential'] = (ohlc['buy']*-1) + (ohlc['sell']*1)
# convert output to integer instead of double
ohlc.sequential = ohlc.sequential.astype(int)
# Function is ammending ohlc without need for return (pass by value)
return ohlc['sequential']
def TDSequential(self,ohlc,simple=False):
if len(ohlc.values.tolist()) < 1: return []
result = []
ohlc.columns = [x.lower() for x in ohlc.columns]
for i, item in ohlc.iterrows():
resultObj = self.getResultObj()
if i >= 5:
resultObj['sellCoundownIndex'] = result[i - 1]['sellCoundownIndex']
resultObj['buyCoundownIndex'] = result[i - 1]['buyCoundownIndex']
resultObj['sellSetup'] = result[i - 1]['sellSetup']
resultObj['buySetup'] = result[i-1]['buySetup']
resultObj['TDSTBuy'] = result[i-1]['TDSTBuy']
resultObj['TDSTSell'] = result[i-1]['TDSTSell']
resultObj['sellSetupPerfection'] = result[i-1]['sellSetupPerfection']
resultObj['buySetupPerfection'] = result[i-1]['buySetupPerfection']
closeLessThanCloseOf4BarsEarlier = ohlc.loc[i,'close'] < ohlc.loc[i-4,'close']
closeGreaterThanCloseOf4BarsEarlier = ohlc.loc[i,'close'] > ohlc.loc[i-4,'close']
resultObj['bearishFlip'] = ohlc.loc[i-1,'close'] > ohlc.loc[i-5,'close'] and closeLessThanCloseOf4BarsEarlier
resultObj['bullishFlip'] = ohlc.loc[i-1,'close'] < ohlc.loc[i-5,'close'] and closeGreaterThanCloseOf4BarsEarlier
#NOTE bearishflip / bullishflip
if resultObj['bearishFlip'] or (result[i-1]['buySetupIndex'] > 0 and closeLessThanCloseOf4BarsEarlier):
resultObj['buySetupIndex'] = (result[i-1]['buySetupIndex']+1-1) % 9 + 1
resultObj['TDSTBuy'] = max(ohlc.loc[i,'high'], result[i-1]['TDSTBuy'])
elif resultObj['bullishFlip'] or (result[i-1]['sellSetupIndex'] > 0 and closeGreaterThanCloseOf4BarsEarlier):
resultObj['sellSetupIndex'] = (result[i-1]['sellSetupIndex']+1-1)%9+1
resultObj['TDSTSell'] = min(ohlc.loc[i,'low'], result[i-1]['TDSTSell'])
if resultObj['buySetupIndex'] == 9:
resultObj['buySetup'] = True
resultObj['sellSetup'] = False
resultObj['sellSetupPerfection'] = False
resultObj['buySetupPerfection'] = (ohlc.loc[i-1,'low'] < ohlc.loc[i-3,'low'] and ohlc.loc[i-1,'low'] < ohlc.loc[i-2,'low']) or (ohlc.loc[i,'low'] < ohlc.loc[i-3,'low'] and ohlc.loc[i,'low'] < ohlc.loc[i-2,'low'])
if resultObj['sellSetupIndex'] == 9:
resultObj['sellSetup'] = True
resultObj['buySetup'] = False
resultObj['buySetupPerfection'] = False
resultObj['sellSetupPerfection'] = (ohlc.loc[i-1,'high'] > ohlc.loc[i-3,'high'] and ohlc.loc[i-1,'high'] > ohlc.loc[i-2,'high']) or (ohlc.loc[i,'high'] > ohlc.loc[i-3,'high'] and ohlc.loc[i,'high'] > ohlc.loc[i-2,'high'])
resultObj = self.calculateTDBuyCountdown(result,resultObj,ohlc,item,i)
resultObj = self.calculateTDSellCountdown(result,resultObj,ohlc,item,i)
result+=[resultObj]
# if you run this seq with simple=True you get a single column output...
if simple:
#Return a simple version of the indicator...
#buySetupIndex,sellSetupIndex -> main sequential items
result2 = pandas.DataFrame()
#Had to convert dictionary format to pandas before calculating sequential
result = pandas.DataFrame(result)
#converted sell sequences to negative values and combined sell/buy sequences...
result2['sequential'] = (result['buySetupIndex']*-1)+(result['sellSetupIndex']*1)
result = result2
return result
#TODO maybe create an alt version of this with just a list...
def getResultObj(self):
resultObj ={
'buySetupIndex':0,
'sellSetupIndex':0,
'buyCoundownIndex':0,
'sellCoundownIndex':0,
'countdownIndexIsEqualToPreviousElement':True,
'sellSetup':False,
'buySetup':False,
'sellSetupPerfection':False,
'buySetupPerfection':False,
'bearishFlip':False,
'bullishFlip':False,
'TDSTBuy':0,
'TDSTSell':0,
'countdownResetForTDST':False}
return resultObj
def calculateTDSellCountdown(self,result,resultObj,ohlc,item,i):
#TODO used item['TDSTSell'] but item is ohcl??
if (result[i-1]['sellSetup'] and resultObj['buySetup']) or (self.resetCountdownOnTDST and ohlc.loc[i,'close']<resultObj['TDSTSell']):
resultObj['sellCoundownIndex']=0
resultObj['countdownResetForTDST']=True
elif resultObj['sellSetup']:
if ohlc.loc[i,'close']>ohlc.loc[i-2,'high']:
resultObj['sellCoundownIndex'] = (result[i-1]['sellCoundownIndex']+1-1)%13+1
resultObj['countdownIndexIsEqualToPreviousElement'] = False
if resultObj['sellCoundownIndex']==13 and result[i-1]['sellCoundownIndex']==13:
resultObj['sellCoundownIndex']=0
if self.resetSetupCounterAfterCountdownHit13 and (resultObj['sellCoundownIndex']==13 and resultObj['sellSetupIndex']>0):
resultObj['sellSetupIndex']=1
if resultObj['sellCoundownIndex']!=13 and result[i-1]['sellCoundownIndex']==13:
resultObj['sellSetup']=False
resultObj['sellSetupPerfection']=False
resultObj['sellCoundownIndex']=0
return resultObj
def calculateTDBuyCountdown(self,result,resultObj,ohlc,item,i):
if (result[i-1]['buySetup'] and resultObj['sellSetup']) or (self.resetCountdownOnTDST and ohlc.loc[i,'close']>resultObj['TDSTBuy']):
resultObj['buyCoundownIndex']=0
resultObj['countdownResetForTDST']=True
elif resultObj['buySetup']:
if ohlc.loc[i,'close']<ohlc.loc[i-2,'low']:
resultObj['buyCoundownIndex'] = (result[i-1]['buyCoundownIndex']+1-1)%13+1
resultObj['countdownIndexIsEqualToPreviousElement']=False
if resultObj['buyCoundownIndex']==13 and result[i-1]['buyCoundownIndex']==13:
resultObj['buyCoundownIndex']=0
if self.resetSetupCounterAfterCountdownHit13 and (resultObj['buyCoundownIndex']==13 and resultObj['buySetupIndex']>0):
resultObj['buySetupIndex']=1
if resultObj['buyCoundownIndex']!=13 and result[i-1]['buyCoundownIndex']==13:
resultObj['buySetup']=False
resultObj['buySetupPerfection']=False
resultObj['buyCoundownIndex']=0
return resultObj
"""
Main section
"""
def main(args):
logging.info('Testing TDSequentia Toy Dataset')
td = TDSequential()
data=[[3810.16,3701.23,3797.14,3642.0],
[3882.14,3796.4500000000003,3858.56,3750.4500000000003],
[3862.7400000000002,3857.57,3766.78,3730.0],
[3823.64,3767.2000000000003,3792.01,3703.57],
[3840.9900000000002,3790.09,3770.96,3751.0],
[4027.71,3771.12,3987.6,3740.0],
[4017.9,3987.62,3975.4500000000003,3921.53],
[4069.8,3976.76,3955.13,3903.0],
[4006.81,3955.4500000000003,3966.65,3930.04],
[3996.01,3966.06,3585.88,3540.0],
[3658.0,3585.88,3601.31,3465.0],
[3618.19,3601.31,3583.13,3530.0],
[3611.1,3584.1,3476.81,3441.3],
[3671.87,3477.56,3626.09,3467.02],
[3648.42,3626.08,3553.06,3516.62],
[3645.0,3553.06,3591.84,3543.51],
[3634.7000000000003,3591.84,3616.21,3530.39],
[3620.0,3613.32,3594.87,3565.75],
[3720.0,3594.87,3665.3,3594.23],
[3693.73,3665.75,3539.28,3475.0],
[3559.51,3539.26,3526.9,3475.5],
[3608.5,3526.88,3570.9300000000003,3434.85],
[3607.98,3570.41,3552.82,3514.5]]
#ohcl = pandas.read_csv('outputcsv.csv')
ohcl = pandas.DataFrame(data)
#NOTE mixed case will work... High/high works fine....
ohcl.columns = ['High','open','Close','low']
print(ohcl)
#sys.exit()
#x = pandas.DataFrame(td.TDSequential(ohcl,simple=True))
x = pandas.DataFrame(td.TDSequentialSimples(ohcl))
print(x.values.tolist())
print(x.columns.tolist())
#y = pandas.concat([ohcl,x],axis=1)
print(ohcl)
if __name__ == '__main__':
"""
Argparse section
"""
parser = argparse.ArgumentParser(description=__doc__)
#parser.add_argument('input', help='Text file in raw format. ')
#parser.add_argument('output', help='File to output list of sentences. ')
#parser.add_argument('-n',
# help='Minimum number of chars (default 5)',
# type=int, default=5, dest='minlength')
args = parser.parse_args()
main(args)
``` | 0easy
|
Title: Revise Application.cpu_usage() for atspi backend
Body: The initial implementation of linux.Application.cpu_usage() provides general cpu usage as reported by `ps` linux utility. From `ps` man:
> %cpu - cpu utilization of the process in
> "##.#" format. Currently, it is
> the CPU time used divided by the
> time the process has been running
> (cputime/realtime ratio), expressed
> as a percentage. It will not add
> up to 100% unless you are lucky.
We might investigate for a better option to report CPU usage over the specified period. | 0easy
|
Title: [ENH] Modify dataframe list with union of their categoricals
Body: The problem: vertical `pd.concat([df1, df2, ...])` converts categoricals to strings if the categories do not perfectly overlap between all dataframes.
The solution: use pandas' `union_categoricals` to modify the original dataframes to add all categories to all dataframes prior to the concatenation.
How to use `union_categoricals`:
From https://stackoverflow.com/questions/45639350/retaining-categorical-dtype-upon-dataframe-concatenation :
```python
import pandas as pd
from pandas.api.types import union_categoricals

# give both frames the union of categories so concat keeps the dtype
uc = union_categoricals([df1.x, df2.x])
df1.x = pd.Categorical(df1.x, categories=uc.categories)
df2.x = pd.Categorical(df2.x, categories=uc.categories)
```
The final function could iterate over all columns whose names match and are categorical. It can alternatively take an explicit list of particular columns. | 0easy
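A rough sketch of what such a helper might look like (name and signature are illustrative only):

```python
import pandas as pd
from pandas.api.types import union_categoricals

def unionize_categoricals(dfs, columns=None):
    """Give every dataframe in `dfs` the union of categories for the selected
    columns, so a later pd.concat keeps the categorical dtype."""
    if columns is None:
        # default: columns present in every frame and categorical in every frame
        columns = [
            col for col in dfs[0].columns
            if all(col in df.columns and isinstance(df[col].dtype, pd.CategoricalDtype)
                   for df in dfs)
        ]
    for col in columns:
        uc = union_categoricals([df[col] for df in dfs])
        for df in dfs:
            df[col] = pd.Categorical(df[col], categories=uc.categories)
    return dfs
```

After `unionize_categoricals([df1, df2])`, a subsequent `pd.concat([df1, df2])` should keep the shared columns categorical.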
|
Title: Use f-strings instead of `str.format`
Body: Currently, TensorLy uses the `format` method for string formatting. However, f-strings were introduced in Python 3.6, so the only reason to keep using the `format` method is to support Python 3.5, which reached end-of-life in 2020. We can therefore safely replace all calls to `str.format` with f-strings, which are more readable. See the example below:
```python
x = 1.
s1 = "{}".format(x)
s2 = f"{x}"
s3 = "{:.1f}".format(x)
s4 = f"{x:.1f}"
print(s1 == s2)
print(s3 == s4)
```
```
True
True
```
More information is available in this tutorial: https://realpython.com/python-f-strings/ | 0easy
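As a further illustration, a refactor of a typical formatting call might look like this (hypothetical snippet, not actual TensorLy code):

```python
rank, error = 10, 0.0123456

# str.format version
msg_old = "CP decomposition of rank {} converged (rel. error = {:.2e})".format(rank, error)

# f-string version: the expressions and format specs sit directly in the string
msg_new = f"CP decomposition of rank {rank} converged (rel. error = {error:.2e})"

assert msg_old == msg_new
```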
|
Title: error 'numpy.float64' object is not iterable
Body: How can I solve this error?

```
# imports implied by the calls below (not shown in the original snippet)
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from supervised.automl import AutoML

X_train, X_test, y_train, y_test = train_test_split(
df[df.columns[1:-1]],
df['Price'],
test_size=0.25,
random_state=123,
)
# train models with AutoML
automl = AutoML(mode="Explain")
automl.fit(X_train, y_train)
# compute the MSE on test data
predictions = automl.predict(X_test)
print("Test MAE:", mean_absolute_error(y_test, predictions))
```
This is the output:
Linear algorithm was disabled.
AutoML directory: AutoML_8
The task is regression with evaluation metric rmse
AutoML will use algorithms: ['Baseline', 'Decision Tree', 'Random Forest', 'Xgboost', 'Neural Network']
AutoML will ensemble available models
AutoML steps: ['simple_algorithms', 'default_algorithms', 'ensemble']
* Step simple_algorithms will try to check up to 2 models
1_Baseline rmse 35.216024 trained in 0.38 seconds
Exception while producing SHAP explanations. module 'numpy' has no attribute 'bool'.
`np.bool` was a deprecated alias for the builtin `bool`. To avoid this error in existing code, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
Continuing ...
2_DecisionTree rmse 19.749834 trained in 1.46 seconds
* Step default_algorithms will try to check up to 3 models
Exception while producing SHAP explanations. module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
Continuing ...
3_Default_Xgboost rmse 9.812939 trained in 6.66 seconds
There was an error during 3_Default_Xgboost training.
Please check AutoML_8\errors.md for details.
4_Default_NeuralNetwork rmse 13.374119 trained in 4.76 seconds
There was an error during 4_Default_NeuralNetwork training.
...
Ensemble rmse 9.812939 trained in 0.14 seconds
AutoML fit time: 19.64 seconds
AutoML best model: 3_Default_Xgboost
Test MAE: 6.264868275003538
And this is the content of `errors.md`:
## Error for 3_Default_Xgboost
'numpy.float64' object is not iterable
Traceback (most recent call last):
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\base_automl.py", line 1095, in _fit
trained = self.train_model(params)
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\base_automl.py", line 386, in train_model
mf.save(results_path, model_subpath)
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\model_framework.py", line 490, in save
preprocessing = [p.to_json() for p in self.preprocessings]
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\model_framework.py", line 490, in <listcomp>
preprocessing = [p.to_json() for p in self.preprocessings]
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\preprocessing\preprocessing.py", line 582, in to_json
preprocessing_params["scale_y"] = self._scale_y.to_json()
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\preprocessing\scale.py", line 76, in to_json
data_json["X_min_values"] = list(self.X_min_values)
TypeError: 'numpy.float64' object is not iterable
Please set a GitHub issue with above error message at: https://github.com/mljar/mljar-supervised/issues/new
## Error for 4_Default_NeuralNetwork
'numpy.float64' object is not iterable
Traceback (most recent call last):
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\base_automl.py", line 1095, in _fit
trained = self.train_model(params)
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\base_automl.py", line 386, in train_model
mf.save(results_path, model_subpath)
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\model_framework.py", line 490, in save
preprocessing = [p.to_json() for p in self.preprocessings]
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\model_framework.py", line 490, in <listcomp>
preprocessing = [p.to_json() for p in self.preprocessings]
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\preprocessing\preprocessing.py", line 582, in to_json
preprocessing_params["scale_y"] = self._scale_y.to_json()
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\preprocessing\scale.py", line 76, in to_json
data_json["X_min_values"] = list(self.X_min_values)
TypeError: 'numpy.float64' object is not iterable
Please set a GitHub issue with above error message at: https://github.com/mljar/mljar-supervised/issues/new
## Error for 5_Default_RandomForest
'numpy.float64' object is not iterable
Traceback (most recent call last):
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\base_automl.py", line 1095, in _fit
trained = self.train_model(params)
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\base_automl.py", line 386, in train_model
mf.save(results_path, model_subpath)
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\model_framework.py", line 490, in save
preprocessing = [p.to_json() for p in self.preprocessings]
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\model_framework.py", line 490, in <listcomp>
preprocessing = [p.to_json() for p in self.preprocessings]
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\preprocessing\preprocessing.py", line 582, in to_json
preprocessing_params["scale_y"] = self._scale_y.to_json()
File "c:\Users\user\anaconda3\envs\mljar\lib\site-packages\supervised\preprocessing\scale.py", line 76, in to_json
data_json["X_min_values"] = list(self.X_min_values)
TypeError: 'numpy.float64' object is not iterable
Please set a GitHub issue with above error message at: https://github.com/mljar/mljar-supervised/issues/new | 0easy
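For what it's worth, the traceback suggests `self.X_min_values` is a scalar `numpy.float64` rather than an array, so `list(...)` fails. A hedged sketch of the kind of guard that could avoid this in `scale.py` (illustrative only, not the actual mljar-supervised code):

```python
import numpy as np

def _as_list(values):
    # np.atleast_1d wraps a numpy.float64 scalar in a 1-element array and
    # leaves real arrays unchanged, so list() no longer raises TypeError
    return list(np.atleast_1d(values))

# e.g. inside Scale.to_json():
#     data_json["X_min_values"] = _as_list(self.X_min_values)
```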
|