text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---
Title: Add evobits to sponsors list
Body: The OA servers are sponsored by Evobits (https://evobitsit.com/); they need to be added to our list of sponsors.
Icon: https://evobitsit.com/img/favicon/favicon.ico
| 0easy
|
Title: It's not always clear that some results are paginated.
Body: When querying `sites { pages {} }`, grapple enforces a hard limit on the number of responses returned. When you get a response, there is no indication that there are more pages to be fetched or that a default limit was applied. I feel like we should update these interfaces to have some indication of total count, etc. | 0easy
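The missing pagination metadata could be sketched like this. The field names (`total_count`, `limit`, `has_more`) are illustrative assumptions, not grapple's actual schema:

```python
DEFAULT_LIMIT = 50

def paginate(items, offset=0, limit=DEFAULT_LIMIT):
    """Return one page plus the metadata the issue asks for."""
    page = items[offset:offset + limit]
    return {
        "items": page,
        "total_count": len(items),            # lets clients see the full size
        "limit": limit,                       # makes the applied limit explicit
        "has_more": offset + limit < len(items),
    }
```

With an envelope like this, a client can tell at a glance that 50 of 120 results came back and more pages remain.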
|
Title: Timezones in appends are ignored
Body: #### Arctic Version
```
1.62.0
```
#### Arctic Store
```
VersionStore
```
#### Platform and version
Python
#### Description of problem and/or code sample that reproduces the issue
After writing a DataFrame to VersionStore with timestamp index and no timezone, new appends to the same symbol will silently ignore the timezone of the new DataFrame being added. Below is a code example that reproduces the problem:
```python
import numpy as np
import pandas as pd
from arctic import Arctic
store = Arctic('localhost:27017')
store.initialize_library('mylib')
mylib = store['mylib']
ts1 = pd.Timestamp('2018-02-02')
df1 = pd.DataFrame(np.random.randn(1), index=[ts1])
print('df1.index.tz:', df1.index.tz)
mylib.write('s1', df1)
ts2 = pd.Timestamp('2018-02-03', tz='Europe/London')
df2 = pd.DataFrame(np.random.randn(1), index=[ts2])
print('df2.index.tz:', df2.index.tz)
mylib.append('s1', df2)
df3 = mylib.read('s1').data
print('df3.index.tz after append:', df3.index.tz)
mylib.write('s1', df2)
df3 = mylib.read('s1').data
print('df3.index.tz after write:', df3.index.tz)
``` | 0easy
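Until the append path handles this, a possible client-side workaround (an assumption, not an official fix) is to normalise the new frame's index to tz-naive UTC before appending, so it matches the tz-naive index the symbol was first written with:

```python
import pandas as pd

ts2 = pd.Timestamp('2018-02-03', tz='Europe/London')
df2 = pd.DataFrame([0.0], index=[ts2])

# Convert to UTC, then drop the tz so the index matches the
# tz-naive index the symbol was originally written with.
df2.index = df2.index.tz_convert('UTC').tz_localize(None)
print(df2.index.tz)  # None
```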
|
Title: Improve Access Logs
Body: We are moving some of our django applications to run inside a lambda. When we did so we lost access logs and gained logs that don't have much information. It would be useful if the Mangum Request/Response had additional log information or a mechanism was added to make logging access logs easy.
Example: WSGI access logs
```
GET /v2/resource HTTP/1.1" 200 686
GET /v2/resource/123 HTTP/1.1" 200 686
POST /v2/resource/123 HTTP/1.1" 400 128
```
Mangum Access Logs for the same thing
```
HTTPCycleState.REQUEST: 'http.response.start' event received from application.
HTTPCycleState.RESPONSE: 'http.response.body' event received from application.
HTTPCycleState.REQUEST: 'http.response.start' event received from application.
HTTPCycleState.RESPONSE: 'http.response.body' event received from application.
HTTPCycleState.REQUEST: 'http.response.start' event received from application.
HTTPCycleState.RESPONSE: 'http.response.body' event received from application.
``` | 0easy
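Until Mangum grows a hook for this, a common workaround is a small ASGI middleware that emits WSGI-style access lines. This is a sketch against the generic ASGI message protocol, not Mangum's internal API; the demo app and names below are illustrative:

```python
import asyncio

access_log = []  # collected lines; a real app would use the logging module

class AccessLogMiddleware:
    """Wrap any ASGI app and emit one WSGI-style access line per response."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        status = None
        body_len = 0

        async def send_wrapper(message):
            nonlocal status, body_len
            if message["type"] == "http.response.start":
                status = message["status"]
            elif message["type"] == "http.response.body":
                body_len += len(message.get("body", b""))
            await send(message)

        await self.app(scope, receive, send_wrapper)
        line = f'{scope["method"]} {scope["path"]} HTTP/1.1" {status} {body_len}'
        access_log.append(line)
        print(line)

async def demo_app(scope, receive, send):
    # Minimal ASGI app used only to exercise the middleware.
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"x" * 686})

async def noop_receive():
    return {"type": "http.request"}

async def noop_send(message):
    pass

scope = {"type": "http", "method": "GET", "path": "/v2/resource"}
asyncio.run(AccessLogMiddleware(demo_app)(scope, noop_receive, noop_send))
```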
|
Title: Typo in NeighborLoader
Body: ### 📚 Describe the documentation issue
There is a small typo in the docs (https://github.com/pyg-team/pytorch_geometric/blob/f5c829344517c823c24abb08ce2fc7cf00ff29f7/torch_geometric/loader/neighbor_loader.py#L17)
### Suggest a potential alternative/fix
It should say: ` More specifically, :obj:`num_neighbors` denotes how many neighbors are` | 0easy
|
Title: Add cache for list_libraries
Body: #### Description of problem and/or code sample that reproduces the issue
List libraries is O(n) in the number of libraries, so it is slow for large numbers of databases and libraries, and over high-latency links. Provide a cache of the libraries in a top-level database to speed up initial connections. | 0easy
|
Title: The correlation criteria for test pass/fail is inconsistent
Body: **Which version are you running? The latest version is on GitHub. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
0.3.45b0
```
**Do you have _TA Lib_ also installed in your environment?**
No
**Did you upgrade? Did the upgrade resolve the issue?**
Yes, forced a re-install. Issue remains.
**Describe the bug**
During my ad-hoc tests, I realized that the test compares the result with another lib (TA-Lib), which is fine. It also compares value by value through
```python
pdt.assert_series_equal(result, expected, check_names=False)
```
But the code also computes a correlation between the two results, which is not correct.
```python
corr = pandas_ta.utils.df_error_analysis(result, expected, col=CORRELATION)
self.assertGreater(corr, CORRELATION_THRESHOLD)
```
Correlation is not a good criterion because unrelated results can still be correlated.
The test I did was to change the TA-Lib results to the candle high values for the sma function. The test result is PASS. Why? Because the sma result is correlated with the high values.
> Example:
> For a correct PASS:
> test_sma(), correlation = 1.0
> AssertionError: Series are different
> Series values are different (0.01908 %)
> ∴ the series are slightly different, and the correlation is 1.
>
> For an incorrect PASS (false positive):
> test_sma(), correlation = 0.9991919523534346
> AssertionError: Series are different
> Series values are different (99.96184 %)
> ∴ the series are completely different, but the correlation is almost 1!
> The test result is a false positive, because the correlation criterion is not sound.
**To Reproduce**
get the demo code at:
https://github.com/ffhirata/pandas-ta/tree/testCorrelationFailure
Run:
> python -m unittest -v tests.test_indicator_overlap.main
or Check the report output in attachment, issue_report.pdf.
[issue_report.pdf](https://github.com/twopirllc/pandas-ta/files/7990009/issue_report.pdf)
**Expected behavior**
The criterion should be a value-by-value comparison; if a row is not equal, a justification should be provided, i.e., line by line.
The test results could be organized in campaigns with static reports, because of the manual justification.
or
The criterion should be based on the assertion output "Series values are different (xxx %)".
**Screenshots**
none
**Additional context**
Confidence in the library has to be reviewed.
Thanks for using Pandas TA!
| 0easy
|
Title: [Feature request] Add apply_to_images to SuperPixels
Body: | 0easy
|
Title: Inference database dummy data fill ability
Body: There's some dummy data [here](https://github.com/LAION-AI/Open-Assistant/tree/main/backend/test_data) which was used for filling the data collection backend DB; it could maybe be reused.
In that backend we had a setting, used on server start, to determine whether to fill with data; see [here](https://github.com/LAION-AI/Open-Assistant/blob/main/backend/oasst_backend/config.py#L194) | 0easy
|
Title: [Documentation] Add text on how to create custom transform especially when you need to pass extra data
Body: Source of the request: https://www.reddit.com/r/computervision/comments/1gju7oe/need_help_from_albumentations_users/ | 0easy
|
Title: Better support for bytes with `Should Contain`
Body: To use `Should Contain` with bytes, the item that is searched for needs to be converted to bytes first.
```robot
Test
    ${binary1Var} =    Convert To Bytes    \x00\x01\x02\xA0\xB0
    Should Contain    ${binary1Var}    \xA0
```
result:
```
TypeError: a bytes-like object is required, not 'str'
```
For integers the value is converted automatically:
```robot
Test Should Contain int
    ${binary1Var} =    Convert To Bytes    \x00\x01\x02\xA0\xB0
    ${item} =    Convert To Bytes    \xC0
    Should Contain    ${binary1Var}    ${1000}
```
this fails with:
```
ValueError: byte must be in range(0, 256)
```
this is expected. | 0easy
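The requested coercion could look roughly like this. This is a sketch of the idea, not Robot Framework's actual implementation:

```python
def coerce_item_for_bytes(container, item):
    """Convert `item` so `in` works against a bytes container."""
    if isinstance(container, (bytes, bytearray)):
        if isinstance(item, str):
            # latin-1 maps code points 0-255 directly to single bytes,
            # mirroring how `Convert To Bytes` treats \xNN escapes.
            return item.encode("latin-1")
        if isinstance(item, int):
            return bytes([item])  # ValueError outside 0..255, as today
    return item

binary = b"\x00\x01\x02\xa0\xb0"
assert coerce_item_for_bytes(binary, "\xa0") in binary
assert coerce_item_for_bytes(binary, 0xA0) in binary
```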
|
Title: [Feature request] Add apply_to_images to ToRGB
Body: | 0easy
|
Title: Bad error message when using Rebot with a non-existing JSON output file
Body: To reproduce:
```
$ rebot non_existing.json
[ ERROR ] Reading JSON source 'non_existing.json' failed: Loading JSON data failed: Invalid JSON data: JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Try --help for usage information.
```
The error is much better when trying to use a non-existing XML output file:
```
$ rebot non_existing.xml
[ ERROR ] Reading XML source 'non_existing.xml' failed: No such file or directory
Try --help for usage information.
``` | 0easy
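The JSON reader could produce the same style of error as the XML path by checking for the file up front. A minimal sketch of the idea (not Rebot's actual code; the function name is illustrative):

```python
import json
from pathlib import Path

def read_json_source(path):
    p = Path(path)
    if not p.is_file():
        # Fail with the same kind of message the XML reader gives.
        raise IOError(f"Reading JSON source '{path}' failed: "
                      f"No such file or directory")
    return json.loads(p.read_text())
```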
|
Title: [DOC] Add a reference for MERLIN anomaly detector
Body: ### Describe the issue linked to the documentation
We recently added the MERLIN anomaly detection algorithm, but there is no reference to the original paper:
https://ieeexplore.ieee.org/document/9338376
This needs to be added to the docstrings.
### Suggest a potential alternative/fix
_No response_ | 0easy
|
Title: should check both 'http_proxy' and 'HTTP_PROXY' when get proxy info from environment variables
Body: ## SUMMARY
I just faced an embarrassing problem: the command `st2 action list` kept returning 503.
I tried `st2 --debug action list` and the `HTTP_PROXY` line (**uppercase**) in the output was empty. This misled me into believing that no proxy was set (when in fact one was).
I finally found that the root cause was a **lowercase** `http_proxy` env variable set in the system.
So I think it would be friendlier to check both 'http_proxy' and 'HTTP_PROXY' when getting proxy info while running the st2 command with the `--debug` option.
related code:
https://github.com/StackStorm/st2/blob/5c4e5f8e93c7ed83c4aa4d196085c2912ae7b285/st2client/st2client/base.py#L404
### STACKSTORM VERSION
3.3
##### OS, environment, install method
centos 7.6, custom install
## Steps to reproduce the problem
1) set **lowercase** proxy environment variable, eg. `export http_proxy='127.0.0.1:8888'`
2) run `st2 --debug action list`
## Expected Results
proxy info should be printed
## Actual Results
Proxy info did not show; the value of HTTP_PROXY was empty.
Thanks!
| 0easy
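A sketch of the suggested lookup, checking the lowercase name first as most HTTP clients do (illustrative only, not st2's actual code):

```python
import os

def get_proxy_setting(name="http_proxy"):
    """Return the proxy URL from either the lower- or uppercase variable."""
    return os.environ.get(name.lower()) or os.environ.get(name.upper())
```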
|
Title: Consider to remove TRANSITION_TARGETS
Body: Instead, change ``BaseOrder.get_transition_name(target)`` to accept the transition function, and decorate each transition function with a verbose name. | 0easy
|
Title: How to compute the band of VWAP?
Body: **Which version are you running? The latest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
0.3.14b0
```
**Do you have _TA Lib_ also installed in your environment?**
```sh
$ pip list
...
TA-Lib 0.4.21
...
```
**Did you upgrade? Did the upgrade resolve the issue?**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Describe the bug**
The help only shows VWAP. But there is no mentioning the bands. On the 2nd link, it mentions `sigma = sqrt(sum_{i=1}^N(x_i-x_bar)^2/N)`. But this is not weighted by the volume. Since the x_bar is averaged using volume, should the computation of sigma also weighted?
```python
vwap(high, low, close, volume, anchor=None, offset=None, **kwargs)
Volume Weighted Average Price (VWAP)
The Volume Weighted Average Price that measures the average typical price
by volume. It is typically used with intraday charts to identify general
direction.
Sources:
https://www.tradingview.com/wiki/Volume_Weighted_Average_Price_(VWAP)
https://www.tradingtechnologies.com/help/x-study/technical-indicator-definitions/volume-weighted-average-price-vwap/
https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:vwap_intraday
Calculation:
tp = typical_price = hlc3(high, low, close)
tpv = tp * volume
VWAP = tpv.cumsum() / volume.cumsum()
Args:
high (pd.Series): Series of 'high's
low (pd.Series): Series of 'low's
close (pd.Series): Series of 'close's
volume (pd.Series): Series of 'volume's
anchor (str): How to anchor VWAP. Depending on the index values, it will
implement various Timeseries Offset Aliases as listed here:
https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#timeseries-offset-aliases
Default: "D".
offset (int): How many periods to offset the result. Default: 0
Kwargs:
fillna (value, optional): pd.DataFrame.fillna(value)
fill_method (value, optional): Type of fill method
Returns:
pd.Series: New feature generated.
```
**To Reproduce**
NA
**Expected behavior**
See above.
**Screenshots**
NA
**Additional context**
NA | 0easy
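A volume-weighted sigma consistent with the VWAP mean can be computed as follows. This is one common convention, not necessarily how pandas-ta implements VWAP bands:

```python
import math

def vwap_with_bands(tp, volume, n_std=1.0):
    """Volume-weighted mean and volume-weighted standard deviation.

    tp: typical prices (hlc3); volume: matching volumes.
    """
    vsum = sum(volume)
    vwap = sum(p * v for p, v in zip(tp, volume)) / vsum
    # Weight the squared deviations by volume, mirroring the mean.
    var = sum(v * (p - vwap) ** 2 for p, v in zip(tp, volume)) / vsum
    sigma = math.sqrt(var)
    return vwap - n_std * sigma, vwap, vwap + n_std * sigma
```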
|
Title: How to properly raise issues
Body: ### How to properly raise issues
1. Before asking, try to solve the problem yourself first, e.g. with a search engine (Google/Bing, etc.). Only open an issue if you really cannot solve it on your own, and before doing so, please carefully read "[How To Ask Questions The Smart Way](https://github.com/ryanhanwu/How-To-Ask-Questions-The-Smart-Way/blob/main/README-zh_CN.md)";
2. When asking, you must provide the following information so that the problem can be located: OS platform, the step where the problem occurs, Python version, torch version, branch used, dataset used, a screenshot proving dataset authorization, a description of the problem, and complete log screenshots;
3. Be friendly when asking.
### Which issues will be closed
1. "Give me the code" requests;
2. Anything about one-click or bundled environment packages;
3. Incomplete information;
4. Use of unauthorized datasets (game characters/anime characters are not counted in this category for now, but still train with caution; if the rights holder can be reached, you must contact them first and verify authorization).
### Reference format (can be copied directly)
**Platform:** the platform you are using, e.g. Windows
**Step where the problem occurs:** installing dependencies / inference / training / preprocessing / other
**Python version:** your Python version, check with `python -V`
**PyTorch version:** your PyTorch version, check with `pip show torch`
**Branch used:** the code branch you are using
**Dataset used:** the source of your training dataset; leave empty if you are only doing inference
**Authorization screenshot:**
Add a screenshot proving authorization here; leave empty if the dataset is your own voice, a game/anime character, or you have no need to train
**Problem description:** describe your problem here, the more detailed the better
**Log screenshots:**
Add complete log screenshots here to help locate the problem | 0easy
|
Title: Docs: Add information on the default OpenAPI files available
Body: ### Summary
In https://docs.litestar.dev/latest/usage/openapi we don't really *specifically* say anything about the availability of `$openapi_path/openapi.{json,yaml,yml}` except in 2 places, 1 of which says it is deprecated:
- https://docs.litestar.dev/latest/usage/openapi/ui_plugins.html#providing-a-subclass-of-openapicontroller (but just glossing over)
- https://docs.litestar.dev/latest/usage/openapi/ui_plugins.html#configuring-the-openapi-root-path (but just glossing over)
It would be nice to document this for when people need it, e.g. for the Scalar API client's collection import feature | 0easy
|
Title: add model optional args
Body:
### Description
Users should have the ability to add optional model arguments in the YAML file.
| 0easy
|
Title: Enhancement: Add a rejection sampler
Body: ### Summary
Currently the `batch` method fails with a validation error if any of the generated rows fail the schema validators. To allow use of the package in a testing environment, it would be useful to be able to generate a dataframe of any size using a rejection sampler method. This method should store the random seeds of successful builds in order to reproduce the same dataframe each time.
I have created a class that performs these actions included below. Given this is something I have needed for my project, it could be a useful feature for others wanting to use Polyfactory for testing. I built it based off the original pydantic factories package, but I imagine it would be pretty similar for the additional Factory options in Polyfactory.
### Basic Example
```python
import time
import json

import pandas as pd
from polyfactory.factories.pydantic_factory import ModelFactory
from pydantic import ValidationError


class RejectionSampler:
    """Create a synthetic dataset based off the pydantic schema,
    dropping rows that do not meet the validation set up in the schema.

    Parameters
    ----------
    factory (ModelFactory): ModelFactory created from a pydantic schema
    size (int): Length of dataset to create
    """

    def __init__(self, factory: ModelFactory, size: int) -> None:
        self.factory = factory
        self.size = size
        self.used_seeds = []

    def setup_seeds(self):
        start = time.time()
        synthetic_data = pd.DataFrame()
        # Start the seed at 1 and increase it by 1 on each pass/fail of
        # factory.build() to ensure reproducibility.
        seed_no = 1
        for _ in range(self.size):
            result = None
            while not result:
                try:
                    self.factory.seed_random(seed_no)
                    result = self.factory.build()
                    result_dict = json.loads(result.json())
                    synthetic_data = pd.concat(
                        [synthetic_data, pd.DataFrame(result_dict, index=[0])]
                    )
                    self.used_seeds += [seed_no]
                    seed_no += 1
                    result = True
                except ValidationError:
                    seed_no += 1
        end = time.time()
        print(f"finished, took {seed_no - 1} attempts to generate {self.size} rows")
        print(f"took {end - start} seconds to setup seeds")

    def generate(self):
        start = time.time()
        synthetic_data = pd.DataFrame()
        for seed in self.used_seeds:
            self.factory.seed_random(seed)
            result = self.factory.build()
            result_dict = json.loads(result.json())
            synthetic_data = pd.concat(
                [synthetic_data, pd.DataFrame(result_dict, index=[0])]
            )
        end = time.time()
        print(f"took {end - start} seconds to generate new data")
        return synthetic_data
```
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | 0easy
|
Title: [UX] sky exec CLI does not support dryrun, while sdk does
Body: ```bash
sky exec --dryrun test echo
Usage: sky exec [OPTIONS] [CLUSTER] [ENTRYPOINT]...
Try 'sky exec -h' for help.
Error: No such option: --dryrun Did you mean --detach-run?
```
While [sdk does](https://github.com/skypilot-org/skypilot/blob/2c4849b6f73499740f495f84a29ac4af98d25073/sky/client/sdk.py#L481)
Version:
```
sky -c
skypilot, commit 2c4849b6f73499740f495f84a29ac4af98d25073
``` | 0easy
|
Title: Feature: add `xontext` to builtin aliases
Body: I found that [`xc` from xontrib-rc-awesome](https://github.com/anki-code/xontrib-rc-awesome/blob/c0a0e159a14a8560e17b70cd8ac23c451274fde0/xontrib/rc_awesome.xsh#L98-L115) is a very useful alias:
```xsh
from shutil import which as _which
# Alias to get Xonsh Context.
# Read more: https://github.com/anki-code/xonsh-cheatsheet/blob/main/README.md#install-xonsh-with-package-and-environment-management-system
@aliases.register("xc")
def _alias_xc():
"""Get xonsh context."""
print('xpython:', imp.sys.executable, '#', $(@(imp.sys.executable) -V).strip())
print('xpip:', $(which xpip).strip()) # xpip - xonsh's builtin to install packages in current session xonsh environment.
print('')
print('xonsh:', $(which xonsh))
print('python:', $(which python), '#', $(python -V).strip())
print('pip:', $(which pip))
if _which('pytest'):
print('pytest:', $(which pytest))
print('')
envs = ['CONDA_DEFAULT_ENV']
for ev in envs:
if (val := __xonsh__.env.get(ev)):
print(f'{ev}:', val)
```
It would be great to have an `xontext` alias:
```xsh
xontext
# [Current xonsh session]
# xpython: /Users/pc/.local/xonsh-env/bin/python # Python 3.12.2
# xpip: /Users/pc/.local/xonsh-env/bin/python -m pip
#
# [Current commands environment]
# xonsh: /Users/pc/.local/xonsh-env/xbin/xonsh # https://github.com/anki-code/xonsh-install
# python: /opt/homebrew/bin/python # Python 3.11.6
# pip: /opt/homebrew/bin/pip
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Raise helpful error message when importing optional dependencies that are not installed.
Body: ### Description
<!-- Please provide a general introduction to the issue/proposal. -->
If cartopy is installed without optional dependencies, it would be nice to have a helpful error message when trying to use functions that depend on such libraries. Currently, just an ImportError is raised. It would be more useful to notify the user of the optional dependency (possibly including a documentation link).
<!--
If you are reporting a bug, attach the *entire* traceback from Python.
If you are proposing an enhancement/new feature, provide links to related articles, reference examples, etc.
If you are asking a question, please ask on StackOverflow and use the cartopy tag. All cartopy
questions on StackOverflow can be found at https://stackoverflow.com/questions/tagged/cartopy
-->
#### Code to reproduce
```python
from cartopy.crs import epsg
epsg(32633)
```
#### Traceback
```python
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-a9ebc24438bc> in <module>
----> 1 epsg(32633)
~/miniconda3/envs/spechomo/lib/python3.8/site-packages/cartopy/crs.py in epsg(code)
2555 """
2556 import cartopy._epsg
-> 2557 return cartopy._epsg._EPSGProjection(code)
~/miniconda3/envs/spechomo/lib/python3.8/site-packages/cartopy/_epsg.py in __init__(self, code)
40 class _EPSGProjection(ccrs.Projection):
41 def __init__(self, code):
---> 42 import pyepsg
43 projection = pyepsg.get(code)
44 if not (isinstance(projection, pyepsg.ProjectedCRS) or
ModuleNotFoundError: No module named 'pyepsg'
```
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
Linux-64
### Cartopy version
cartopy 0.18.0 py38h88488af_4 conda-forge
### conda list
```
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_gnu conda-forge
argon2-cffi 20.1.0 py38h1e0a361_2 conda-forge
async_generator 1.10 py_0 conda-forge
attrs 20.2.0 pyh9f0ad1d_0 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 py_2 conda-forge
backports.functools_lru_cache 1.6.1 py_0 conda-forge
bleach 3.2.1 pyh9f0ad1d_0 conda-forge
blosc 1.20.1 he1b5a44_0 conda-forge
boost-cpp 1.74.0 h9359b55_0 conda-forge
branca 0.4.1 py_0 conda-forge
brotli 1.0.9 he1b5a44_2 conda-forge
brotlipy 0.7.0 py38h8df0ef7_1001 conda-forge
brunsli 0.1 he1b5a44_0 conda-forge
bzip2 1.0.8 h516909a_3 conda-forge
c-ares 1.16.1 h516909a_3 conda-forge
ca-certificates 2020.6.20 hecda079_0 conda-forge
cairo 1.16.0 h488836b_1006 conda-forge
cartopy 0.18.0 py38h88488af_4 conda-forge
certifi 2020.6.20 py38h924ce5b_2 conda-forge
cffi 1.14.3 py38h1bdcb99_1 conda-forge
cfitsio 3.470 hce51eda_7 conda-forge
chardet 3.0.4 py38h924ce5b_1008 conda-forge
charls 2.1.0 he1b5a44_2 conda-forge
click 7.1.2 pyh9f0ad1d_0 conda-forge
click-plugins 1.1.1 py_0 conda-forge
cligj 0.6.0 pyh9f0ad1d_0 conda-forge
cloudpickle 1.6.0 py_0 conda-forge
cmocean 2.0 pypi_0 pypi
colorama 0.4.4 pyh9f0ad1d_0 conda-forge
cryptography 3.1.1 py38hb23e4d4_1 conda-forge
curl 7.71.1 he644dc0_8 conda-forge
cycler 0.10.0 py_2 conda-forge
cython 0.29.21 pypi_0 pypi
cytoolz 0.11.0 py38h1e0a361_1 conda-forge
dask-core 2.30.0 py_0 conda-forge
decorator 4.4.2 py_0 conda-forge
defusedxml 0.6.0 py_0 conda-forge
dill 0.3.2 pyh9f0ad1d_0 conda-forge
entrypoints 0.3 py38h32f6830_1002 conda-forge
expat 2.2.9 he1b5a44_2 conda-forge
fast-histogram 0.9 pypi_0 pypi
fiona 1.8.17 py38h676c6b2_1 conda-forge
folium 0.11.0 py_0 conda-forge
fontconfig 2.13.1 h1056068_1002 conda-forge
freetype 2.10.3 he06d7ca_0 conda-forge
freexl 1.0.5 h516909a_1002 conda-forge
future 0.18.2 pypi_0 pypi
gdal 3.1.3 py38h9edfc58_1 conda-forge
geoarray 0.10.0 py38h32f6830_0 conda-forge
geojson 2.5.0 py_0 conda-forge
geopandas 0.8.1 py_0 conda-forge
geos 3.8.1 he1b5a44_0 conda-forge
geotiff 1.6.0 h5d11630_3 conda-forge
gettext 0.19.8.1 hf34092f_1003 conda-forge
giflib 5.2.1 h516909a_2 conda-forge
gitdb 4.0.5 py_0 conda-forge
gitpython 3.1.9 py_0 conda-forge
glib 2.66.1 he1b5a44_1 conda-forge
hdf4 4.2.13 hf30be14_1003 conda-forge
hdf5 1.10.6 nompi_h54c07f9_1110 conda-forge
icu 67.1 he1b5a44_0 conda-forge
idna 2.10 pyh9f0ad1d_0 conda-forge
imagecodecs 2020.5.30 py38h63741c2_4 conda-forge
imageio 2.9.0 py_0 conda-forge
importlib-metadata 2.0.0 py_1 conda-forge
importlib_metadata 2.0.0 1 conda-forge
ipykernel 5.3.4 py38h1cdfbd6_1 conda-forge
ipympl 0.5.8 pyh9f0ad1d_0 conda-forge
ipython 7.18.1 py38h1cdfbd6_1 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
ipywidgets 7.5.1 pyh9f0ad1d_1 conda-forge
iso8601 0.1.13 pypi_0 pypi
jedi 0.17.2 py38h32f6830_1 conda-forge
jinja2 2.11.2 pyh9f0ad1d_0 conda-forge
joblib 0.17.0 py_0 conda-forge
jpeg 9d h516909a_0 conda-forge
json-c 0.13.1 hbfbb72e_1002 conda-forge
json5 0.9.5 pyh9f0ad1d_0 conda-forge
jsonschema 3.2.0 py_2 conda-forge
jupyter_client 6.1.7 py_0 conda-forge
jupyter_core 4.6.3 py38h32f6830_2 conda-forge
jupyterlab 2.2.8 py_0 conda-forge
jupyterlab-git 0.22.1 py_0 conda-forge
jupyterlab_pygments 0.1.2 pyh9f0ad1d_0 conda-forge
jupyterlab_server 1.2.0 py_0 conda-forge
jxrlib 1.1 h516909a_2 conda-forge
kealib 1.4.13 h33137a7_1 conda-forge
kiwisolver 1.2.0 py38hbf85e49_1 conda-forge
krb5 1.17.1 hfafb76e_3 conda-forge
lcms2 2.11 hbd6801e_0 conda-forge
ld_impl_linux-64 2.35 h769bd43_9 conda-forge
lerc 2.2 he1b5a44_0 conda-forge
libaec 1.0.4 he1b5a44_1 conda-forge
libblas 3.8.0 17_openblas conda-forge
libcblas 3.8.0 17_openblas conda-forge
libcurl 7.71.1 hcdd3856_8 conda-forge
libdap4 3.20.6 h1d1bd15_1 conda-forge
libedit 3.1.20191231 he28a2e2_2 conda-forge
libev 4.33 h516909a_1 conda-forge
libffi 3.2.1 he1b5a44_1007 conda-forge
libgcc-ng 9.3.0 h5dbcf3e_17 conda-forge
libgdal 3.1.3 h670eac6_1 conda-forge
libgfortran-ng 7.5.0 hae1eefd_17 conda-forge
libgfortran4 7.5.0 hae1eefd_17 conda-forge
libglib 2.66.1 h0dae87d_1 conda-forge
libgomp 9.3.0 h5dbcf3e_17 conda-forge
libiconv 1.16 h516909a_0 conda-forge
libkml 1.3.0 h74f7ee3_1012 conda-forge
liblapack 3.8.0 17_openblas conda-forge
libnetcdf 4.7.4 nompi_h84807e1_105 conda-forge
libnghttp2 1.41.0 h8cfc5f6_2 conda-forge
libopenblas 0.3.10 h5a2b251_0
libpng 1.6.37 hed695b0_2 conda-forge
libpq 12.3 h5513abc_2 conda-forge
libsodium 1.0.18 h516909a_1 conda-forge
libspatialindex 1.9.3 he1b5a44_3 conda-forge
libspatialite 5.0.0 h4dde289_0 conda-forge
libssh2 1.9.0 hab1572f_5 conda-forge
libstdcxx-ng 9.3.0 h2ae2ef3_17 conda-forge
libtiff 4.1.0 hc7e4089_6 conda-forge
libuuid 2.32.1 h14c3975_1000 conda-forge
libuv 1.40.0 hd18ef5c_0 conda-forge
libwebp-base 1.1.0 h516909a_3 conda-forge
libxcb 1.13 h14c3975_1002 conda-forge
libxml2 2.9.10 h68273f3_2 conda-forge
libzopfli 1.0.3 he1b5a44_0 conda-forge
lxml 4.6.1 pypi_0 pypi
lz4-c 1.9.2 he1b5a44_3 conda-forge
markupsafe 1.1.1 py38h8df0ef7_2 conda-forge
matplotlib-base 3.3.2 py38h4d1ce4f_1 conda-forge
mistune 0.8.4 py38h1e0a361_1002 conda-forge
mpl-scatter-density 0.7 pypi_0 pypi
munch 2.5.0 py_0 conda-forge
natsort 7.0.1 py_0 conda-forge
nbclient 0.5.1 py_0 conda-forge
nbconvert 6.0.7 py38h32f6830_1 conda-forge
nbdime 2.1.0 py_0 conda-forge
nbformat 5.0.8 py_0 conda-forge
ncurses 6.2 he1b5a44_2 conda-forge
nest-asyncio 1.4.1 py_0 conda-forge
nested_dict 1.61 pyh9f0ad1d_0 conda-forge
networkx 2.5 py_0 conda-forge
nodejs 14.13.1 h568c755_0 conda-forge
notebook 6.1.4 py38h32f6830_1 conda-forge
numpy 1.19.2 py38hf89b668_1 conda-forge
obspy 1.2.2 pypi_0 pypi
olefile 0.46 pyh9f0ad1d_1 conda-forge
openjpeg 2.3.1 h981e76c_3 conda-forge
openssl 1.1.1h h516909a_0 conda-forge
packaging 20.4 pyh9f0ad1d_0 conda-forge
pandas 1.1.3 py38hc5bc63f_2 conda-forge
pandoc 2.11.0.2 hd18ef5c_0 conda-forge
pandocfilters 1.4.2 py_1 conda-forge
parso 0.7.1 pyh9f0ad1d_0 conda-forge
patsy 0.5.1 py_0 conda-forge
pcre 8.44 he1b5a44_0 conda-forge
pexpect 4.8.0 pyh9f0ad1d_2 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 8.0.0 py38h9776b28_0 conda-forge
pip 20.2.4 py_0 conda-forge
pixman 0.38.0 h516909a_1003 conda-forge
plotly 4.11.0 pypi_0 pypi
poppler 0.89.0 h4190859_1 conda-forge
poppler-data 0.4.9 1 conda-forge
postgresql 12.3 h8573dbc_2 conda-forge
proj 7.1.1 h966b41f_3 conda-forge
prometheus_client 0.8.0 pyh9f0ad1d_0 conda-forge
prompt-toolkit 3.0.8 py_0 conda-forge
pthread-stubs 0.4 h14c3975_1001 conda-forge
ptitprince 0.2.5 pypi_0 pypi
ptvsd 4.3.2 py38h1e0a361_2 conda-forge
ptyprocess 0.6.0 py_1001 conda-forge
py-tools-ds 0.15.7 py38h32f6830_0 conda-forge
pycparser 2.20 pyh9f0ad1d_2 conda-forge
pygments 2.7.1 py_0 conda-forge
pyhamcrest 2.0.2 pypi_0 pypi
pyopenssl 19.1.0 py_1 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pyproj 2.6.1.post1 py38h56787f0_3 conda-forge
pyrsistent 0.17.3 py38h1e0a361_1 conda-forge
pyrsr 0.3.8 py_0 conda-forge
pyshp 2.1.2 pyh9f0ad1d_0 conda-forge
pysocks 1.7.1 py38h924ce5b_2 conda-forge
python 3.8.6 h852b56e_0_cpython conda-forge
python-dateutil 2.8.1 py_0 conda-forge
python_abi 3.8 1_cp38 conda-forge
pytz 2020.1 pyh9f0ad1d_0 conda-forge
pywavelets 1.1.1 py38hab2c0dc_3 conda-forge
pyyaml 5.3.1 py38h8df0ef7_1 conda-forge
pyzmq 19.0.2 py38ha71036d_2 conda-forge
readline 8.0 he28a2e2_2 conda-forge
requests 2.24.0 pyh9f0ad1d_0 conda-forge
retrying 1.3.3 pypi_0 pypi
rtree 0.9.4 py38h08f867b_1 conda-forge
scikit-image 0.17.2 py38hc5bc63f_3 conda-forge
scikit-learn 0.23.2 py38h519568a_1 conda-forge
scipy 1.5.2 py38h8c5af15_2 conda-forge
seaborn 0.11.0 0 conda-forge
seaborn-base 0.11.0 py_0 conda-forge
send2trash 1.5.0 py_0 conda-forge
setuptools 49.6.0 py38h924ce5b_2 conda-forge
shapely 1.7.1 py38hc7361b7_1 conda-forge
six 1.15.0 pyh9f0ad1d_0 conda-forge
smmap 3.0.4 pyh9f0ad1d_0 conda-forge
snappy 1.1.8 he1b5a44_3 conda-forge
specclassify 0.2.6 pyh9f0ad1d_0 conda-forge
spechomo 0.8.2 py_0 conda-forge
spechomo-eval 0.3.3 dev_0 <develop>
specidx 0.2.8 dev_0 <develop>
spectral 0.22.1 pyh9f0ad1d_0 conda-forge
sqlalchemy 1.3.20 pypi_0 pypi
sqlite 3.33.0 h4cf870e_1 conda-forge
statsmodels 0.12.0 py38hab2c0dc_1 conda-forge
tabulate 0.8.7 pyh9f0ad1d_0 conda-forge
terminado 0.9.1 py38h32f6830_1 conda-forge
testpath 0.4.4 py_0 conda-forge
threadpoolctl 2.1.0 pyh5ca1d4c_0 conda-forge
tifffile 2020.10.1 py_0 conda-forge
tiledb 2.1.1 h47b529c_1 conda-forge
tk 8.6.10 hed695b0_1 conda-forge
toolz 0.11.1 py_0 conda-forge
tornado 6.0.4 py38h1e0a361_2 conda-forge
tqdm 4.50.2 pyh9f0ad1d_0 conda-forge
traitlets 5.0.5 py_0 conda-forge
tzcode 2020a h516909a_0 conda-forge
urllib3 1.25.11 py_0 conda-forge
wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge
webencodings 0.5.1 py_1 conda-forge
wheel 0.35.1 pyh9f0ad1d_0 conda-forge
widgetsnbextension 3.5.1 py38h32f6830_4 conda-forge
xarray 0.16.1 pypi_0 pypi
xerces-c 3.2.3 hfe33f54_1 conda-forge
xeus 0.24.2 h841dea4_1 conda-forge
xeus-python 0.8.6 py38h2078d81_1 conda-forge
xorg-kbproto 1.0.7 h14c3975_1002 conda-forge
xorg-libice 1.0.10 h516909a_0 conda-forge
xorg-libsm 1.2.3 h84519dc_1000 conda-forge
xorg-libx11 1.6.12 h516909a_0 conda-forge
xorg-libxau 1.0.9 h14c3975_0 conda-forge
xorg-libxdmcp 1.1.3 h516909a_0 conda-forge
xorg-libxext 1.3.4 h516909a_0 conda-forge
xorg-libxrender 0.9.10 h516909a_1002 conda-forge
xorg-renderproto 0.11.1 h14c3975_1002 conda-forge
xorg-xextproto 7.3.0 h14c3975_1002 conda-forge
xorg-xproto 7.0.31 h14c3975_1007 conda-forge
xz 5.2.5 h516909a_1 conda-forge
yaml 0.2.5 h516909a_0 conda-forge
zeromq 4.3.3 he1b5a44_2 conda-forge
zfp 0.5.5 he1b5a44_4 conda-forge
zipp 3.3.1 py_0 conda-forge
zlib 1.2.11 h516909a_1010 conda-forge
zstd 1.4.5 h6597ccf_2 conda-forge
```
</details>
| 0easy
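One common pattern for the behaviour requested above is a small helper that wraps the import and raises a friendlier message. This is a generic sketch, not cartopy's code; the helper name and message are illustrative:

```python
import importlib

def import_optional(name, used_for, hint=None):
    """Import an optional dependency or fail with a helpful message."""
    try:
        return importlib.import_module(name)
    except ImportError as err:
        msg = (f"The optional dependency {name!r} is required for "
               f"{used_for} but is not installed.")
        if hint:
            msg += f" {hint}"
        raise ImportError(msg) from err
```

Call sites such as `_EPSGProjection.__init__` could then do something like `pyepsg = import_optional("pyepsg", "cartopy.crs.epsg()", "See the cartopy installation docs.")`.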
|
Title: A little enhancement for win32_hooks.py method _process_kbd_msg_type
Body: I propose a little enhancement for the `win32_hooks.py` method `_process_kbd_msg_type`.
While debugging keyboard hooks, I noticed that the `self.pressed_keys` list often contains the same key several times. I propose to change the line `if event_type == 'key down':` in this method to `if event_type == 'key down' and current_key not in self.pressed_keys:`.
This keeps the list shorter.
| 0easy
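The proposed change, sketched outside of pywinauto for illustration (the class name is made up; the bookkeeping logic is the point):

```python
class KeyTracker:
    """Mimics the pressed-key bookkeeping discussed for win32_hooks."""

    def __init__(self):
        self.pressed_keys = []

    def process(self, event_type, current_key):
        # Only record a key once, even if 'key down' repeats (auto-repeat).
        if event_type == "key down" and current_key not in self.pressed_keys:
            self.pressed_keys.append(current_key)
        elif event_type == "key up" and current_key in self.pressed_keys:
            self.pressed_keys.remove(current_key)
```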
|
Title: Add the missing docstrings to the `spec_evaluator.py` file
Body: Add the missing docstrings to the [spec_evaluator.py](https://github.com/scanapi/scanapi/blob/main/scanapi/evaluators/spec_evaluator.py) file
[Here](https://github.com/scanapi/scanapi/wiki/First-Pull-Request#7-make-your-changes) you can find instructions of how we create the [docstrings](https://www.python.org/dev/peps/pep-0257/#what-is-a-docstring).
Child of https://github.com/scanapi/scanapi/issues/411 | 0easy
|
Title: Address Werkzeug, Flask, and Python Versions for flask-restx 0.2.0
Body: Werkzeug 1.0 was released Feb 6th and removes py3.4 support.
Flask requires Werkzeug >= 0.15.0, so a default install from scratch will pull WZ 1.0
Flask has moved onto the 1.X series as of mid-2018 but we "allow" 0.8+
Address the large variance and forward motion of flask and werkzeug.
Flask >= 1.0.2 (1.0 and 1.0.1 had bugs; 1.0.2 stayed stable for a year)
Werkzeug >= 1.0 (yes, forcing forward compatibility with 1.0 series)
remove py34
Related to #34 and #35
| 0easy
|
Title: [new]: `drop_dataset(dataset)`
Body: ### Check the idea has not already been suggested
- [X] I could not find my idea in [existing issues](https://github.com/unytics/bigfunctions/issues?q=is%3Aissue+is%3Aopen+label%3Anew-bigfunction)
### Edit the title above with self-explanatory function name and argument names
- [X] The function name and the argument names I entered in the title above seems self explanatory to me.
### BigFunction Description as it would appear in the documentation
Get inspired by:
```
execute immediate 'create or replace temp table tables as (select table_name as name from `' || dataset || '`.INFORMATION_SCHEMA.TABLES where table_type != "VIEW")';
execute immediate 'create or replace temp table views as (select table_name as name from `' || dataset || '`.INFORMATION_SCHEMA.TABLES where table_type = "VIEW")';
execute immediate 'create or replace temp table routines as (select routine_name, routine_type from `' || dataset || '`.INFORMATION_SCHEMA.ROUTINES)';
for record in (select * from tables) do
execute immediate 'drop table `' || dataset || '.' || record.name || '`';
end for;
for record in (select * from views) do
execute immediate 'drop view `' || dataset || '.' || record.name || '`';
end for;
for record in (select * from routines) do
execute immediate 'drop ' || record.routine_type || ' `' || dataset || '.' || record.routine_name || '`';
end for;
execute immediate "drop schema " || dataset;
```
### Examples of (arguments, expected output) as they would appear in the documentation
- my_dataset | 0easy
|
Title: Transit and traffic layers
Body: Google maps allows showing the current traffic and the transit routes. See [this](https://developers.google.com/maps/documentation/javascript/trafficlayer) for examples.
Anyone interested in adding this should follow the example of the bicycling layer.
- [X] transit layer
- [x] traffic layer
- [x] transit layer not downloadable?
- [x] amend tutorial section on bicycling to also talk about transit and traffic layers. | 0easy
|
Title: ENH: expose `greedy` colouring from mapclassify
Body: Mapclassify has a function to create labels for topological colouring (xref #1165) called [`greedy`](https://pysal.org/mapclassify/generated/mapclassify.greedy.html#mapclassify.greedy). We could expose it as another option in `plot` and `explore` under the `scheme` keyword as `scheme="greedy"` and pass the other kwargs through `classification_kwds`.
Before passing the `scheme` to `mapclassify.classify` we would need to catch that the value is `greedy` and use `mapclassify.greedy` instead.
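A minimal sketch of the proposed dispatch (hypothetical helper; `classify` and `greedy` stand in for `mapclassify.classify` / `mapclassify.greedy`):

```python
def resolve_scheme(values, scheme, classify, greedy, classification_kwds=None):
    # Catch the special value before handing off to mapclassify.classify.
    classification_kwds = classification_kwds or {}
    if scheme == "greedy":
        return greedy(values, **classification_kwds)
    return classify(values, scheme=scheme, **classification_kwds)
```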
https://github.com/geopandas/geopandas/blob/e62b74a81428e6ab8ced23d688424a83bdabb686/geopandas/plotting.py#L772-L774 | 0easy
|
Title: nb cli command changes the files without the pipeline.yaml
Body: When running on the first pipeline example and running: !ploomber nb -f .ipynb
The files will change from .py to .ipynb but the pipeline would keep pointing to the .py files.
This causes:
1. The pipeline isn't functional.
2. Users can revert back to the .ipynb files without changing the file manually. | 0easy
|
Title: Update deprecated `set-output` commands
Body: See [GitHub Actions: Deprecating save-state and set-output commands](https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/) and https://github.com/github/codeql-action/pull/1301
| 0easy
|
Title: Custom media handlers: Unexpected issue when providing custom json handler
Body: This is in falcon-2.0
Look at the documentation [here][1] for using rapidjson for encoding/decoding json. By providing:
`extra_handlers={'application/json': json_handler}` we are still left with the default handler for content-type `application/json; charset=UTF-8`. This results in unexpected behaviour when some client library (e.g. Retrofit for Android) includes the charset in the header.
While the documentation should be updated, the expected behaviour is that if the handler for `application/json` is updated - it should also update the handler for variant with charset (or at least throw a warning) otherwise there is a possibility of hidden bugs.
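A hedged sketch of the expected behaviour — registering the same handler under both the bare and charset-qualified media types (treating the handlers mapping as a plain dict for illustration; not Falcon's actual internals):

```python
def register_json_handler(handlers, handler):
    # Register under both variants so a client that sends a charset
    # parameter still hits the custom handler.
    for media_type in ('application/json', 'application/json; charset=UTF-8'):
        handlers[media_type] = handler
```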
[1]: https://falcon.readthedocs.io/en/stable/api/media.html | 0easy
|
Title: Add integration test for Datasette
Body: We need to find a reliable Datasette instance to run integration tests, or run our own. | 0easy
|
Title: Jupyter notebook tutorials: Standardization/streamlining of the notebook format
Body: - [ ] More linting to check new tutorial notebooks adhere to certain formatting requirements, eg:
* No outputs of cells.
* No virtualenv metadata.
See existing linter that checks if notebooks have empty lines: https://github.com/cleanlab/cleanlab/blob/master/.ci/nblint.py
- [ ] Explanation in DEVELOPMENT.md doc of what steps developer should go through to create a new tutorial, and what the exact formatting requirements are. | 0easy
|
Title: browser.close() raises error message when `userDataDir` option is set
Body: I'm using latest version of pyppeteer (0.2.2 from `dev` branch) on Windows 10.
Minimal code to reproduce the issue:
```python
import asyncio
from pyppeteer import launch
async def test():
browser = await launch({'userDataDir': 'test'})
await browser.close()
asyncio.run(test())
```
And here is the error message:
```
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "C:\[ projects ]\kroger-cli\venv\lib\site-packages\pyppeteer\launcher.py", line 151, in _close_process
self._loop.run_until_complete(self.killChrome())
File "D:\Python\Python38\lib\asyncio\base_events.py", line 591, in run_until_complete
self._check_closed()
File "D:\Python\Python38\lib\asyncio\base_events.py", line 508, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
sys:1: RuntimeWarning: coroutine 'Launcher.killChrome' was never awaited
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
Removing the `userDataDir` option from the launch eliminates the issue (i.e. everything is working as expected).
Is it something you can help with? Happy to provide any logs/details that could help. Thanks! | 0easy
|
Title: Add map with Predictive Power Score
Body: Please add map with PPS https://github.com/8080labs/ppscore
See discussion https://www.reddit.com/r/MachineLearning/comments/ix5q64/p_training_automl_on_random_data_seeking_for/ | 0easy
|
Title: Duplicate test name detection does not take variables into account
Body: There is a warning is a suite contains multiple tests with the same name, but this duplicate detection doesn't work properly if there are variables in test names. There are two problems:
- There's no warning if names are same after variables are resolved.
- There is a warning if names are same before variables are resolved but not afterwards.
This is easy to fix by looking for the name from the `result` object, where variables are resolved, instead of the `data` object, that contains the original name. I noticed this when fixing #5292 that has the same root cause. This is less severe but worth fixing anyway. | 0easy
|
Title: datetime.utcnow() is deprecated as of Python 3.12
Body: All instances of datetime.utcnow() should be replaced | 0easy
|
Title: tox4: sub process "pip list" by "subprocess.Popen, communicate()" returns empty in Fedora 36 container in GitHub Actions
Body: ## Issue
Describe what's the expected behaviour and what you're observing.
The tox 4.0.11 is executed in Fedora 36 and 35 Docker containers on GitHub Actions (host OS: Ubuntu 22.04 LTS).
The tox executes the `pip list` as a sub process in the following code.
https://github.com/junaruga/rpm-py-installer/blob/2e4e7fe87c17639d06386653125d942228fb306f/install.py#L1546-L1547
```
cmd = '{0} list --format json'.format(self._get_pip_cmd())
json_str = Cmd.sh_e_out(cmd).split('\n')[0]
```
https://github.com/junaruga/rpm-py-installer/blob/2e4e7fe87c17639d06386653125d942228fb306f/install.py#L1979-L1985
```
proc = subprocess.Popen(cmd, **cmd_kwargs)
stdout, stderr = proc.communicate()
returncode = proc.returncode
message_format = (
'CMD Return Code: [{0}], Stdout: [{1}], Stderr: [{2}]'
)
Log.debug(message_format.format(returncode, stdout, stderr))
```
```
[DEBUG] CMD: /work/.tox/py310/bin/python -m pip list
[DEBUG] CMD Return Code: [0], Stdout: [None], Stderr: [b'']
```
```
[DEBUG] CMD: pip list
[DEBUG] CMD Return Code: [0], Stdout: [None], Stderr: [b'']
```
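For reference, `communicate()` returns `None` for any stream that was not redirected to a pipe — so a `Stdout: [None]` debug line is consistent with `stdout=subprocess.PIPE` not being in effect for that call (a hedged observation, not a confirmed root cause):

```python
import subprocess
import sys

# stdout is captured only when PIPE is requested for it; streams that
# are not piped come back from communicate() as None.
proc = subprocess.Popen([sys.executable, '-c', 'print("hi")'],
                        stdout=subprocess.PIPE)
out, err = proc.communicate()
# err is None here because stderr was not piped.
```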
Interestingly when running the used Fedora 36 containers on local environment (Fedora 37), the `pip list` returned the stdout correctly. And this issue doesn't happen on the latest tox version 3 (= 3.27.1).
## Environment
Provide at least:
- OS: Fedora Linux 36 and 35 Docker containers on GitHub Actions (Ubuntu 22.04 LTS).
- `pip list` of the host Python where `tox` is installed:
```console
+ pip list
Package Version
------------------------ ----------
attrs 22.1.0
backports.unittest-mock 1.5
cachetools 5.2.0
chardet 5.1.0
colorama 0.4.6
dbus-python 1.2.18
distlib 0.3.6
distro 1.6.0
exceptiongroup 1.0.4
filelock 3.8.2
gpg 1.17.0
iniconfig 1.1.1
libcomps 0.1.18
packaging 22.0
pip 22.3.1
platformdirs 2.6.0
pluggy 1.0.0
pyproject_api 1.2.1
pytest 7.2.0
pytest-helpers-namespace 2021.12.29
python-dateutil 2.8.1
rpm 4.17.1
setuptools 65.6.3
six 1.16.0
tomli 2.0.1
tox 4.0.11
virtualenv 20.17.1
```
## Output of running tox
Provide the output of `tox -rvv`:
```console
```
I am sorry, here is the used `tox.ini`. As the result is quite long. If you need it, I am happy to capture and share the log text.
https://github.com/junaruga/rpm-py-installer/blob/master/tox.ini#L2
## Minimal example
If possible, provide a minimal reproducer for the issue:
```console
```
I am sorry. I don't find a minimal reproducer.
## Reproducing steps
I can tell how to reproduce this issue. Sorry for the inconvenience.
1. Fork the repository: https://github.com/junaruga/rpm-py-installer .
2. Add an empty commit on the master branch such as `git commit --allow-empty`.
3. Push the branch to your forked repository.
4. This GitHub Actions are triggered by a push to any branches on the forked repository.
5. See "test-and-build (fedora_36 ...)" case in the GitHub Actions.
6. Click "Run the tests" - Search for the line "pip list". The actual log is like [this](https://github.com/junaruga/rpm-py-installer/actions/runs/3713857497/jobs/6297021704#step:10:503).
If you want to test with tox 3 to compare the result, you can do like this.
```
diff --git a/tox-requirements.txt b/tox-requirements.txt
index 46ce3b9..e46c5b1 100644
--- a/tox-requirements.txt
+++ b/tox-requirements.txt
@@ -1,4 +1,4 @@
pip
setuptools
virtualenv
-tox
+tox<4
```
| 0easy
|
Title: Arrow keys no longer work in PDB with tox 4
Body: Add `--pdb` to a `pytest` call (with or without `{tty:--color=yes}`), or insert a `pytest.set_trace()` or `breakpoint()` call.
The up arrow cannot be used to access previous commands in `tox>=4`:
```
(Pdb) pp value
'{}'
(Pdb) ^[[A
```
In `tox<4` the second prompt would show `(Pdb) pp value`. | 0easy
|
Title: Marketplace - search results - search box reduce height to 60px
Body:
### Describe your issue.
Reduce the height of search box from 72px to 60px <img width="1554" alt="Screenshot 2024-12-13 at 20 57 15" src="https://github.com/user-attachments/assets/ac4e243f-8701-40e5-b97b-d1e3a326b004" />
| 0easy
|
Title: [BUG] Time index name is lost after using prepend_values
Body: **Describe the bug**
The name of the time_index changes to 'time' after using `prepend_values`.
**To Reproduce**
```python
import numpy as np
import pandas as pd
from darts import TimeSeries
from darts.utils.timeseries_generation import generate_index
start = pd.Timestamp("2000-01-01")
end = pd.Timestamp("2000-12-31")
freq = pd.Timedelta(weeks=1)
index = generate_index(start=start, end=end, freq=freq, name='date')
values = np.random.normal(0, 1, size=len(index))
ts = TimeSeries.from_times_and_values(index, values, freq=freq, columns=pd.Index(['value']))
ts2 = ts.prepend_values(np.repeat(0, 30))
ts.time_index.name, ts2.time_index.name # ('date', 'time')
```
**Expected behavior**
The name of the time_index should stay the same when values are prepended.
| 0easy
|
Title: improve error message when missing product key
Body: Given a `pipeline.yaml` like this:
```yaml
tasks:
- source: tasks.get
```
If we execute `ploomber build`, It will fail with this message:
```
Error: Error validating TaskSpec({'source': 'tasks.get', 'upstream': None}). Missing keys: 'product'
```
It'd be better to suggest how to fix it:
```
Error: Error validating TaskSpec({'source': 'tasks.get', 'upstream': None}). Missing keys: 'product'
To fix it:
- source: tasks.get
product: products/data.csv
```
Note that if this is a script/notebook, the suggested product should have the `nb` key:
```
Error: Error validating TaskSpec({'source': 'tasks.get', 'upstream': None}). Missing keys: 'product'
To fix it:
- source: script.py
product:
nb: products/report.ipynb
data: products/data.csv
```
It is possible (but unlikely) that the DAGSpec is loaded from a dictionary (instead of a `.yaml`) file, in such case the output format of the suggestion should be a dict:
```python
{'source': 'tasks.get', 'product': 'products/data.csv'}
```
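A minimal sketch of how such a suggestion could be generated (hypothetical helper, not ploomber's internals; the product paths are placeholders):

```python
def suggest_fix(source):
    """Build a suggested spec for a TaskSpec missing its product key."""
    if source.endswith(('.py', '.ipynb')):
        # Scripts/notebooks need an 'nb' key for the executed notebook.
        return {'source': source,
                'product': {'nb': 'products/report.ipynb',
                            'data': 'products/data.csv'}}
    return {'source': source, 'product': 'products/data.csv'}
```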
| 0easy
|
Title: Enhancement: Slider component can display its current value
Body: A nice update to the `Slider` component would be to have it be able to display its current value. It could be toggled on/off with a prop like `display_value`, which maybe defaults to `false` so it doesn't change behaviour. If `true`, it could display the `value` in a `<div/>` or `<label/>` below it (or above it?) and perhaps it could be styled with props like `value_style` and `value_className`. Perhaps a better name would be `label`, so `display_label`, `label_style`, and `label_className`. | 0easy
|
Title: Cao Initialisation for k-Prototype
Body: Hi,
I've got a question regarding the Cao initialisation procedure.
I understand that for categorical data, the centroids are computed deterministically. In k-modes, the number of initialisations is hence set to 1.
But when using k-prototype, the numerical aspects of the centroids are drawn from a normal distribution. Nonetheless, the number of initialisations is set to 1. Is there a way to do multiple initialisations for this case?
Thanks!
Best regards
Milena | 0easy
|
Title: [UX] Not showing `sky stop` cli hint for cloud that does not support stop
Body: <!-- Describe the bug report / feature request here -->
Currently, we still show `sky stop <cluster-name>` even if the cloud does not support stop (e.g. Lambda). We should remove this hint.
```bash
📋 Useful Commands
Job ID: 1
├── To cancel the job: sky cancel lmd 1
├── To stream job logs: sky logs lmd 1
└── To view job queue: sky queue lmd
Cluster name: lmd
├── To log into the head VM: ssh lmd
├── To submit a job: sky exec lmd yaml_file
├── To stop the cluster: sky stop lmd
└── To teardown the cluster: sky down lmd
```
| 0easy
|
Title: please add model correlation heatmap
Body: | 0easy
|
Title: PydanticSerializationError missing validation error field name
Body: I am getting quite a few of these `Expected 'int' but got 'float' - serialized value may not be as expected` warnings in the log files. Is there a way to callout the specific field name?
I looked at Rust's exception and error message, and I don't see any "add_variable_context" flag.
If not easily possible, would it be possible to add the field name that is triggering the warning to PydanticSerializationError please.
The object that is triggering the exception has large number of sub-objects and fields, so going through all the fields would be very time consuming.
```python
# Internal imports
# External imports
from pydantic import BaseModel
from pydantic_core import PydanticSerializationError
# Own imports
class TestModelDump(BaseModel):
int_value: int | None = None
float_value: float | None = None
def test_mismatch():
valu = TestModelDump(int_value=1, float_value=1.0)
result = valu.model_dump_json()
print(result)
valu = TestModelDump(int_value=1.0, float_value=1.0)
result = valu.model_dump_json(warnings="error")
print(result)
valu = TestModelDump(int_value=1.0, float_value=1)
valu.int_value = 1.0
valu.float_value = 1
try:
result = valu.model_dump_json(warnings="error")
except PydanticSerializationError as e:
print(e)
else:
print(result)
``` | 0easy
|
Title: Add forward compatible `start_time`, `end_time` and `elapsed_time` properties to result objects
Body: The plan is to enhance performance of getting and processing timestamps in RF 7.0 (#4258). Part of that is internally representing start and end times as `datetime` objects and elapsed time as a `timedelta`. The old `starttime`, `endtime` and `elapsedtime` attributes will be preserved for backwards compatibility reasons, and new `start_time`, `end_time` and `elapsed_time` attributes added.
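A minimal sketch of such a forward-compatible property deriving the new attribute from the legacy string one (the timestamp format below is an assumption for illustration):

```python
from datetime import datetime

class Result:
    def __init__(self, starttime):
        self.starttime = starttime  # legacy string attribute

    @property
    def start_time(self):
        # Derive the new datetime-based attribute from the old string.
        return datetime.strptime(self.starttime, '%Y%m%d %H:%M:%S.%f')
```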
For forward compatibility reasons we should add `start_time`, `end_time` and `elapsed_time` already in RF 6.1. They can be properties that get their values from the old attributes. In RF 7 we can then change them to be the "real" attributes and make the old ones properties. | 0easy
|
Title: Issues Closed metric API
Body: The canonical definition is here: https://chaoss.community/?p=3633 | 0easy
|
Title: [UX] Auto completion in `sky check` for cloud name
Body: <!-- Describe the bug report / feature request here -->
We should have auto completion for cloud name in `sky check`. e.g.:
```bash
$ sky check kube
# hit tab
$ sky check kubernetes
```
| 0easy
|
Title: Azure image-id from marketplace with :latest fails
Body: <!-- Describe the bug report / feature request here -->
Referencing a "latest" VM image from the Azure marketplace like `Canonical:ubuntu-24_04-lts:server:latest` does not work:
```
Traceback (most recent call last):
File "/home/cooperc/.local/bin/sky", line 8, in <module>
sys.exit(cli())
^^^^^
File "/home/cooperc/a/sky-venv/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/sky-venv/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/home/cooperc/a/skypilot/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/skypilot/sky/cli.py", line 838, in invoke
return super().invoke(ctx)
^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/sky-venv/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/sky-venv/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/sky-venv/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/skypilot/sky/cli.py", line 1118, in launch
task_or_dag = _make_task_or_dag_from_entrypoint_with_overrides(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/skypilot/sky/cli.py", line 818, in _make_task_or_dag_from_entrypoint_with_overrides
task.set_resources_override(override_params)
File "/home/cooperc/a/skypilot/sky/task.py", line 664, in set_resources_override
new_resources = res.copy(**override_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/skypilot/sky/resources.py", line 1261, in copy
resources = Resources(
^^^^^^^^^^
File "/home/cooperc/a/skypilot/sky/resources.py", line 249, in __init__
self._try_validate_image_id()
File "/home/cooperc/a/skypilot/sky/resources.py", line 929, in _try_validate_image_id
image_size = self.cloud.get_image_size(image_id, region)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/skypilot/sky/clouds/azure.py", line 202, in get_image_size
image = compute_client.virtual_machine_images.get(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/sky-venv/lib/python3.11/site-packages/azure/core/tracing/decorator.py", line 105, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/cooperc/a/sky-venv/lib/python3.11/site-packages/azure/mgmt/compute/v2024_07_01/operations/_operations.py", line 18308, in get
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
azure.core.exceptions.HttpResponseError: (InvalidParameter) The value of parameter version is invalid.
Code: InvalidParameter
Message: The value of parameter version is invalid.
Target: version
```
It seems like `virtual_machine_images.get` cannot cope with the `latest` version for some reason. Using a specific version (e.g. `Canonical:ubuntu-24_04-lts:server:24.04.202411030`) works.
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: skypilot, version 1.0.0-dev0
* `sky -c`: skypilot, commit 6c9acac5df3d6fc13a6a7e19deb35715163beddd-dirty
| 0easy
|
Title: pmap over `num_particles` in SVI
Body: Hi,
In `Trace_ELBO`, the `num_particles` argument allows one to effectively introduce a batch size in estimating the ELBO gradient if `num_particles > 1`. By default, it's vectorized over the `num_particles`. Is it possible to also distribute the batch dimension over devices (e.g. when running on multiple GPUs). My particular application is prone to jax OOM errors and would benefit from distribution over `jax.pmap`. | 0easy
|
Title: Update hyper-parameter example notebook
Body: Now that we have `BayesSearchCV` we should update/cross link the bayes search CV example notebook and then hyper-parameter optimisation one.
As part of this we should also change the name of the searchcv notebook so it renders more like the others on our webpage:
<img width="243" alt="screen shot 2017-07-10 at 09 06 02" src="https://user-images.githubusercontent.com/1448859/28006330-15ac551e-654f-11e7-9c5b-7ddef051fb37.png">
| 0easy
|
Title: [BUG] ShapExplainer summary_plot Horizon does not include output_chunk_shift
Body: **Describe the bug**
`ShapExplainer().summary_plot()` should include `output_chunk_shift` for the horizon in the plot.
**To Reproduce**
```python
from darts.datasets import AirPassengersDataset
from darts.explainability import ShapExplainer
from darts.models import LinearRegressionModel
series = AirPassengersDataset().load()
model = LinearRegressionModel(lags=4, output_chunk_shift=4)
model.fit(series)
shap_explainer = ShapExplainer(model=model)
shap_values = shap_explainer.summary_plot()
```
gives

**Expected behavior**
The plot title should be `Horizon: t+(1+output_chunk_shift)` i.e. `Horizon: t+5`.
**System (please complete the following information):**
- Python version: 3.12.7
- darts version: 0.32.0
| 0easy
|
Title: missing examples in the serial and parallel executor docstrings
Body: These two sections are missing examples:
https://docs.ploomber.io/en/latest/api/_modules/executors/ploomber.executors.Serial.html
https://docs.ploomber.io/en/latest/api/_modules/executors/ploomber.executors.Parallel.html
We should add snippets that show how to use the `dotted_path` feature. Same examples as here:
https://docs.ploomber.io/en/latest/api/spec.html#executor | 0easy
|
Title: Sanic adapter uses deprecated cookie methods
Body: As of Sanic v23.3 (released in March 2023), the correct way to attach a cookie to a Sanic response is via the [`add_cookie` method](https://sanic.dev/en/release-notes/2023/v23.3.html#more-convenient-methods-for-setting-and-deleting-cookies). The old cookie dict mechanism is set to be removed in a future Sanic release (the warning messages say v24.9, although that was due to be released a couple months ago and wasn't, so it may be any time in the future). In the meantime, Slack Bolt's Sanic adapter generates deprecation warnings when used with current versions of Sanic.
### Reproducible in:
#### The `slack_bolt` version
slack-bolt==1.21.2
slack-sdk==3.33.1
#### Python runtime version
Python 3.12.7
#### OS info
ProductName: macOS
ProductVersion: 13.7.1
BuildVersion: 22H221
Darwin Kernel Version 22.6.0: Thu Sep 5 20:48:48 PDT 2024; root:xnu-8796.141.3.708.1~1/RELEASE_X86_64
#### Steps to reproduce:
1. Create a new Python project with Slack Bolt, Sanic and Pytest dependencies.
2. Add the following test file to the project:
```python
import os
import pytest
from sanic import Sanic
from sanic.request import Request as SanicRequest
from sanic.response import HTTPResponse as SanicHttpResponse
from slack_bolt.adapter.sanic.async_handler import AsyncSlackRequestHandler as SanicRequestHandler
from slack_bolt.async_app import AsyncApp as AsyncBoltApp
_SANIC_APP = Sanic("standup-for-me")
_BOLT_APP = AsyncBoltApp(
token=os.environ.get("SLACK_BOT_TOKEN"),
signing_secret=os.environ.get("SLACK_SIGNING_SECRET"),
)
_SLACK_REQUEST_HANDLER = SanicRequestHandler(_BOLT_APP)
@_SANIC_APP.get("/slack/install")
async def handle_slack_install(request: SanicRequest) -> SanicHttpResponse:
return await _SLACK_REQUEST_HANDLER.handle(request)
class TestSanicApp:
@pytest.mark.asyncio
async def test_sanic(self) -> None:
_, response = await _SANIC_APP.asgi_client.get("/slack/install")
assert response.status == 200
```
3. Run the test from the CLI using pytest.
4. Note the warnings in the test output.
### Expected result:
Slack Bolt should use the `add_cookie` method when generating a Sanic response object. As a result, there should be no cookie warnings from Sanic.
### Actual result:
The following warnings are logged to stdout:
```
/path/to/project/.venv/lib/python3.12/site-packages/sanic/logging/deprecation.py:33: DeprecationWarning: [DEPRECATION] Setting cookie values using the dict pattern has been deprecated. You should instead use the cookies.add_cookie method. To learn more, please see: https://sanic.dev/en/guide/release-notes/v23.3.html#response-cookies
/path/to/project/.venv/lib/python3.12/site-packages/sanic/logging/deprecation.py:33: DeprecationWarning: [DEPRECATION] Accessing cookies from the CookieJar by dict key is deprecated. You should instead use the cookies.get_cookie method. To learn more, please see: https://sanic.dev/en/guide/release-notes/v23.3.html#response-cookies
/path/to/project/.venv/lib/python3.12/site-packages/sanic/logging/deprecation.py:33: DeprecationWarning: [DEPRECATION v24.9] Setting values on a Cookie object as a dict has been deprecated. This feature will be removed in v24.9. You should instead set values on cookies as object properties: cookie.path=...
/path/to/project/.venv/lib/python3.12/site-packages/sanic/logging/deprecation.py:33: DeprecationWarning: [DEPRECATION v24.9] Setting values on a Cookie object as a dict has been deprecated. This feature will be removed in v24.9. You should instead set values on cookies as object properties: cookie.domain=...
/path/to/project/.venv/lib/python3.12/site-packages/sanic/logging/deprecation.py:33: DeprecationWarning: [DEPRECATION v24.9] Setting values on a Cookie object as a dict has been deprecated. This feature will be removed in v24.9. You should instead set values on cookies as object properties: cookie.max-age=...
/path/to/project/.venv/lib/python3.12/site-packages/sanic/logging/deprecation.py:33: DeprecationWarning: [DEPRECATION v24.9] Setting values on a Cookie object as a dict has been deprecated. This feature will be removed in v24.9. You should instead set values on cookies as object properties: cookie.secure=...
/path/to/project/.venv/lib/python3.12/site-packages/sanic/logging/deprecation.py:33: DeprecationWarning: [DEPRECATION v24.9] Setting values on a Cookie object as a dict has been deprecated. This feature will be removed in v24.9. You should instead set values on cookies as object properties: cookie.httponly=...
/path/to/project/.venv/lib/python3.12/site-packages/sanic/logging/deprecation.py:33: DeprecationWarning: [DEPRECATION v24.9] Setting values on a Cookie object as a dict has been deprecated. This feature will be removed in v24.9. You should instead set values on cookies as object properties: cookie.expires=...
```
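A hedged sketch of the kind of change the adapter needs — prefer `add_cookie` when the response exposes it, falling back to the legacy dict API (the compatibility shim itself is hypothetical, not Bolt's actual code):

```python
def set_cookie_compat(response, name, value, **props):
    if hasattr(response, 'add_cookie'):
        # Sanic >= 23.3: one call sets the cookie and its attributes.
        response.add_cookie(name, value, **props)
    else:
        # Legacy dict-based API (deprecated, slated for removal).
        response.cookies[name] = value
        for key, val in props.items():
            response.cookies[name][key] = val
```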
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) (✅) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) (✅) before creating this issue or pull request. By submitting, you are agreeing to those rules. (✅)
| 0easy
|
Title: RoBERTa on SuperGLUE's 'Winograd Schema Challenge' task
Body: WSC is one of the tasks of the [SuperGLUE](https://super.gluebenchmark.com) benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the WSC data and fine-tunes a model on it. We expect scores in the range of their entry on the [SuperGLUE leaderboard](https://super.gluebenchmark.com/leaderboard).
This can be formulated as a classification task, using the [`TransformerClassificationTT`](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/models/transformer_classification_tt.py) model, analogous to the IMDB model. You can start with the [experiment config](https://github.com/allenai/allennlp-models/blob/Imdb/training_config/tango/imdb.jsonnet) and [dataset reading step](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/tango/imdb.py#L13) from IMDB, and adapt them to your needs. | 0easy
|
Title: Bug Report: Null columns not supported for Spark dataframe
Body: ### Current Behaviour
When attempting to profile a Spark dataframe that contains an entirely null column, the process errors.
When the null column is of type integer, the error message is `KeyError: '50%'` as thrown by `ydata_profiling/model/spark/describe_numeric_spark.py:102, in describe_numeric_1d_spark(config, df, summary)`.
When the null column is a string, the error message is `ZeroDivisionError: division by zero` as thrown by `ydata_profiling/model/spark/describe_supported_spark.py:31, in describe_supported_spark(config, series, summary)`.
### Expected Behaviour
A profile should be produced for the Spark dataframe even with null value columns. The profiler works as expected for the same data when passed as a Pandas dataframe.
### Data Description
Any Spark dataframe with an entirely null column:
```
df.withColumn('empty1', lit(None).cast('string')).withColumn('empty2', lit(None).cast('integer'))
```
### Code that reproduces the bug
```Python
# Follow the Spark Databricks example code: https://github.com/ydataai/ydata-profiling/blob/master/examples/integrations/databricks/ydata-profiling%20in%20Databricks.ipynb
# Add the following lines to df before running ProfileReport
df = (
df
.withColumn('empty1', lit(None).cast('string'))
.withColumn('empty2', lit(None).cast('integer'))
)
```
### pandas-profiling version
v4.1.2
### Dependencies
```Text
numpy==1.21.5
pandas==1.4.2
ydata-profiling==4.1.2
```
### OS
_No response_
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | 0easy
|
Title: Save cfg.TRAIN.START_EPOCH to state dict
Body: Well #77 didn't work for me while resuming from checkpoint_18.pth. The problem is that when we resume, the model and optimizer passed to the restore_from function are suitable for epochs less than 10 (i.e. while the backbone is not yet training), because cfg.TRAIN.START_EPOCH is initially 0 (passed to the build_opt_lr function just before restore_from), so the optimizer mismatches once backbone training starts. To resume my training, I pass cfg.TRAIN.START_EPOCH as 19, and when the build_opt_lr function receives an epoch greater than 10 (i.e. backbone training has started) it produces a model and optimizer suitable for resuming. And I can resume my training.
_Originally posted by @PhenomenalOnee in https://github.com/STVIR/pysot/issues/92#issuecomment-651571350_ | 0easy
|
Title: Cron job scheduling example
Body: We believe an example of scheduling a notebook would be great, either as an inner task or via a shell script | 0easy
|
Title: add missing_only=True to all imputers to use in combination with variables=None
Body: Add missing_only functionality to all imputers to use in combination with variables=None
When variables is None, the imputers select all numerical, or categorical or all variables by default. With the missing_only, it would select only those from each subgroup that show missing data during fit.
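A minimal pure-Python sketch of the selection logic during fit (hypothetical helper, not the feature-engine API):

```python
import math

def find_variables_with_na(data, variables):
    """Keep only the candidate variables that show missing values."""
    def is_na(v):
        return v is None or (isinstance(v, float) and math.isnan(v))
    return [var for var in variables if any(is_na(v) for v in data[var])]
```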
| 0easy
|
Title: Factory and mutable values - changing the object changes values inside Factory
Body: Hello, I couldn't find any topic about behavior like this so I create a new one.
I found this because I started to write some tests for a django project which uses postgresql Jsonb fields.
Every time I change an instance of an object created with Traits, the Factory parameters are changed too, and all newly created instances have these changed values
Here is a simplified example
```python
import factory
class A:
def __init__(self, json_field):
self.json_field = json_field
class AFactory(factory.Factory):
class Meta:
model = A
json_field = {'text': 'hello'}
class Params:
prefilled = factory.Trait(
json_field={
'props': [100, 101, 102, 103, 104],
'str_property': 'hello',
}
)
if __name__ == '__main__':
item_one = AFactory(prefilled=True)
print('Item one json_field [created]: {}'.format(item_one.json_field))
item_one.json_field['props'][0] = 999
item_one.json_field['str_property'] = 'Goodbye!'
print('Item one json_field [changed]: {}'.format(item_one.json_field))
item_two = AFactory(prefilled=True)
# new instance will have the same json_field as item_one
print('Item two json_field [created]: {}'.format(item_two.json_field))
assert item_one.json_field == item_two.json_field
#without traits
item_three = AFactory()
print('Item three json_field [created]: {}'.format(item_three.json_field))
item_three.json_field['text'] = 'different text'
print('Item three json_field [changed]: {}'.format(item_three.json_field))
item_four = AFactory()
print('Item four json_field [created]: {}'.format(item_four.json_field))
assert item_three.json_field == item_four.json_field
```
```bash
Item one json_field [created]: {'props': [100, 101, 102, 103, 104], 'str_property': 'hello'}
Item one json_field [changed]: {'props': [999, 101, 102, 103, 104], 'str_property': 'Goodbye!'}
Item two json_field [created]: {'props': [999, 101, 102, 103, 104], 'str_property': 'Goodbye!'}
Item three json_field [created]: {'text': 'hello'}
Item three json_field [changed]: {'text': 'different text'}
Item four json_field [created]: {'text': 'different text'}
```
P.S. at first I wrote about Traits but then I realised that any mutable values assigned inside Factory class are shared between factory and its instances.
| 0easy
|
Title: [Feature request] Add apply_to_images to RandomShadow
Body: | 0easy
|
Title: [BUG] AttributeError: 'GroupedPredictor' object has no attribute 'predict_proba'
Body: I am not sure if it's just me not using the API correctly.
When doing a classification problem and using a GroupedPredictor to estimate probabilities I can't get the probabilities
```python
from sklego.meta import GroupedPredictor
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklego.datasets import load_chicken
df = load_chicken(as_frame=True)
# Create a binary target
X = df.drop(columns='weight')
y = np.where(df.weight>df.weight.mean(),1,0)
mod = GroupedPredictor(LogisticRegression(), groups=["diet"])
mod.fit(X,y)
mod.predict_proba(X)
```
```
Traceback (most recent call last):
  File "bug.py", line 25, in <module>
    mod.predict_proba(X)
AttributeError: 'GroupedPredictor' object has no attribute 'predict_proba'
```
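For context, the missing behaviour is just a per-group passthrough; a stub sketch of the expected dispatch (hypothetical class, not the sklego implementation):

```python
class StubModel:
    """Stand-in for a fitted classifier exposing predict_proba."""
    def __init__(self, p):
        self.p = p

    def predict_proba(self, x):
        return [1 - self.p, self.p]

class GroupedProba:
    # hypothetical sketch of the missing passthrough: dispatch
    # predict_proba to the estimator fitted for each group
    def __init__(self, models):
        self.models = models  # group key -> fitted model

    def predict_proba(self, rows):
        return [self.models[group].predict_proba(x) for group, x in rows]

grouped = GroupedProba({1: StubModel(0.9), 2: StubModel(0.2)})
probs = grouped.predict_proba([(1, None), (2, None)])
print(probs[0][1], probs[1][1])  # 0.9 0.2
```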
| 0easy
|
Title: Up and down arrow keys do not work in chat input
Body:
## Description
The up and down arrow keys cannot be used to navigate the chat input. The left and right arrow keys seem to work, but not up and down.
I've encountered this bug in both v2.19.0 and v2.20.0.
## Expected behavior
The up and down arrow keys should work for keyboard accessibility.
| 0easy
|
Title: Doc: missing `mask` in tucker docstring
Body: While `partial_tucker` has the `mask` argument documented, the Tucker decomposition does not, this should be added. | 0easy
|
Title: Product format error message
Body: ```
Could not determine format for product ‘outputs/validation_result0.pkl’. Pass a valid extension (‘.html’, ‘.ipynb’, ‘.md’, ‘.pdf’, ‘.rst’, and ‘.tex’) or pass “nbconvert_exporter_name”. If you want this task to generate multiple products, pass a dictionary to “product”, with the path to the output notebook in the “nb” key (e.g. “output.ipynb”) and any other output paths in other keys)
```
We need to support pkl as a format, or if supported but missing the dictionary key, we should adjust the error message to be specific and give an example (for instance, to pass a pickle product do `product: 'file.pkl'`). We should also include a link to our community slack channel. | 0easy
|
Title: Contribute `Cumulative Curve` to Vizro visual vocabulary
Body: ## Thank you for contributing to our visual-vocabulary! 🎨
Our visual-vocabulary is a dashboard that serves as a comprehensive guide for selecting and creating various types of charts. It helps you decide when to use each chart type, offers sample Python code using [Plotly](https://plotly.com/python/), and gives instructions for embedding these charts into a [Vizro](https://github.com/mckinsey/vizro) dashboard.
Take a look at the dashboard here: https://huggingface.co/spaces/vizro/demo-visual-vocabulary
The source code for the dashboard is here: https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary
## Instructions
0. Get familiar with the dev set-up (this should be done already as part of the initial intro sessions)
1. Read through the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary) of the visual vocabulary
2. Follow the steps to contribute a chart. Take a look at other examples. This [commit](https://github.com/mckinsey/vizro/pull/634/commits/417efffded2285e6cfcafac5d780834e0bdcc625) might be helpful as a reference to see which changes are required to add a chart.
3. Ensure the app is running without any issues via `hatch run example visual-vocabulary`
4. List out the resources you've used in the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary)
5. Raise a PR
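For the chart itself, the data prep behind a cumulative curve is small; a dependency-free sketch (the dashboard example would plot this with Plotly):

```python
def cumulative_fraction(values):
    # sort descending, then accumulate as a fraction of the total --
    # these are the y-values of a cumulative (gains-style) curve
    total = sum(values)
    out, running = [], 0.0
    for v in sorted(values, reverse=True):
        running += v
        out.append(running / total)
    return out

print(cumulative_fraction([1, 1, 2]))  # [0.5, 0.75, 1.0]
```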
Useful resources:
- Plotly ecdf plots: https://plotly.com/python/ecdf-plots/
- Cumulative charts: https://docs.datarobot.com/en/docs/modeling/analyze-models/evaluate/roc-curve-tab/cumulative-charts.html
- Data chart mastery: https://www.atlassian.com/data/charts/how-to-choose-data-visualization | 0easy
|
Title: [ENH] Empirical CDF calculation
Body: Because @ericmjl & I keep copypasta'ing it across all my notebooks
```python
import numpy as np

def calc_single_ecdf(data):
"""Compute the Empirical Cumulative Distribution Function for an Array."""
# Number of points n
n = len(data)
# x data for the CDF
x = np.sort(data)
# y data for the CDF
y = np.arange(1, n + 1 ) / n
return x, y
``` | 0easy
|
Title: I tried to convert Predictive Ranges [LuxAlgo] to Python; what's the problem?
Body: closed | 0easy
|
Title: [Tracker] Split tests into multiple file tests
Body: Currently, we have some test files that contain all (or almost all) tests of a module within a single file; the ideal here is to split these cases into multiple test files, where each file tests a single functionality or file of our library.
Example:
> We have now a single file testing almost all losses functions https://github.com/kornia/kornia/blob/c5558ac2c03bbbbbbb6c9f82616b8d5229398f1a/tests/losses/test_losses.py we would like to have a single test file for each loss. What would mirror the file organization of the losses module of kornia https://github.com/kornia/kornia/tree/c5558ac2c03bbbbbbb6c9f82616b8d5229398f1a/kornia/losses
This issue will work as a tracker to map some of these files that need to be split, feel free to help us find others and/or help us split them into multiple files.
The goal here is to reorganize our test suite. If you have any questions, you can call us and/or ask on our Slack page too [](https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-2AQRi~X9Uu6PLMuUZdvfjA)
_Originally posted by @edgarriba in https://github.com/kornia/kornia/pull/2745#discussion_r1461605306_
-------------
You can choose anyone below, or find new ones. You don't have to worry about separating all the tests in a file into a single PR
- [x] Split losses tests - https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/losses/test_losses.py - #2801
- [x] Split metrics tests - https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/test_metrics.py
- [x] Split contrib tests - https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/contrib/test_contrib.py - #2802
- [ ] Split augmentations tests
- [ ] 2d - https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/augmentation/test_augmentation.py
- [ ] 3d - https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/augmentation/test_augmentation_3d.py
- [ ] Split container augmentation tests - https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/augmentation/test_container.py | 0easy
|
Title: Translate GlobaLeaks into your own native language to support your local community
Body: ## Description:
Globaleaks is a multi-language platform that strives to reach users across the globe. We need your help to extend the platform to even more languages. Your task will be to contribute translations for the Globaleaks application in your native language using [Transifex](https://app.transifex.com/otf/globaleaks), a collaborative translation platform.
By helping with translations, you ensure that more people can use Globaleaks in their own language.
This is particular important so that whistleblowers can safely understand the technology and achieve better results.
## Steps:
1. **Sign up on Transifex:**
- If you don’t already have an account, go to the [Globaleaks Transifex page](https://app.transifex.com/otf/globaleaks), and sign up.
2. **Request Access for Your Language:**
- Request access for the language you would like to contribute to. You can find a list of available languages on the Transifex page, or if your language is not listed, you can suggest it. If your language is not present you may request it.
3. **Wait for Admin Approval:**
- After submitting your request, our administrators will review and enable access to your requested language.
4. **Start Translating:**
- Once your access is granted, you will be able to start translating the application into your language. The Transifex platform will allow you to translate various text elements of the application.
- Make sure your translations are clear, concise, and culturally appropriate. Review any suggestions and contributions from project maintainers and other translators.
5. **Submit Your Translations:**
- After completing your translations, make sure to submit them. The translations will be reviewed and, once approved, they will be merged into the Globaleaks application.
## Prerequisites:
- **Basic Requirements:** No coding skills required! You just need to be fluent in the language you're contributing to and willing to help.
- **Tools:** A Transifex account.
- **Knowledge:** A good understanding of the language you are translating into and a keen eye for detail.
## Why it's a Good First Issue:
- Translating is an excellent entry point for new contributors because it doesn't require deep technical knowledge. You will get familiar with the project and help make the platform more accessible to users in your native language.
- You’ll also become part of the collaborative translation community within Globaleaks, which is a great way to get involved.
## Helpful Links:
- [Globaleaks Transifex Project Page](https://www.transifex.com/)
- [Transifex Help Guide](https://docs.transifex.com/) | 0easy
|
Title: Saving large images to tiff fails
Body: ### 🐛 Bug Report
When saving images larger than 4GB, I get an error: `error: argument out of range`. The saved file is then incomplete and has missing layers.
### 💡 Steps to Reproduce
1. Open napari
2. Create large image:
```
data = np.random.randint(low=0, high=2**16, size=(128,4096,4096), dtype='uint16')
viewer.add_image(data)
```
3. Save it anywhere via "File" -> "Save Selected Layers" as data.tif
### 💡 Expected Behavior
I expect the file to be saved without raising an error.
### 🌎 Environment
napari: 0.5.2
Platform: Windows-10-10.0.17763-SP0
Python: 3.11.9 | packaged by Anaconda, Inc. | (main, Apr 19 2024, 16:40:41) [MSC v.1916 64 bit (AMD64)]
Qt: 5.15.2
PyQt5: 5.15.11
NumPy: 1.26.4
SciPy: 1.14.0
Dask: 2024.8.0
VisPy: 0.14.3
magicgui: 0.9.1
superqt: 0.6.7
in-n-out: 0.2.1
app-model: 0.2.8
npe2: 0.7.7
OpenGL:
- GL version: 4.6.0 NVIDIA 516.25
- MAX_TEXTURE_SIZE: 32768
- GL_MAX_3D_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 1920x1080, scale 1.0
- screen 2: resolution 1920x1080, scale 1.0
- screen 3: resolution 1920x1080, scale 1.0
Optional:
- numba: 0.60.0
- triangle: 20230923
- napari-plugin-manager: 0.1.0
Settings path:
- C:\Users\f.sturzenegger\AppData\Local\napari\fs_napari_d117e5226d23aaac275f4943339fe0c6b128e236\settings.yaml
### 💡 Additional Context
I suspect the issue is that the file is not saved as BigTiff:
The error is raised in `\napari\utils\io.py`, line 106:
```
tifffile.imwrite(filename, data, compression=('zlib', 1))
```
When I try to export it myself with this line and adding `bigtiff=True`:
```
tifffile.imwrite(filename, data, compression=('zlib', 1), bigtiff=True)
```
it works as expected. | 0easy
|
Title: Edge case: `source file.xonshrc` shows `attempting to source non-xonsh file`
Body: ```xsh
echo 'echo ok' > file.xonshrc
# RuntimeError: attempting to source non-xonsh file! If you are trying to source
# a file in another language, then please use the appropriate source command.
# For example, source-bash script.sh
mv file.xonshrc file.xsh
# ok
```
We need the ability to source any xonsh file regardless of its extension.
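A sketch of extension-agnostic dispatch (a hypothetical helper, not xonsh's actual code path):

```python
def pick_source_lang(path, default='xonsh'):
    # hypothetical: use the extension only as a hint and fall back to
    # xonsh instead of raising for unknown extensions like .xonshrc
    ext_map = {'.sh': 'bash', '.bash': 'bash', '.xsh': 'xonsh'}
    for ext, lang in ext_map.items():
        if path.endswith(ext):
            return lang
    return default

print(pick_source_lang('file.xsh'))      # xonsh
print(pick_source_lang('file.xonshrc'))  # xonsh -- no hard failure
print(pick_source_lang('script.sh'))     # bash
```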
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Add multi-gpu training example script
Body: It would be great to have more examples for fine-tuning in the library!
Similar to current examples for [binary-segmentaion](https://github.com/qubvel-org/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb) and [multi-label](https://github.com/qubvel-org/segmentation_models.pytorch/blob/main/examples/cars%20segmentation%20(camvid).ipynb) segmentation would be great to have Multi-GPU training example. This can be handled with [PyTorch-Lightning](https://lightning.ai/docs/pytorch/stable/accelerators/gpu_intermediate.html) or with pure Pytorch.
It might be a `.py` script or Jupyter notebook. The example should showcase how to
- fine-tune a model with pytorch-lightning (or any other fine-tuning framework, even a plain pytorch)
- compute metrics with correct gathering of results (maybe [torchmetrics](https://lightning.ai/docs/torchmetrics) can be utilized)
- visualize results
In case anyone wants to work on this you are welcome! Just notify in this issue, and then ping me to review a PR when it's ready.
Fixes:
- https://github.com/qubvel-org/segmentation_models.pytorch/issues/903
- https://github.com/qubvel-org/segmentation_models.pytorch/issues/896 | 0easy
|
Title: Add questions on containers (Docker, Nomad, k8s, ...)
Body: | 0easy
|
Title: Refactoring: soft split xonsh into components
Body: ### Motivation
Our code and docs are mostly like a list:
* We have news with Added/Changed/Deprecated sections.
* We have release notes also as Added/Changed/Deprecated sections.
* We have plain list of modules in `./xonsh/*.py` and in `./xonsh/tests/`
* We have list of labels in issue tracker
* The names of environment variables don't include the name of the component they affect.
This approach looks a bit legacy today because it makes understanding harder:
* What module for what component?
* What component of xonsh was changed? What was changed?
* What component address this test or this issue?
* How to filter certain components for tracing with [xunter](https://github.com/anki-code/xunter)?
To make the xonsh structure cleaner we need to softly group xonsh components/modules. "Soft" means just moving existing code into submodules without globally refactoring everything.
I see three layers for now:
- top level `xonsh/<component>`
- libraries and component implementation in `xonsh/<component>/*`
- common libs `./xonsh/lib`
And here is what I see as first steps:
1. Xonsh modules structure
- [x] Move `ptk_shell`, `dumbshell`, `readline`, `ansi_color`, `color_tools` to `./xonsh/shells/`.
- [x] Move `lexer`, `tokenize`, `ast`, etc to `./xonsh/parsers/`
- [x] Create `xonsh.api`
- [x] Move libraries to `xonsh.lib`
- [ ] Move `xonfig`, `webconfig`, `wizard`, `xonitrb`, `tracer`, to `./xonsh/built_ins/`. Wait for #5543
2. News and release notes
   - [ ] Create a new template with top-level components. Instead of "Added/Changed" we need Main, Prompt, Subprocess, Aliases, History, etc. sections. Structuring the changes this way looks clearer. A real-life example of this is the [release notes in the xxh project](https://github.com/xxh/xxh/releases/tag/0.4.4). Probably one day we can use the top-level module name as the section name to group release notes automatically.
3. Environ
- Align https://xon.sh/envvars.html sections with the list of components.
* First step: #5501
* Next: #5149
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Validate function signature
Body: `_validate_and_modify_signature` takes a function as an argument to validate and modify its signature. we need to check that no arguments (other than the first one) start with `env`:
Example:
```python
# good: a, b do not start with env
def works(env, a, b):
pass
```
```python
# bad: env_something starts with env
def does_not_work(env, a, env_something):
pass
```
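A sketch of the validation itself (a hypothetical standalone version; the real check lives in `_validate_and_modify_signature`):

```python
import inspect

def validate_env_args(fn):
    # reject any parameter after the first whose name starts with "env"
    params = list(inspect.signature(fn).parameters)
    offending = [p for p in params[1:] if p.startswith('env')]
    if offending:
        raise ValueError(
            f'arguments cannot start with "env": {offending}')
    return fn

def works(env, a, b):
    pass

def does_not_work(env, a, env_something):
    pass

validate_env_args(works)  # ok
try:
    validate_env_args(does_not_work)
except ValueError as exc:
    print(exc)  # arguments cannot start with "env": ['env_something']
```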
https://github.com/ploomber/ploomber/blob/2e0d764c8f914ba21480b275c545ea50ff800513/src/ploomber/env/decorators.py#L15
Please create a new file named `test_decorators.py` here: https://github.com/ploomber/ploomber/tree/master/tests/env and add a few tests
| 0easy
|
Title: Newest release of Chromium
Body: Hi,
Pyppeteer's docs say that `pyppeteer-install chrome` would install the latest Chromium release. In fact, in a Linux x64 environment the downloaded release was `588429`, while the actual latest release is `839847`.
A manual update (replacing the files) is possible and everything works fine with the latest Chromium release, so is there a way to upgrade or reinstall Chromium to the actual latest release using pyppeteer's setup tools?
Thanks in advance. | 0easy
|
Title: Labeled environments cannot use runners from provisioned plugins
Body: ## Issue
When a tox environment has a `runner` derived from a plugin that is named in the `requires` key, the environment cannot appear in a top-level label expression.
See minimal reproducer: https://github.com/masenf/tox-label-provision-plugin-runner
```ini
[tox]
requires = foo-runner
labels =
foo = bar, baz
envlist = bar, baz
[testenv:{bar,baz}]
runner = foo
```
running `tox` uses the `envlist`, which works fine.
running `tox -m foo` uses the `label` foo, and blows up before passing off control to the provisioned tox
## Environment
Provide at least:
- OS: ubuntu 20.04
- `pip list` of the host Python where `tox` is installed:
```console
cachetools==5.3.0
chardet==5.1.0
colorama==0.4.6
distlib==0.3.6
filelock==3.9.0
packaging==23.0
platformdirs==3.0.0
pluggy==1.0.0
pyproject-api==1.5.0
tomli==2.0.1
tox==4.4.4
virtualenv==20.19.0
```
## Output of running tox
Provide the output of `tox -rvv`:
```console
tox -rvv -m foo
ROOT: 1297 W will run in automatically provisioned tox, host /home/mfurer/.pytools/bin/python is missing [requires (has)]: foo-runner [tox/provision.py:125]
Traceback (most recent call last):
File "/home/mfurer/.pytools/bin/tox", line 8, in <module>
sys.exit(run())
File "/home/mfurer/.pytools/lib/python3.8/site-packages/tox/run.py", line 19, in run
result = main(sys.argv[1:] if args is None else args)
File "/home/mfurer/.pytools/lib/python3.8/site-packages/tox/run.py", line 41, in main
result = provision(state)
File "/home/mfurer/.pytools/lib/python3.8/site-packages/tox/provision.py", line 126, in provision
return run_provision(provision_tox_env, state)
File "/home/mfurer/.pytools/lib/python3.8/site-packages/tox/provision.py", line 144, in run_provision
tox_env: PythonRun = cast(PythonRun, state.envs[name])
File "/home/mfurer/.pytools/lib/python3.8/site-packages/tox/session/env_select.py", line 337, in __getitem__
return self._defined_envs[item].env
File "/home/mfurer/.pytools/lib/python3.8/site-packages/tox/session/env_select.py", line 233, in _defined_envs
self._mark_active()
File "/home/mfurer/.pytools/lib/python3.8/site-packages/tox/session/env_select.py", line 321, in _mark_active
self._defined_envs_[env_name].is_active = True
KeyError: 'bar'
```
## Minimal example
https://github.com/masenf/tox-label-provision-plugin-runner
| 0easy
|
Title: Deprecate the `"default"` colorscale name and `list_all_colorscale_names()`
Body: The only reason we still need `list_all_colorscale_names()` is that we decided to add an extra `"default"` named color-scale to the list of available continuous color-scales. Not sure why this was done, but to avoid breaking someone else's workflow, let's deprecate these first and completely drop support for them later on.
|
Title: [Core] Verbose errors with ctrl+c on fetching status from stale controller
Body: I have a jobs controller on a stale k8s cluster. `sky status` gets stuck and ctrl + c shows a cryptic error (instead of a simple KeyboardInterrupt). We should suppress/clean up this error.
```
(base) ➜ ~ sky status
Clusters
NAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND
test 1 week ago 1x Kubernetes(2CPU--2GB) UP - sky launch -c test --cloud...
sky-jobs-controller-2ea485ea 5 days ago 1x Kubernetes(2CPU--2GB, cpus=2+, mem=2+, disk_size=50) UP - sky jobs launch --cloud k8s...
Managed jobs
Traceback (most recent call last):
File "/Users/romilb/tools/anaconda3/bin/sky", line 8, in <module>
sys.exit(cli())
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 417, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/cli.py", line 875, in invoke
return super().invoke(ctx)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 437, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/cli.py", line 1863, in status
sdk.api_cancel(managed_jobs_queue_request_id, silent=True)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 437, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/annotations.py", line 22, in wrapper
return func(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/client/sdk.py", line 1531, in api_cancel
body = payloads.RequestCancelBody(request_ids=request_ids, user_id=user_id)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/server/requests/payloads.py", line 86, in __init__
super().__init__(**data)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/pydantic/main.py", line 193, in __init__
self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for RequestCancelBody
request_ids
Input should be a valid list [type=list_type, input_value='51785491-a96a-4800-8a61-10e2c8065ba3', input_type=str]
For further information visit https://errors.pydantic.dev/2.8/v/list_type
``` | 0easy
|
Title: Set plugin migration version in between each migration
Body: https://github.com/CTFd/CTFd/blob/e1991e16963b10302baa7cc50d52071a5053bf2f/CTFd/plugins/migrations.py#L72-L77
This code here probably should be setting the plugin version in between each migration so that if a migration fails it doesn't need to be started from the beginning again. | 0easy
|
Title: Fix "&gt;" instead of ">" on "concepts" documentation page
Body: On https://slack.dev/bolt-python/concepts, at line 12 of "Example with AWS Lambda", "\>" is displayed instead of the ">" symbol. To fix this, remove the "\<span class="p">;\</span>" after it
### Category (place an `x` in each of the `[ ]`)
* [ ] **slack_bolt.App** and/or its core components
* [ ] **slack_bolt.async_app.AsyncApp** and/or its core components
* [ ] Adapters in **slack_bolt.adapter**
* [x] Others | 0easy
|
Title: v0.4
Body: The new version of [django-graphql-jwt](https://github.com/flavors/django-graphql-jwt) (v0.3.1) is not compatible with the v0.3.X version of this package, so we need to release the v0.4.X.
[Here](https://github.com/flavors/django-graphql-jwt/compare/0.3.0...v0.3.1) is the difference from 0.3.0 to 0.3.1
[Here](https://github.com/PedroBern/django-graphql-auth/issues/25#issuecomment-721799884) is a suggestion of how to solve it.
## Need maintainers
Please if you have the time and want to work on this package, all PRs are welcome!
| 0easy
|
Title: Very small range data can crash application due to divide by zero issues
Body: I've hit crashes when trying to plot degenerate y data that's very close to 0 but not actually zero. along the lines of a series of 10^-310
This causes the following 2 functions to blow up: when reasonable numbers are divided by almost zero, the result winds up as inf or nan in some of the code.
https://github.com/pyqtgraph/pyqtgraph/blob/243c287044ca5a720b02b37e9a456d7bed1adfa3/pyqtgraph/graphicsItems/AxisItem.py#L972
I believe that this needs an epsilon in generateDrawSpecs:
if dif == 0:
->
if math.fabs(dif) < 0.00001:
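The failure mode is easy to reproduce in isolation; a sketch of the proposed epsilon guard (hypothetical helper name):

```python
import math

def span_is_degenerate(span, eps=1e-5):
    # `span == 0` misses denormal spans like 1e-310, which still
    # overflow to inf once a reasonable number is divided by them
    return math.fabs(span) < eps

tiny = 1e-310
print(tiny == 0)                 # False: the old `== 0` check lets it through
print(math.isinf(100.0 / tiny))  # True: ...and the later division blows up
print(span_is_degenerate(tiny))  # True: the epsilon guard catches it
print(span_is_degenerate(0.5))   # False: normal spans are unaffected
```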
https://github.com/pyqtgraph/pyqtgraph/blob/243c287044ca5a720b02b37e9a456d7bed1adfa3/pyqtgraph/graphicsItems/ViewBox/ViewBox.py#L1656
and updateMatrix:
if vr.height() == 0 or vr.width() == 0:
->
if math.fabs(vr.height()) < 0.00001 or vr.width() == 0:
vr.width might need it as well for a vertical plot | 0easy
|
Title: Implement `color_discrete_map`
Body: Implement and support a new `color_discrete_map` parameter.
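The intended semantics can be sketched as a plain function (hypothetical helper; the fallback color is an illustrative placeholder):

```python
def apply_color_discrete_map(values, color_map, fallback='#636efa'):
    # hypothetical: map each category directly to its color,
    # falling back to a default for unmapped categories
    return [color_map.get(v, fallback) for v in values]

colors = apply_color_discrete_map(
    ['cat', 'dog', 'cat', 'fish'],
    {'cat': '#00cc96', 'dog': '#ef553b'})
print(colors)  # ['#00cc96', '#ef553b', '#00cc96', '#636efa']
```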
References:
- https://plotly.com/python/discrete-color/#directly-mapping-colors-to-data-values | 0easy
|
Title: PEP-484 Type Hints
Body: **Is your feature request related to a problem? Please describe.**
We use python with strict typing at our organization which means we have to add a bunch of "type: ignore" comments wherever this library is used. Adding types would make working with this library much easier and require much less context switching between the docs and the editor thanks to autocomplete.
**Describe the solution you'd like**
I understand a bunch of functionality in this library is very hard to type (dynamic clients, etc.), so maybe this effort could start with the JOSE module.
**Describe alternatives you've considered**
Alternatively, type stubs for this library could be added to the [typeshed](https://github.com/python/typeshed) project but that is not ideal as it would be very easy for those to get out of sync. | 0easy
|
Title: Sortino ratio of 4,000?
Body: This may just be a problem with data, but I am getting blown up sortino ratios. I thought they had to be much smaller.
For instance:
```python
import yfinance as yf
msft = yf.Ticker("ETH-USD")
def getSortino(h, p):
sh = []
for i in range(0, len(h)):
if i > p:
a = h['Close'].iloc[i-p:i]
sh.append(ta.sortino_ratio(a))
else: sh.append(0)
return sh
h['rso364'] = getSortino(h, 364)
h['rso100'] = getSortino(h, 100)
h['rso1000'] = getSortino(h, 1000)
h.plot(y=['rso100', 'rso364', 'rso1000','Close'], linewidth=0.5)
```
Gives me a graph like [this](https://www.dropbox.com/s/mjpryr5etumunge/sortino.png?dl=0), I don't believe max values should be in the range of 4,000. Perhaps there is some missing data or something, but not sure what could cause this...
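The blow-up is usually a near-zero denominator rather than bad data; a plain-Python sketch of an (unannualized, hypothetical) sortino shows it:

```python
import statistics

def sortino(returns, target=0.0):
    # hypothetical unannualized sketch: mean excess return over
    # downside deviation (root-mean-square of negative excess)
    excess = [r - target for r in returns]
    downside = [min(e, 0.0) ** 2 for e in excess]
    dd = (sum(downside) / len(returns)) ** 0.5
    return statistics.mean(excess) / dd if dd else float('inf')

# a near-monotonic uptrend has almost no downside deviation,
# so the denominator is tiny and the ratio explodes
steady_up = [0.01] * 99 + [-0.0001]
print(sortino(steady_up))  # ~990: huge, but arithmetically correct
```

Strongly trending windows (like ETH over a 364-day lookback) can therefore produce ratios in the thousands without any data error.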
PS: I keep finding this package extremely useful. | 0easy
|
Title: Add 'sklearn.ensemble._stacking.StackingClassifier'
Body: A user requested us to add `sklearn.ensemble._stacking.StackingClassifier`. (See [doc](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.StackingClassifier.html)).
At first glance this appears it should be reasonably easy to implement/similar to our other classifier converters. | 0easy
|
Title: compatibility with IPython 8
Body: IPython just had a major release and one of the tests is breaking so I pinned the version in `setup.py`. Ploomber actually works fine, it seems like some of the testing config is incompatible with the new IPython internals.
Even a diagnosis of next steps will be useful here! | 0easy
|
Title: Add add_count and add_total functions
Body: I know there is functionality with groupby_agg here, so I have been going back and forth on this one. But like the tidyverse there are helper functions the are more direct like tally or add_count, where you could use summarize() and count() or n() together to get the same output.
I am not deeply invested in this suggestion, so if you think that groupby_agg can work here alone, I am fine with that.
Here are the two proposed functions I could possibly add:
add_count function
```
@pf.register_dataframe_method
def add_count(df: pd.DataFrame, count_column: str, new_column_name: str, include_na: bool):
    """Add a count-by column for the target column."""
    df[new_column_name] = df.groupby([count_column])[count_column].transform('count')
    if include_na:
        # groupby drops NaN keys, so fill those rows with the number of NaNs
        num = df[new_column_name].isna().sum()
        df[new_column_name].fillna(num, inplace=True)
    return df
```
add_total function
```
@pf.register_dataframe_method
def add_total(df: pd.DataFrame, sum_columns: list, new_column_name: str, axis: int, skipna: bool):
    """Add a row-wise total column (axis=1) or a grand total (axis=0) over the target columns."""
    if axis == 1:
        df[new_column_name] = df[sum_columns].sum(axis=axis, skipna=skipna)
    else:
        df[new_column_name] = df[sum_columns].sum(axis=axis, skipna=skipna).sum()
    return df
``` | 0easy
|
Title: Unit Test `pydantic_ai_examples`
Body: We currently unit test examples in the documentation (including confirming output). Let's do the same for `pydantic_ai_examples` scripts so that our recommended usage patterns are confirmed to behave as expected. | 0easy
|
Title: [Bug] Qwen2.5-VL-72B image input not working in SGLang, works fine in vLLM
Body: ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
### 🧾 Description:
When deploying `qwen2.5-vl-72b-awq` using SGLang, image inputs (via `image_url`) are not correctly handled. The same prompt works as expected in vLLM, where the model successfully describes the image.
### Reproduction
### ✅ Reproduction Steps:
#### ✅ SGLang Launch Command:
```bash
python -m sglang.launch_server \
--model-path qwen-vl-72b \
--port 30000 \
--trust-remote-code \
--host 0.0.0.0 \
--mem-fraction-static 0.8 \
--tp 4 \
--tool-call-parser qwen25
```
#### ✅ OpenAI-Compatible API Call (cURL):
```bash
curl -X POST "http://0.0.0.0:30000/v1/chat/completions" \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "qwen2.5-vl",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "describe this picture"
},
{
"type": "image_url",
"image_url": {
"url": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"
}
}
]
}
],
"top_p": 0.8
}'
```
---
### 🧾 SGLang Response:
```json
{
"id": "803d3c01743b4429b61c0a83d60eda5b",
"object": "chat.completion",
"created": 1742528000,
"model": "qwen2.5-vl",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "I'm sorry, but I cannot see any picture attached to your message. Could you please provide more information or upload the picture again? I'll do my best to describe it for you."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 22,
"completion_tokens": 41,
"total_tokens": 63
}
}
```
---
### ✅ Comparison with vLLM:
Using the exact same model and cURL request, the image is successfully described in the vLLM deployment. This confirms that the issue lies not with the prompt or model, but with how SGLang handles `image_url` content parts in the message payload.
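For reference, the same request body can be built from Python; this sketch only constructs the JSON payload (the model name, prompt, and sampling settings are the ones from the cURL call above, and the endpoint/key are placeholders):

```python
import json

def build_chat_payload(text: str, image_url: str, model: str = "qwen2.5-vl", top_p: float = 0.8) -> str:
    """Build an OpenAI-compatible chat payload mixing text and image_url content parts."""
    payload = {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "top_p": top_p,
    }
    return json.dumps(payload)

body = build_chat_payload(
    "describe this picture",
    "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg",
)
```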
---
### 📌 Expected Behavior:
SGLang should support OpenAI-compatible image inputs by correctly parsing `messages.content[].image_url.url` and feeding the image into the model’s visual encoder.
### Environment
### 🧪 Environment:
- Model: `qwen2.5-vl-72b-awq`
- Deployment: SGLang 0.4.4.post1
- API Protocol: OpenAI-compatible Chat Completions API
- vLLM Behavior: ✅ Working as expected | 0easy
|
Title: fix: improve error message for bad regex glob
Body: A user on Matrix / Gitter was trying to run something like (simplified)
```xonsh
ls `*`
```
which yields
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gil/mambaforge/lib/python3.10/site-packages/xonsh/built_ins.py", line 146, in pathsearch
o = func(s)
File "/home/gil/mambaforge/lib/python3.10/site-packages/xonsh/built_ins.py", line 121, in regexsearch
return reglob(s)
File "/home/gil/mambaforge/lib/python3.10/site-packages/xonsh/built_ins.py", line 88, in reglob
return reglob(d, parts, i=0)
File "/home/gil/mambaforge/lib/python3.10/site-packages/xonsh/built_ins.py", line 95, in reglob
regex = re.compile(parts[i])
File "/home/gil/mambaforge/lib/python3.10/re.py", line 251, in compile
return _compile(pattern, flags)
File "/home/gil/mambaforge/lib/python3.10/re.py", line 303, in _compile
p = sre_compile.compile(pattern, flags)
File "/home/gil/mambaforge/lib/python3.10/sre_compile.py", line 788, in compile
p = sre_parse.parse(p, flags)
File "/home/gil/mambaforge/lib/python3.10/sre_parse.py", line 955, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "/home/gil/mambaforge/lib/python3.10/sre_parse.py", line 444, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/home/gil/mambaforge/lib/python3.10/sre_parse.py", line 669, in _parse
raise source.error("nothing to repeat",
re.error: nothing to repeat at position 0
```
because it should be
```
ls `.*`
```
I don't think that there are other situations which would raise an `re.error` with the message about "position 0", so we can probably catch that error and suggest adding a `.` to the offending regular expression.
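A rough sketch of that suggestion (hypothetical helper; the actual fix would go in `reglob` in `xonsh/built_ins.py`, and the exact wording is up for debate):

```python
import re

def compile_reglob_part(part: str) -> re.Pattern:
    """Compile one component of a regex glob, with a friendlier error for bad patterns."""
    try:
        return re.compile(part)
    except re.error as e:
        if "nothing to repeat" in str(e):
            # a leading *, +, or ? usually means the user wrote a shell glob;
            # suggest the regex equivalent by prefixing a '.'
            raise ValueError(
                f"regex glob {part!r} is invalid ({e}); did you mean {'.' + part!r}?"
            ) from e
        raise
```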
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: `test_logfire_api` doesn't look at `logfire_api.__all__`
Body: It just uses `logfire.__all__` in both versions of `test_runtime`. `logfire_api.__all__` doesn't actually exist at runtime when logfire isn't importable. | 0easy
|
Title: Dialogs: Default option for `Get Selection From User`
Body: Hi! I am not sure whether this was already requested before (I could not find it for now).
I would like the `Get Selection From User` keyword to have an option for setting a default value, so the user only has to press Enter to accept it, or can pick another value if they want a different selection.
Was something like this requested before? | 0easy
|
Title: bump pytest version
Body: one of the tests yields this warning: `DeprecationWarning: You're using an outdated version of pytest. Newer releases of pytest-asyncio will not be compatible with this pytest version. Please update pytest to version 7 or later. warnings.warn(`
we should update our pytest version in v2
| 0easy
|