text (string, 20–57.3k chars) | labels (class label, 4 classes)
---|---|
Title: Add support for creating cartoee maps with an interactive GUI
Body:

 | 1medium
|
Title: [Long-term] Cross-language support
Body: ### Existing discussions
#142
#197
### Q & A
[LxnChan](https://github.com/LxnChan)
Given enough samples, I'd like to train a Japanese model myself; I'm not sure whether that's feasible.
Answer: just like turning a Chinese sentence such as "你好" into "ni2 hao3", you only need to find a TTS front end that processes Japanese into phonemes.
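To make the analogy concrete, here is a minimal sketch of such a front end. The library choices (pypinyin for Chinese, pyopenjtalk for Japanese) are illustrative assumptions, not the project's confirmed pipeline:
```python
# Illustrative sketch: a TTS front end turns raw text into phonemes.
# Library choices here are assumptions, not the project's actual code.
from pypinyin import Style, lazy_pinyin

# Chinese: "你好" -> ['ni3', 'hao3'] (the "ni2 hao3" above reflects
# third-tone sandhi, which a fuller front end would also apply)
print(lazy_pinyin("你好", style=Style.TONE3))

# Japanese would go through a grapheme-to-phoneme front end the same way:
# import pyopenjtalk
# print(pyopenjtalk.g2p("こんにちは"))  # e.g. "k o N n i ch i w a"
```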
> The author, constrained by limited bandwidth recently, can only handle small bugs on his own, while many enthusiasts and developers in the issue tracker want to learn from the project or adapt it to their own needs, but the discussion is too scattered to go anywhere. So that the project and the AI can keep providing value to everyone and we can learn together, I have created long-term discussion threads in the issue tracker by topic; if a thread gathers more than 20 participants, a corresponding chat group will also be created.
> - How to tweak parameters for more realistic voice-cloning results 435
> - How to modify the model for better results 436
> - Training to clone a specific person's voice & finetuning 437
> - Academic/paper discussion/training analysis 438
> - Cross-language support 440
> - Engineering/new scenario discussion (absolutely no malicious use & legal compliance) 439
| 3misc
|
Title: model.fit - class_weight broken
Body: It seems `tf.argmax` returns dtype=int64 in the true branch, while int32 is returned in the false branch.
https://github.com/keras-team/keras/blob/a503a162fc5b4120a96a1f7203a1de841f0601e2/keras/src/trainers/data_adapters/tf_dataset_adapter.py#L129-L133
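A minimal sketch of the mismatch and one possible fix, mirroring the two branches at the lines linked above (illustrative only, not the actual patch):
```python
import tensorflow as tf

y = tf.zeros((2048, 1))  # hypothetical label batch, as in the trace below

@tf.function
def broken(pred):
    # Mirrors the linked code: argmax defaults to int64, the other branch
    # is cast to int32, and a traced tf.cond requires matching dtypes.
    return tf.cond(
        pred,
        true_fn=lambda: tf.argmax(y, axis=1),                                  # int64
        false_fn=lambda: tf.cast(tf.round(tf.squeeze(y, axis=-1)), tf.int32),  # int32
    )

@tf.function
def fixed(pred):
    # Pinning argmax's output_type makes both branches int32.
    return tf.cond(
        pred,
        true_fn=lambda: tf.argmax(y, axis=1, output_type=tf.int32),
        false_fn=lambda: tf.cast(tf.round(tf.squeeze(y, axis=-1)), tf.int32),
    )
```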
Stacktrace:
```Python traceback
Traceback (most recent call last):
File "/home/example/workspace/fir/trainer/train.py", line 122, in <module>
model.fit(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 113, in error_handler
return fn(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 282, in fit
epoch_iterator = TFEpochIterator(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 664, in __init__
super().__init__(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/epoch_iterator.py", line 64, in __init__
self.data_adapter = data_adapters.get_data_adapter(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/data_adapters/__init__.py", line 56, in get_data_adapter
return TFDatasetAdapter(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/data_adapters/tf_dataset_adapter.py", line 30, in __init__
dataset = dataset.map(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 2341, in map
return map_op._map_v2(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/map_op.py", line 43, in _map_v2
return _MapDataset(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/map_op.py", line 157, in __init__
self._map_func = structured_function.StructuredFunctionWrapper(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 265, in __init__
self._function = fn_factory()
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1251, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1221, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 696, in _initialize
self._concrete_variable_creation_fn = tracing_compilation.trace_function(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 178, in trace_function
concrete_function = _maybe_define_function(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 283, in _maybe_define_function
concrete_function = _create_concrete_function(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 310, in _create_concrete_function
traced_func_graph = func_graph_module.func_graph_from_py_func(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1059, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 599, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 231, in wrapped_fn
ret = wrapper_helper(*args)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 161, in wrapper_helper
ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 690, in wrapper
return converted_call(f, args, kwargs, options=options)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 377, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in _call_unconverted
return f(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/data_adapters/tf_dataset_adapter.py", line 129, in class_weights_map_fn
y_classes = tf.__internal__.smart_cond.smart_cond(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/framework/smart_cond.py", line 57, in smart_cond
return cond.cond(pred, true_fn=true_fn, false_fn=false_fn,
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/ops/cond_v2.py", line 880, in error
raise TypeError(
TypeError: true_fn and false_fn arguments to tf.cond must have the same number, type, and overall structure of return values.
true_fn output: Tensor("cond/Identity:0", shape=(2048,), dtype=int64)
false_fn output: Tensor("cond/Identity:0", shape=(2048,), dtype=int32)
Error details:
Tensor("cond/Identity:0", shape=(2048,), dtype=int64) and Tensor("cond/Identity:0", shape=(2048,), dtype=int32) have different types
``` | 2hard
|
Title: [Bug] Exporting to ipynb with comments at the end of the line
Body: macOS
nvim 9.4
## Description
Exporting outputs from code cells that have comments at the end of a line causes cells to not match up. I need to investigate a little further to figure out the cause.
## Reproduction Steps
Create a Jupyter notebook with this cell:
```python
print("hi") # problem
```
Open it in nvim, run it with molten, and try to export the cell. The cell contents don't match.
Print-statement debugging to the rescue, though I'm not sure yet why this would be a problem.
| 1medium
|
Title: Please support Docker deployment and proxy configuration
Body: | 1medium
|
Title: Scanning BLE devices returns empty metadata
Body: * bleak version: 0.10.0
* Python version: Python 3.6.13 Anaconda, Inc.
* Operating System: Windows 10 Pro
### Description
Hi,
I wanted to test scanning for devices and printing the details of each scanned device (name, metadata, etc.).
The devices are detected (name, address); nevertheless, the metadata field `metadata = {'uuids': [], ...}` does not contain the service UUIDs, which I expect to be a list of one element, in my case `'uuids': ['680c21d9-c946-4c1f-9c11-baa1c21329e7']`.
The reason I need this is that I want to filter the scan output, application-side, on the device's service UUID to only report devices of interest.
I also checked issue 437 and the "Fix KeyError: 'delegate' in CoreBluetooth backend" commit. I am using current libraries (updated after issue 437). Am I doing something wrong?
### What I Did
The python code:
```python
async def run():
    scanner = BleakScanner()
    scanner.register_detection_callback(detection_callback)
    await scanner.start()
    await asyncio.sleep(sleep_time)
    await scanner.stop()
    scanned_devices = scanner.discovered_devices
    for d in scanned_devices:
        if d.address in Dict:
            print(d)
            print(d.metadata.values())
```
The output of this code:
```
E3:EB:F8:F2:9E:9D: Nordic_UART
dict_values([[], {89: b''}])
```
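For completeness, a sketch of what `detection_callback` might look like; in bleak, detection callbacks receive the device plus its advertisement data, and `AdvertisementData.service_uuids` is another place the advertised UUIDs may show up (whether it is populated here is exactly what's in question):
```python
def detection_callback(device, advertisement_data):
    # advertisement_data is a bleak AdvertisementData instance; its
    # service_uuids field may carry the advertised service UUIDs even
    # when the device metadata dict does not.
    print(device.address, advertisement_data.service_uuids)
```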
| 1medium
|
Title: CONTRIBUTING.md references wrong requirements file
Body: CONTRIBUTING.md says to run `pip install -r requirements-dev.txt` but I believe it should be `pip install -r requirements-test.txt`.
Also, is it worth mentioning that `pip install -r requirements.txt` or `pip install -r requirements-dev.txt` has to be run before `pip install -r requirements-test.txt`, or making requirements-test.txt reference requirements.txt so it's mildly more straightforward to install and test? | 0easy
|
Title: Support Barrier Execution Mode for Horovod on Spark >= 2.4
Body: **Is your feature request related to a problem? Please describe.**
When I use Horovod on Spark to train a distributed DL model, Horovod performs some additional actions to transfer data to the Horovod processes:
- DataFrame partitions are saved to some distributed storage (for example, HDFS) using Petastorm
- Partitions are read back from this storage using a client (for example, hadoop) to deliver data to the Horovod processes.
These actions can decrease processing speed when we work with big data.
**Describe the solution you'd like**
I suggest using Barrier Execution Mode, which was introduced in Spark 2.4. Horovod can repartition the DataFrame to the number of executors and use `mapInPandas()` to convert each Spark DataFrame partition into an iterator of `pd.DataFrame`. Enabling Arrow will increase conversion speed. Horovod can then turn the iterator of `pd.DataFrame` into a framework-specific data loader. This logic can be wrapped into [Torch|Keras|Lighting]Estimator or exposed through a special function `horovod.spark.run_on_dataframe()` like `horovod.spark.run()`; a sketch of the idea follows.
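A minimal sketch of that flow using Spark's RDD barrier API directly, under the assumption of an active SparkSession; this is illustrative only, not Horovod's actual implementation:
```python
from pyspark import BarrierTaskContext
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10_000).toDF("feature")  # hypothetical training data
num_executors = 4                         # assumption: one task per executor

def train_partition(rows):
    ctx = BarrierTaskContext.get()
    ctx.barrier()  # all tasks start together, so Horovod could rendezvous here
    # Here the rows would be batched into pd.DataFrame objects and fed into a
    # framework-specific data loader; we just count them instead.
    yield sum(1 for _ in rows)

counts = (
    df.repartition(num_executors)
    .rdd.barrier()
    .mapPartitions(train_partition)
    .collect()
)
```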
**Describe alternatives you've considered**
As I understand it, Databricks uses a similar design for [HorovodRunner](https://docs.databricks.com/en/machine-learning/train-model/distributed-training/horovod-runner.html), and [XGBoost on PySpark](https://github.com/dmlc/xgboost/blob/v1.7.5/python-package/xgboost/spark/core.py#L855) uses a similar approach.
**Additional context**
This feature could increase the popularity of Horovod on Spark. In this [presentation](https://youtu.be/gMT_ONmI9RM?t=562), Uber engineers describe the problem.
If you support this idea but don't have time to implement it, I can start implementing it as a contribution to Horovod.
I'll be waiting for your feedback. | 2hard
|
Title: Feature request: support for copy to clipboard for code blocks
Body: Looks like GitHub has recently added a nice 'copy to clipboard' button for every code block rendered from Markdown. Without knowing a lot about how this is implemented, or whether it is easy or hard for grip to do, it would be awesome if this could be incorporated into the grip output as well.
Thanks for a great tool! | 0easy
|
Title: Ckeditor <TypeError: Cannot convert undefined or null to object>
Body: ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Following the guide [sqladmin ckeditor](https://aminalaee.dev/sqladmin/cookbook/using_wysiwyg/), I got this error:
```
TypeError: Cannot convert undefined or null to object
at Function.keys (<anonymous>)
at ou.init (datacontroller.ts:344:42)
at ou.<anonymous> (observablemixin.ts:277:33)
at ou.fire (emittermixin.ts:241:31)
at <computed> [as init] (observablemixin.ts:281:17)
at classiceditor.ts:227:31
```
### Steps to reproduce the bug
Follow the ckeditor embedding documentation
### Expected behavior
_No response_
### Actual behavior
```
TypeError: Cannot convert undefined or null to object
at Function.keys (<anonymous>)
at ou.init (datacontroller.ts:344:42)
at ou.<anonymous> (observablemixin.ts:277:33)
at ou.fire (emittermixin.ts:241:31)
at <computed> [as init] (observablemixin.ts:281:17)
at classiceditor.ts:227:31
```
### Debugging material
_No response_
### Environment
Sqladmin 0.17.0
### Additional context
Everything starts working fine if you add the following code
```html
<script src="https://cdn.ckeditor.com/ckeditor5/39.0.1/classic/ckeditor.js"></script>
<script>
    ClassicEditor
        .create(document.querySelector('#content'))
        .catch(error => {
            console.error(error);
        });
</script>
```
straight into the editor.html template before the last closing block; however, I don't think this is the correct behavior.
|
Title: Request Adding Leaflet.TileLayer.MBTiles Plugin Support
Body: **Is your feature request related to a problem? Please describe.**
No. This feature request adds support for displaying raster tile base maps in .mbtiles format.
**Describe the solution you'd like**
1. Make [Leaflet.TileLayer.MBTiles](https://gitlab.com/IvanSanchez/Leaflet.TileLayer.MBTiles#) its own Folium plugin.
2. Put the above plugin on a CDN so users aren't forced to store it locally. The original author's CDN link is broken, I think.
Currently, you can use Folium and [Leaflet.TileLayer.MBTiles](https://gitlab.com/IvanSanchez/Leaflet.TileLayer.MBTiles#) to display .mbtiles format, but it's a bit hacky.
I used roughly the following snippet to override `TileLayer._template`:
```python
# override defaults to use the plugin
folium.raster_layers.TileLayer._template = Template(u"""
{% macro script(this, kwargs) %}
    var {{ this.get_name() }} = L.tileLayer.mbTiles(  <--------
        {{ this.tiles|tojson }},
        {{ this.options|tojson }}
    ).addTo({{ this._parent.get_name() }});
{% endmacro %}
""")

# make a Map with an .mbtiles basemap
m = folium.Map(
    location=[35.650787, -117.661728],
    tiles='my_tiles.mbtiles',
    attr=attr
)

# add OpenStreetMap
layer = folium.TileLayer(
    tiles='https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png',
    name='OpenStreetMap Online',
    attr='© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors',
    overlay=True,
    control=True,
    show=False
).add_to(m)

# function to return _template back to its default after the fact
set_tile_layer_default(layer)
```
Even then, you have to override `TileLayer._template` back to its default before `m.save('index.html')` for each additional (non-mbtiles) basemap you add.
It would be really nice to:
```python
from folium.plugins import MBTiles

m = MBTiles.Map(
    location=[14.0, 15.0],
    tiles='your_tiles_url.mbtiles',
    attr='attr text'
)
```
**Describe alternatives you've considered**
The only alternative I can see is something like the above snippet. There was some discussion in #351.
**Additional context**
For those unfamiliar, one open source way to make your own tiles is by using [TileMill](https://github.com/tilemill-project/tilemill). TileMill exports the usual {z}{x}{y} directory structure, as well as the Mapbox raster .mbtiles format. Mbtiles are preferable in some cases, because you only have to manage 1 file instead of (no joke) millions of .png files.
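To illustrate the single-file point: per the MBTiles spec, an .mbtiles file is just a SQLite database with a `tiles` table, so one can pull a raster tile out of it directly. A sketch, with a hypothetical file name:
```python
import sqlite3

con = sqlite3.connect("my_tiles.mbtiles")  # hypothetical file
row = con.execute(
    "SELECT tile_data FROM tiles "
    "WHERE zoom_level = ? AND tile_column = ? AND tile_row = ?",
    (0, 0, 0),
).fetchone()
if row is not None:
    png_bytes = row[0]  # one raster tile as PNG bytes
con.close()
```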
Also, it's great when one open source project ties cleanly into another.
**Implementation**
folium is maintained by volunteers. Can you help make a PR if we want to implement this feature?
I would be happy to write this PR. I have working code which accomplishes the task, just not sure how to best turn it into a plugin.
If somebody could please comment:
1. Is this idea acceptable and feasible?
2. A general best practices and implementation list.
Thanks.
| 1medium
|
Title: Add support for request to use a specific session
Body: I'm scraping a single site with 2 different logins, but I don't see a way to specify which session (and its cookies) a request should use.
The documentation for this is lacking.
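For contrast, a sketch of what I mean with plain `requests` (not this library), where each Session keeps its own cookie jar; URLs and credentials are placeholders:
```python
import requests

session_a = requests.Session()
session_b = requests.Session()

# each login's cookies land in its own session
session_a.post("https://example.com/login", data={"user": "a", "pw": "..."})
session_b.post("https://example.com/login", data={"user": "b", "pw": "..."})

page_a = session_a.get("https://example.com/dashboard")  # uses login A's cookies
page_b = session_b.get("https://example.com/dashboard")  # uses login B's cookies
```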
Thanks | 1medium
|
Title: Replace, or complement, `pmdarima` with `statsforecast.AutoARIMA`
Body: The Nixtla folks claim that their `statsforecast.AutoARIMA` is faster than `pmdarima` and `prophet`:

([source](https://nixtla.github.io/statsforecast/examples/autoarima_vs_prophet.html))
This is the API: https://nixtla.github.io/statsforecast/models.html#autoarima-1
My understanding is that both pmdarima and statsforecast implement Hyndman & Khandakar (2008), "Automatic Time Series Forecasting: The forecast Package for R".
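For reference, a minimal sketch of calling `statsforecast.AutoARIMA` based on the API docs linked above; the column names follow statsforecast's long-format convention (verify against the current docs):
```python
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA

df = pd.DataFrame({
    "unique_id": ["series_1"] * 24,
    "ds": pd.date_range("2020-01-01", periods=24, freq="MS"),
    "y": range(24),
})

sf = StatsForecast(models=[AutoARIMA(season_length=12)], freq="MS")
forecast = sf.forecast(df=df, h=6)  # 6-step-ahead forecast
```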
Paging @mergenthaler, @FedericoGarza, and @kdgutier 👋🏽😊 | 1medium
|
Title: pandasgui shuts down when run outside an interactive prompt; pandasgui.show(df, settings={'block': True}) is needed.
Body: Hi @adamerose ,
I am trying pandasgui in a Python script; unlike in an interactive prompt, it shuts down immediately after running.
I find `settings={'block': True}` is necessary; would you please call this out in the demo examples?
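A minimal sketch of the workaround from the title:
```python
import pandas as pd
import pandasgui

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
# block=True keeps the GUI alive until the window is closed,
# which is what a non-interactive script needs
pandasgui.show(df, settings={"block": True})
```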
Thank you; pandasgui is a nice tool!
| 0easy
|
Title: [🕹️] Copilot for Terminal Code Side-Quest
Body: ### What side quest or challenge are you solving?
Copilot for Terminal
### Points
300-750
### Description
Create a custom copilot that integrates a new language model (e.g., Cohere, Llama3.2, etc.) into OpenBB's Terminal.
### Provide proof that you've completed the task
... | 1medium
|
Title: request.user returns AnonymousUser
Body: Is there a method to return the authorised user when using JSONWebTokenAuthentication in DEFAULT_AUTHENTICATION_CLASSES? I'm calling a view very similar to this http://stackoverflow.com/a/20569205 but this doesn't work as request.user returns AnonymousUser.
| 1medium
|
Title: Docker proxy problem
Body:
## Describe your environment
* Operating system: Docker Engine v20.10.8 (Windows)
* Python Version: Python 3.12.7
* CCXT version: ccxt==4.4.24
* Freqtrade Version: freqtrade 2024.10
## Your question
Here is my config.json
```json
{
  "$schema": "https://schema.freqtrade.io/schema.json",
  "max_open_trades": 3,
  "stake_currency": "USDT",
  "stake_amount": "unlimited",
  "tradable_balance_ratio": 0.99,
  "fiat_display_currency": "USD",
  "dry_run": true,
  "dry_run_wallet": 1000,
  "cancel_open_orders_on_exit": false,
  "trading_mode": "futures",
  "margin_mode": "isolated",
  "unfilledtimeout": {
    "entry": 10,
    "exit": 10,
    "exit_timeout_count": 0,
    "unit": "minutes"
  },
  "entry_pricing": {
    "price_side": "same",
    "use_order_book": true,
    "order_book_top": 1,
    "price_last_balance": 0.0,
    "check_depth_of_market": {
      "enabled": false,
      "bids_to_ask_delta": 1
    }
  },
  "exit_pricing": {
    "price_side": "same",
    "use_order_book": true,
    "order_book_top": 1
  },
  "exchange": {
    "name": "okx",
    "key": "",
    "secret": "",
    "ccxt_config": {
      "httpsProxy": "http://127.0.0.1:7890"
    },
    "ccxt_async_config": {},
    "pair_whitelist": [],
    "pair_blacklist": []
  },
  "pairlists": [
    {
      "method": "VolumePairList",
      "number_assets": 20,
      "sort_key": "quoteVolume",
      "min_value": 0,
      "refresh_period": 1800
    }
  ],
  "telegram": {
    "enabled": true,
    "token": "",
    "chat_id": ""
  },
  "api_server": {
    "enabled": true,
    "listen_ip_address": "0.0.0.0",
    "listen_port": 8080,
    "verbosity": "error",
    "enable_openapi": false,
    "jwt_secret_key": "",
    "ws_token": "",
    "CORS_origins": [],
    "username": "freqtrader",
    "password": "freqtrader"
  },
  "bot_name": "freqtrade",
  "initial_state": "running",
  "force_entry_enable": false,
  "internals": {
    "process_throttle_secs": 5
  }
}
```
When I run it with Docker, I get this log:
```
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
freqtrade | return future.result()
freqtrade | ^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 630, in _api_reload_markets
freqtrade | raise TemporaryError(
freqtrade | freqtrade.exceptions.TemporaryError: Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade | 2024-11-23 08:25:32,145 - freqtrade - ERROR - Could not load markets, therefore cannot start. Please investigate the above error for more details.
freqtrade | Exception ignored in: <module 'threading' from '/usr/local/lib/python3.12/threading.py'>
freqtrade | Traceback (most recent call last):
freqtrade | File "/usr/local/lib/python3.12/threading.py", line 1624, in _shutdown
freqtrade | lock.acquire()
freqtrade | File "/freqtrade/freqtrade/commands/trade_commands.py", line 18, in term_handler
freqtrade | raise KeyboardInterrupt()
freqtrade | KeyboardInterrupt:
freqtrade | 2024-11-23 08:31:34,355 - freqtrade - INFO - freqtrade 2024.10
freqtrade | 2024-11-23 08:31:35,013 - numexpr.utils - INFO - NumExpr defaulting to 16 threads.
freqtrade | 2024-11-23 08:31:38,106 - freqtrade.worker - INFO - Starting worker 2024.10
freqtrade | 2024-11-23 08:31:38,107 - freqtrade.configuration.load_config - INFO - Using config: /freqtrade/user_data/config.json ...
freqtrade | 2024-11-23 08:31:38,115 - freqtrade.loggers - INFO - Verbosity set to 0
freqtrade | 2024-11-23 08:31:38,115 - freqtrade.configuration.configuration - INFO - Runmode set to dry_run.
freqtrade | 2024-11-23 08:31:38,116 - freqtrade.configuration.configuration - INFO - Parameter --db-url detected ...
freqtrade | 2024-11-23 08:31:38,116 - freqtrade.configuration.configuration - INFO - Dry run is enabled
freqtrade | 2024-11-23 08:31:38,117 - freqtrade.configuration.configuration - INFO - Using DB: "sqlite:////freqtrade/user_data/tradesv3.sqlite"
freqtrade | 2024-11-23 08:31:38,117 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 3 ...
freqtrade | 2024-11-23 08:31:38,181 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
freqtrade | 2024-11-23 08:31:38,183 - freqtrade.configuration.configuration - INFO - Using data directory: /freqtrade/user_data/data/okx ...
freqtrade | 2024-11-23 08:31:38,184 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
freqtrade | 2024-11-23 08:31:38,191 - freqtrade.exchange.check_exchange - INFO - Exchange "okx" is officially supported by the Freqtrade development team.
freqtrade | 2024-11-23 08:31:38,192 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
freqtrade | 2024-11-23 08:31:38,229 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy SampleStrategy from '/freqtrade/user_data/strategies/sample_strategy.py'...
freqtrade | 2024-11-23 08:31:38,230 - freqtrade.strategy.hyper - INFO - Found no parameter file.
freqtrade | 2024-11-23 08:31:38,231 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: USDT.
freqtrade | 2024-11-23 08:31:38,232 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: unlimited.
freqtrade | 2024-11-23 08:31:38,232 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with value in config file: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}.
freqtrade | 2024-11-23 08:31:38,233 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with value in config file: 3.
freqtrade | 2024-11-23 08:31:38,233 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'60': 0.01, '30': 0.02, '0': 0.04}
freqtrade | 2024-11-23 08:31:38,234 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 5m
freqtrade | 2024-11-23 08:31:38,234 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.1
freqtrade | 2024-11-23 08:31:38,234 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: False
freqtrade | 2024-11-23 08:31:38,235 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
freqtrade | 2024-11-23 08:31:38,236 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
freqtrade | 2024-11-23 08:31:38,236 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: False
freqtrade | 2024-11-23 08:31:38,236 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: True
freqtrade | 2024-11-23 08:31:38,237 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry': 'limit', 'exit': 'limit', 'stoploss': 'market', 'stoploss_on_exchange': False}
freqtrade | 2024-11-23 08:31:38,237 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'entry': 'GTC', 'exit': 'GTC'}
freqtrade | 2024-11-23 08:31:38,238 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
freqtrade | 2024-11-23 08:31:38,238 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: unlimited
freqtrade | 2024-11-23 08:31:38,239 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 200
freqtrade | 2024-11-23 08:31:38,239 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}
freqtrade | 2024-11-23 08:31:38,239 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
freqtrade | 2024-11-23 08:31:38,240 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
freqtrade | 2024-11-23 08:31:38,240 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_entry_signal: False
freqtrade | 2024-11-23 08:31:38,241 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
freqtrade | 2024-11-23 08:31:38,241 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks: False
freqtrade | 2024-11-23 08:31:38,242 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_buying_expired_candle_after: 0
freqtrade | 2024-11-23 08:31:38,242 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using position_adjustment_enable: False
freqtrade | 2024-11-23 08:31:38,242 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_entry_position_adjustment: -1
freqtrade | 2024-11-23 08:31:38,243 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 3
freqtrade | 2024-11-23 08:31:38,243 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
freqtrade | 2024-11-23 08:31:38,246 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
freqtrade | 2024-11-23 08:31:38,247 - freqtrade.exchange.exchange - INFO - Using CCXT 4.4.24
freqtrade | 2024-11-23 08:31:38,248 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}, 'httpsProxy': 'http://127.0.0.1:7890'}
freqtrade | 2024-11-23 08:31:38,254 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}, 'httpsProxy': 'http://127.0.0.1:7890'}
freqtrade | 2024-11-23 08:31:38,261 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap', 'brokerId': 'ffb5405ad327SUDE'}, 'httpsProxy': 'http://127.0.0.1:7890'}
freqtrade | 2024-11-23 08:31:38,268 - freqtrade.exchange.exchange - INFO - Using Exchange "OKX"
freqtrade | 2024-11-23 08:31:38,296 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT". Retrying still for 3 times.
freqtrade | 2024-11-23 08:31:38,785 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT". Retrying still for 2 times.
freqtrade | 2024-11-23 08:31:39,301 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT". Retrying still for 1 times.
freqtrade | 2024-11-23 08:31:39,816 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT". Giving up.
freqtrade | 2024-11-23 08:31:39,817 - freqtrade.exchange.exchange - ERROR - Could not load markets.
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1091, in _wrap_create_connection
freqtrade | sock = await aiohappyeyeballs.start_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 104, in start_connection
freqtrade | raise first_exception
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 82, in start_connection
freqtrade | sock = await _connect_sock(
freqtrade | ^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 174, in _connect_sock
freqtrade | await loop.sock_connect(sock, address)
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 651, in sock_connect
freqtrade | return await fut
freqtrade | ^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 691, in _sock_connect_cb
freqtrade | raise OSError(err, f'Connect call failed {address}')
freqtrade | ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 7890)
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 210, in fetch
freqtrade | async with session_method(yarl.URL(url, encoded=True),
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 1359, in __aenter__
freqtrade | self._resp: _RetType = await self._coro
freqtrade | ^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 663, in _request
freqtrade | conn = await self._connector.connect(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 563, in connect
freqtrade | proto = await self._create_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1030, in _create_connection
freqtrade | _, proto = await self._create_proxy_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1391, in _create_proxy_connection
freqtrade | transport, proto = await self._create_direct_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1366, in _create_direct_connection
freqtrade | raise last_exc
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1335, in _create_direct_connection
freqtrade | transp, proto = await self._wrap_create_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1106, in _wrap_create_connection
freqtrade | raise client_error(req.connection_key, exc) from exc
freqtrade | aiohttp.client_exceptions.ClientProxyConnectionError: Cannot connect to host 127.0.0.1:7890 ssl:default [Connect call failed ('127.0.0.1', 7890)]
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 626, in _api_reload_markets
freqtrade | return await self._api_async.load_markets(reload=reload, params={})
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in load_markets
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in load_markets
freqtrade | result = await self.markets_loading
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 273, in load_markets_helper
freqtrade | markets = await self.fetch_markets(params)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1416, in fetch_markets
freqtrade | promises = await asyncio.gather(*promises)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1580, in fetch_markets_by_type
freqtrade | response = await self.publicGetPublicInstruments(self.extend(request, params))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 868, in request
freqtrade | return await self.fetch2(path, api, method, params, headers, body, config)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 864, in fetch2
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 855, in fetch2
freqtrade | return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 248, in fetch
freqtrade | raise ExchangeNotAvailable(details) from e
freqtrade | ccxt.base.errors.ExchangeNotAvailable: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 181, in wrapper
freqtrade | return f(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 638, in _load_async_markets
freqtrade | markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
freqtrade | return future.result()
freqtrade | ^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 630, in _api_reload_markets
freqtrade | raise TemporaryError(
freqtrade | freqtrade.exceptions.TemporaryError: Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | During handling of the above exception, another exception occurred:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1091, in _wrap_create_connection
freqtrade | sock = await aiohappyeyeballs.start_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 104, in start_connection
freqtrade | raise first_exception
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 82, in start_connection
freqtrade | sock = await _connect_sock(
freqtrade | ^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 174, in _connect_sock
freqtrade | await loop.sock_connect(sock, address)
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 651, in sock_connect
freqtrade | return await fut
freqtrade | ^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 691, in _sock_connect_cb
freqtrade | raise OSError(err, f'Connect call failed {address}')
freqtrade | ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 7890)
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 210, in fetch
freqtrade | async with session_method(yarl.URL(url, encoded=True),
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 1359, in __aenter__
freqtrade | self._resp: _RetType = await self._coro
freqtrade | ^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 663, in _request
freqtrade | conn = await self._connector.connect(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 563, in connect
freqtrade | proto = await self._create_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1030, in _create_connection
freqtrade | _, proto = await self._create_proxy_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1391, in _create_proxy_connection
freqtrade | transport, proto = await self._create_direct_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1366, in _create_direct_connection
freqtrade | raise last_exc
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1335, in _create_direct_connection
freqtrade | transp, proto = await self._wrap_create_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1106, in _wrap_create_connection
freqtrade | raise client_error(req.connection_key, exc) from exc
freqtrade | aiohttp.client_exceptions.ClientProxyConnectionError: Cannot connect to host 127.0.0.1:7890 ssl:default [Connect call failed ('127.0.0.1', 7890)]
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 626, in _api_reload_markets
freqtrade | return await self._api_async.load_markets(reload=reload, params={})
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in load_markets
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in load_markets
freqtrade | result = await self.markets_loading
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 273, in load_markets_helper
freqtrade | markets = await self.fetch_markets(params)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1416, in fetch_markets
freqtrade | promises = await asyncio.gather(*promises)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1580, in fetch_markets_by_type
freqtrade | response = await self.publicGetPublicInstruments(self.extend(request, params))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 868, in request
freqtrade | return await self.fetch2(path, api, method, params, headers, body, config)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 864, in fetch2
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 855, in fetch2
freqtrade | return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 248, in fetch
freqtrade | raise ExchangeNotAvailable(details) from e
freqtrade | ccxt.base.errors.ExchangeNotAvailable: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 181, in wrapper
freqtrade | return f(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 638, in _load_async_markets
freqtrade | markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
freqtrade | return future.result()
freqtrade | ^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 630, in _api_reload_markets
freqtrade | raise TemporaryError(
freqtrade | freqtrade.exceptions.TemporaryError: Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | During handling of the above exception, another exception occurred:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1091, in _wrap_create_connection
freqtrade | sock = await aiohappyeyeballs.start_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 104, in start_connection
freqtrade | raise first_exception
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 82, in start_connection
freqtrade | sock = await _connect_sock(
freqtrade | ^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 174, in _connect_sock
freqtrade | await loop.sock_connect(sock, address)
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 651, in sock_connect
freqtrade | return await fut
freqtrade | ^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 691, in _sock_connect_cb
freqtrade | raise OSError(err, f'Connect call failed {address}')
freqtrade | ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 7890)
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 210, in fetch
freqtrade | async with session_method(yarl.URL(url, encoded=True),
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 1359, in __aenter__
freqtrade | self._resp: _RetType = await self._coro
freqtrade | ^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 663, in _request
freqtrade | conn = await self._connector.connect(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 563, in connect
freqtrade | proto = await self._create_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1030, in _create_connection
freqtrade | _, proto = await self._create_proxy_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1391, in _create_proxy_connection
freqtrade | transport, proto = await self._create_direct_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1366, in _create_direct_connection
freqtrade | raise last_exc
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1335, in _create_direct_connection
freqtrade | transp, proto = await self._wrap_create_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1106, in _wrap_create_connection
freqtrade | raise client_error(req.connection_key, exc) from exc
freqtrade | aiohttp.client_exceptions.ClientProxyConnectionError: Cannot connect to host 127.0.0.1:7890 ssl:default [Connect call failed ('127.0.0.1', 7890)]
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 626, in _api_reload_markets
freqtrade | return await self._api_async.load_markets(reload=reload, params={})
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in load_markets
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in load_markets
freqtrade | result = await self.markets_loading
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 273, in load_markets_helper
freqtrade | markets = await self.fetch_markets(params)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1416, in fetch_markets
freqtrade | promises = await asyncio.gather(*promises)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1580, in fetch_markets_by_type
freqtrade | response = await self.publicGetPublicInstruments(self.extend(request, params))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 868, in request
freqtrade | return await self.fetch2(path, api, method, params, headers, body, config)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 864, in fetch2
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 855, in fetch2
freqtrade | return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 248, in fetch
freqtrade | raise ExchangeNotAvailable(details) from e
freqtrade | ccxt.base.errors.ExchangeNotAvailable: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 181, in wrapper
freqtrade | return f(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 638, in _load_async_markets
freqtrade | markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
freqtrade | return future.result()
freqtrade | ^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 630, in _api_reload_markets
freqtrade | raise TemporaryError(
freqtrade | freqtrade.exceptions.TemporaryError: Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | During handling of the above exception, another exception occurred:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1091, in _wrap_create_connection
freqtrade | sock = await aiohappyeyeballs.start_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 104, in start_connection
freqtrade | raise first_exception
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 82, in start_connection
freqtrade | sock = await _connect_sock(
freqtrade | ^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 174, in _connect_sock
freqtrade | await loop.sock_connect(sock, address)
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 651, in sock_connect
freqtrade | return await fut
freqtrade | ^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 691, in _sock_connect_cb
freqtrade | raise OSError(err, f'Connect call failed {address}')
freqtrade | ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 7890)
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 210, in fetch
freqtrade | async with session_method(yarl.URL(url, encoded=True),
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 1359, in __aenter__
freqtrade | self._resp: _RetType = await self._coro
freqtrade | ^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 663, in _request
freqtrade | conn = await self._connector.connect(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 563, in connect
freqtrade | proto = await self._create_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1030, in _create_connection
freqtrade | _, proto = await self._create_proxy_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1391, in _create_proxy_connection
freqtrade | transport, proto = await self._create_direct_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1366, in _create_direct_connection
freqtrade | raise last_exc
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1335, in _create_direct_connection
freqtrade | transp, proto = await self._wrap_create_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1106, in _wrap_create_connection
freqtrade | raise client_error(req.connection_key, exc) from exc
freqtrade | aiohttp.client_exceptions.ClientProxyConnectionError: Cannot connect to host 127.0.0.1:7890 ssl:default [Connect call failed ('127.0.0.1', 7890)]
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 626, in _api_reload_markets
freqtrade | return await self._api_async.load_markets(reload=reload, params={})
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in load_markets
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in load_markets
freqtrade | result = await self.markets_loading
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 273, in load_markets_helper
freqtrade | markets = await self.fetch_markets(params)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1416, in fetch_markets
freqtrade | promises = await asyncio.gather(*promises)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1580, in fetch_markets_by_type
freqtrade | response = await self.publicGetPublicInstruments(self.extend(request, params))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 868, in request
freqtrade | return await self.fetch2(path, api, method, params, headers, body, config)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 864, in fetch2
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 855, in fetch2
freqtrade | return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 248, in fetch
freqtrade | raise ExchangeNotAvailable(details) from e
freqtrade | ccxt.base.errors.ExchangeNotAvailable: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 665, in reload_markets
freqtrade | self._markets = retrier(self._load_async_markets, retries=retries)(reload=True)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 193, in wrapper
freqtrade | return wrapper(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 193, in wrapper
freqtrade | return wrapper(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 193, in wrapper
freqtrade | return wrapper(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 196, in wrapper
freqtrade | raise ex
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 181, in wrapper
freqtrade | return f(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 638, in _load_async_markets
freqtrade | markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
freqtrade | return future.result()
freqtrade | ^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 630, in _api_reload_markets
freqtrade | raise TemporaryError(
freqtrade | freqtrade.exceptions.TemporaryError: Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade | 2024-11-23 08:31:39,838 - freqtrade - ERROR - Could not load markets, therefore cannot start. Please investigate the above error for more details.
freqtrade | Exception ignored in: <module 'threading' from '/usr/local/lib/python3.12/threading.py'>
freqtrade | Traceback (most recent call last):
freqtrade | File "/usr/local/lib/python3.12/threading.py", line 1624, in _shutdown
freqtrade | lock.acquire()
freqtrade | File "/freqtrade/freqtrade/commands/trade_commands.py", line 18, in term_handler
freqtrade | raise KeyboardInterrupt()
freqtrade | KeyboardInterrupt:
freqtrade exited with code 2
```
What should I try next?
| 1medium
|
Title: Potential security issue
Body: Hello 👋
I run a security community that finds and fixes vulnerabilities in OSS. A researcher (@fa2y) has found a potential issue, which I would be eager to share with you.
Could you add a `SECURITY.md` file with an e-mail address for me to send further details to? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) a security policy to ensure issues are responsibly disclosed, and it would help direct researchers in the future.
Looking forward to hearing from you 👍
(cc @huntr-helper) | 0easy
|
Title: "Connection is already acquired" error
Body: Hi. I have a daemon with several asyncio tasks, each of which executes some requests. All of the tasks share the same connection.
After some time one of them begins to produce this error.
The failing code is simple :)
In main.py I have
```python
self.engine = Database(config.Config.SQLALCHEMY_DATABASE_URI)
await self.engine.connect()
```
And in the broken task:
```python
await wait_for(self.engine.execute(query=statement), 10)
```
What could be wrong? | 1medium
|
Title: Feature request: threading support
Body: Has support for multi-threaded processes been considered in the past? Any pointers on how one would go about adding that to pudb? | 1medium
|
Title: Feature Request: magic command to paste code from file and execute
Body: There are multiple instances where one would want to send code to a running IPython terminal from an editor (a la Spyder). An example of this would be this extension for VSCode https://github.com/hoangKnLai/vscode-ipython which works really well in its current state. However, it's using the "%run -i" command to run code that resides in some temporary file.
The "%load" magic command can be used for such a task, but it does not execute the code, it simply pastes it in the terminal. This requires sending multiple "enter" keys to the terminal to execute the code, and it also keeps the "%load ...." tag at the top which contaminates the history in case the user wants to re-run the same code block from history using the up arrow. It looks like this command was mainly designed for injecting code from a file into a jupyter notebook cell.
There is another "%paste" magic command, but it loads code from the clipboard, and the clipboard tends to be a little sluggish and act up, especially when firing multiple commands from the editor (think running code line after line).
A potential solution is using "%run -i path/to/file.py" which the aforementioned extension implements. However, looking at the source code for this magic function, it seems to be making some manipulation to the user name space, among other things. We are not sure if it's "safe" to be using the %run command for this.
Would it be possible to add a new, simple magic command, call it "%fpaste" that does the same thing as "%paste" but loads code from a file instead of the clipboard? There are no namespace checks of any sort, it simply loads the code in the current session and executes it. It would simplify things a lot for extension developers since we can just send a "%fpaste path/to/file.py" to ipython and it automatically loads the code and executes it. This way we don't have to send multiple enter keys to the terminal.
Here is a simple implementation that I am using currently to create a custom magic command that does exactly this:
```python
# ~\.ipython\profile_default\startup\fpaste.py
from IPython.core.magic import Magics, magics_class, line_magic

# The class MUST call this class decorator at creation time
@magics_class
class MyMagics(Magics):

    @line_magic
    def fpaste(self, line):
        import sys
        from pathlib import Path
        # Read the file whose path is passed on the magic's line,
        # e.g. "%fpaste path/to/pycode.py"
        contents = Path(line.strip()).read_text()
        sys.stdout.write(self.shell.pycolorize(contents))
        self.shell.run_cell(contents, store_history=True)

get_ipython().register_magics(MyMagics)
```
This works pretty well. I had to keep the import statements for Path and sys inside the function, otherwise I would have to import them again after doing a hard reset with "%reset -f".
The current workflow in VSCode would look like this:
[in editor]
trigger the keybinding that sends text to ipython: VSCode writes the contents of the selected code or code cell to the pycode.py file, then sends a "%fpaste path/to/pycode.py" to the integrated terminal running ipython, and finally sends one enter key to execute the whole statement.
[switch to integrated terminal running ipython]
ipython executes the code because that's what the %fpaste implementation does; the execution is separate from printing the code to stdout. Maybe a better implementation would skip writing the contents to stdout in case that's not a desired behavior; we can just execute the code.
Note:
Since history is set to true in the fpaste function, we can easily hit up arrow and rerun the code if needed.
For some reason that I don't understand, the %fpaste tag does not show up in the history. Is this by design? It is the desired behavior, but it would be nice to understand why it's happening. As mentioned, using the "%load" command tends to inject the magic command tag into history as well, which pollutes the code a little when the user wants to re-run it.
Here is a screenshot of what this would look like (using IPython classic prompt):

Same snippet ran using the "%load" command:

| 1medium
|
Title: Crash report for output
Body: ## Mycodo Issue Report:
7.1.3
#### Problem Description
Hi all, I ran into this error while setting up outputs.
After pressing "Add Output" for an Atlas I2C pump, a crash error occurred.
Restarting did not solve it; pressing Output in Setup generates the same error.
Any suggestions? Many thanks!
### Errors
```
Error 500: Internal Server Error
Something bad happened but it's probably not your fault. Letting the developers know about these issues is crucial to supporting Mycodo. Please submit a new issue on GitHub with the following error traceback (copy the entire traceback):
Error (Full Traceback):
Traceback (most recent call last):
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask_login/utils.py", line 261, in decorated_view
return func(*args, **kwargs)
File "/home/pi/Mycodo/mycodo/mycodo_flask/routes_page.py", line 1546, in page_output
user=user)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/templating.py", line 135, in render_template
context, ctx.app)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/templating.py", line 117, in _render
rv = template.render(context)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/output.html", line 3, in top-level template code
{% set help_page = ["output", dict_translation['output']['title']] %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/layout.html", line 260, in top-level template code
{%- block body %}{% endblock -%}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/output.html", line 230, in block "body"
{% include 'pages/output_options/'+each_output_template %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/output_options/atlas_ezo_pmp.html", line 4, in top-level template code
{{form_mod_output.i2c_location.label(class_='control-label')}}
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/environment.py", line 430, in getattr
return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'mycodo.mycodo_flask.forms.forms_output.OutputMod object' has no attribute 'i2c_location'
```
### Steps to Reproduce the issue:
How can this issue be reproduced?
1. step 1
2. step 2...
3. etc
### Additional Notes
Is there anything that should be added to make it easier
to address this issue? | 1medium
|
Title: Enhancement: Allow parameter of type `Iterable` to be processed as `list`
Body: ### Description
Litestar doesn't process an `Iterable` parameter as an array.
The original [discussion](https://discord.com/channels/919193495116337154/1260559977081471058) was about default parameter values, but the problem is actually with how Litestar handles `Iterable` itself.
### URL to code causing the issue
https://github.com/litestar-org/litestar/blob/8c4c15bb501879dabaecfbf0af541ac571c08cf3/litestar/_kwargs/parameter_definition.py#L67C9-L67C60
### MCVE
```python
from typing import Iterable, Sequence

from litestar import Litestar, get


@get('/sequence')
async def sequence(foo: Sequence[str]) -> Sequence[str]:
    return foo


@get('/iterable')
async def iterable(foo: Iterable[str]) -> Iterable[str]:
    return foo


app = Litestar(
    route_handlers=(
        sequence,
        iterable,
    )
)
```
### Steps to reproduce
```bash
$ curl 'localhost:8000/sequence?foo=s1&foo=s2'
["s1","s2"]
$ curl 'localhost:8000/iterable?foo=s1&foo=s2'
s2
```
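As a side note, a workaround that behaves as expected for me (just annotating with a concrete sequence type, which the `/sequence` handler above already demonstrates):
```python
@get('/iterable-workaround')
async def iterable_workaround(foo: list[str]) -> list[str]:
    # list[str] is recognized as a multi-value query parameter
    return foo
```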
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.9.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | 1medium
|
Title: Is there a standard configuration of soft prompts on few-shot tasks to match the performance reported in the papers?
Body: | 1medium
|
Title: Include Keras, sklearn in docs and structure
Body: Right now, it's all one big text. We should split up different subsections for different frontends. At the top, we should have a quick text summarizing the frontend framework and describing how to import them dynamically (using `kymatio.Scattering*D` and specifying `frontend` when creating). | 1medium
|
Title: Bug: ModuleNotFoundError: No module named 'tensorflow.keras.layers.experimental'
Body: ### Bug Description
The error occurs when importing autokeras:
import autokeras as ak
### Bug Reproduction
Code for reproducing the bug:
https://colab.research.google.com/github/keras-team/autokeras/blob/master/docs/ipynb/structured_data_classification.ipynb
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python: 3
- autokeras: <!--- e.g. 0.4.0, 1.0.2, master-->
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow:
### Additional context
[<ipython-input-4-4e35e895c450>](https://localhost:8080/#) in <cell line: 4>()
2 import tensorflow as tf
3
----> 4 import autokeras as ak
7 frames
[/usr/local/lib/python3.10/dist-packages/autokeras/keras_layers.py](https://localhost:8080/#) in <module>
18 from tensorflow import nest
19 from tensorflow.keras import layers
---> 20 from tensorflow.keras.layers.experimental import preprocessing
21
22 from autokeras.utils import data_utils
ModuleNotFoundError: No module named 'tensorflow.keras.layers.experimental | 1medium
|
Title: narrow gaps in 2d kernel density diagram
Body: Hi there!
When I plot a 2d kernel density diagram, there are narrow gaps between the adjacent contours (not white lines).
I am wondering if there is any way to fill these gaps. Thanks!

| 1medium
|
Title: Feature request: Streaming
Body: As per [my comment in #11](https://github.com/jmcnamara/XlsxWriter/issues/11#issuecomment-479149512), it is possible, if not completely straightforward, to stream ZIP files on output, for example using [`zipstream`](https://github.com/allanlei/python-zipstream). The current ZIP member must be finished before the next one can be started, however. The current `zipstream` also requires all the members to be listed beforehand, but this is an artifact of the current implementation that shouldn’t be too hard to fix.
Does this mean `xlsxwriter` could work in `constant_memory` mode without creating any temporary files as long as write operations to different worksheets are not interleaved?
I would be willing to implement this if necessary and possible, but I'm not at all familiar with the internal structure of `xlsxwriter`, so some design help would be appreciated. | 1medium
|
Title: 3.7.6 does not have wheel for linux aarch64 / arm64
Body: - https://pypi.org/project/spacy/3.7.6/#files
- https://pypi.org/project/spacy/3.7.5/#files
3.7.5 provides wheels for linux aarch64 for various python versions, but 3.7.6 does not have any wheels for linux aarch64.
Is this intended? I couldn't find related info in the changelogs. | 1medium
|
Title: How to configure security at the top level
Body: I want to use security at the top level, but how do I configure it? | 1medium
|
Title: Error on mxnet/gluon: after upgrade the mxnet-cu100 and gluoncv,then run script "train_faster_rcnn.py"
Body: My virtual environment is broken and then I create a new virtual environment named "huf_mx" as following.

When trying to run the script "train_faster_rcnn.py" in the Spyder IDE, I get the error below.
As shown in the image, there is some problem with mxnet/gluon, or maybe with gluoncv.
How do I fix it? Should I install another mxnet version?
I have tried 1.4.0, but it did not work.
Thanks a lot. | 1medium
|
Title: Many errors in the code: what is netE in pix2pixHD_model_DA.py?
Body: There are many errors in the code. What is `netE` in pix2pixHD_model_DA.py? It appears out of nowhere. | 1medium
|
Title: Slack Action `type_select` Not Triggering After Modal is Opened
Body: Hello everyone, I'm seeing an issue with the SDK for a particular action `type_select`, I've provided more details below. Any help would be appreciated. Also, please let me know if more information is required from my side.
### Reproducible in:
#### The `slack_bolt` version
```bash
slack_bolt==1.20.1
slack_sdk==3.33.1
```
#### Python runtime version
```bash
Python 3.9.6
```
#### OS info
```bash
ProductName: macOS
ProductVersion: 15.0
BuildVersion: 24A335
Darwin Kernel Version 24.0.0: Mon Aug 12 20:51:54 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T6000
```
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
1. Open the modal by clicking the “Create Broadcast” button.
2. Try to select either “Task” or “FYI” from the dropdown.
3. Observe that the modal is not dynamically updated, and no logging occurs for the type_select action.
### Expected result:
I was working on a dynamic form extension in Slack using the Bolt framework. The form allows users to select between different broadcast types (such as Task or FYI) using a dropdown. Based on the selected value, additional fields should be dynamically shown in the modal. For example:
- If the user selects Task, fields related to task details (like start date, end date, number of reminders) should appear.
- If the user selects FYI, these fields would not be shown.
Similarly, I have another dropdown for selecting the broadcast method (Channel or DM). If DM is selected, a field should appear for selecting multiple users who will receive the broadcast. However, the action for the dropdown selection `type_select` is not being triggered, preventing the dynamic form behavior from working as expected.
1. The modal should open after the button is clicked.
2. When the user selects Task or FYI in the dropdown, the type_select action should be triggered, and the modal should be updated with additional fields based on the selection.
```python
# In broadcast_modal_content.py
def get_initial_blocks():
    return [
        {
            "type": "input",
            "block_id": "broadcast_type",
            "label": {"type": "plain_text", "text": "Is this a Task or FYI?"},
            "element": {
                "type": "static_select",
                "action_id": "type_select",  # This is the action ID not being triggered
                "options": [
                    {"text": {"type": "plain_text", "text": "Task"}, "value": "task"},
                    {"text": {"type": "plain_text", "text": "FYI"}, "value": "fyi"}
                ]
            }
        }
    ]

# In broadcast_action.py
@app.action("type_select")
def update_modal_on_type_selection(ack, body, client, logger):
    ack()  # Acknowledge the action
    logger.info("type_select action triggered")  # This log is never seen
    # Further processing for modal update
```
### Actual result:
- The open_broadcast_form action triggers correctly, and the modal opens.
- However, the type_select action is not triggered when a selection is made in the dropdown.
## Additional Details
<img width="740" alt="Screenshot 2024-10-15 at 12 38 41 PM" src="https://github.com/user-attachments/assets/161fcef6-930d-4c39-8354-2beaaa100531">
| 1medium
|
Title: keras.io examples using model.stateless_call aren't setting training=True and aren't managing non_trainable state
Body: Both of the examples
* https://keras.io/guides/writing_a_custom_training_loop_in_jax/
* https://keras.io/guides/distributed_training_with_jax/
have a training loop that uses `stateless_call`, but neither sets `training=True` (which means they are using the `training=False` behaviour); see the sketch after the list below for what I would expect.
I guess it's not strictly relevant to the first example, which has no interesting non-trainables, but the second example has batch norm and dropout.
Additionally, I don't see where this non-trainable state would be managed:
* re: batchnorm; each replica might have the same params, but is getting a different batch, so it would collect different batch norm stats after the stateless_call. These aren't being aggregated?
* re: dropout; the RNG keys for dropout are in the non-trainables, but never change. Shouldn't these be split after each forward pass?
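For reference, a minimal sketch of what I would expect the step to look like (names follow the guides; propagating the returned non-trainables is my assumption about the intended pattern):
```python
def compute_loss_and_updates(trainable_variables, non_trainable_variables, x, y):
    y_pred, updated_non_trainable_variables = model.stateless_call(
        trainable_variables,
        non_trainable_variables,
        x,
        training=True,  # enable dropout and batch-norm statistics collection
    )
    loss = loss_fn(y, y_pred)
    # the updated non-trainables (e.g. batch norm moving stats) must be
    # carried into the next step, and presumably aggregated across replicas
    return loss, updated_non_trainable_variables
```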
| 1medium
|
Title: Cannot even import autoviz?
Body: Not able to even import autoviz after installation. I thought I'd give it a try, but it doesn't work.
Please see the screenshot below:



| 1medium
|
Title: Where to write the dependency?
Body: Sorry this is a very simple question, I have selected the
```
Select dependency_source:
1 - conda-forge
```
I wonder where I should add the dependencies for the installation.
Normally, I would add the dependencies to `install_requires` in `setup.py`, but that would mean that the dependency is installed via pip.
So where should I put the dependencies? | 1medium
|
Title: Adding support for netCDF (*.nc) files
Body: ### Feature request
netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.
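For context, a minimal sketch of how such files are typically read today with `xarray` (the file name is hypothetical):
```python
import xarray as xr

ds = xr.open_dataset("example.nc")    # lazily open the netCDF file
df = ds.to_dataframe().reset_index()  # one possible bridge to a flat table
```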
### Motivation
When uploading *.nc files onto Huggingface Hub through the `datasets` API, I would like to be able to preview the dataset without converting it to another format.
### Your contribution
I can submit a PR, provided I have the time. | 2hard
|
Title: Run failed when using nms in batch infer
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Export
### Bug
I want to convert the model to ONNX format and integrate NMS into the model. However, after conversion, when I try to perform batch inference with dynamic shapes, I get a Split error. When I change the batch size during conversion, the error numbers also change accordingly. It only works correctly when the preset batch size matches the inference batch size. Is this because some tensor values were converted to constants somewhere? I hope to get an answer.
err message:
```
[E:onnxruntime:, sequential_executor.cc:516 ExecuteKernel] Non-zero status code returned while running Split node. Name:'/Split_2' Status Message: Cannot split using values in 'split' attribute. Axis=0 Input shape={3,8400} NumOutputs=1 Num entries in 'split' (must equal number of outputs) was 1 Sum of sizes in 'split' (must equal size of selected axis) was 1
```
the split node:
<img width="740" alt="Image" src="https://github.com/user-attachments/assets/a99745fd-0b79-4d2d-b6ed-efc33ddeb666" />
### Environment
ultralytics 8.3.74
torch 2.4.0
onnx 1.17.0
onnxruntime-gpu 1.19.0
opencv-python 4.10.0.84
python 3.12.4
### Minimal Reproducible Example
export code:
```
from ultralytics import YOLO
import cv2
import numpy as np
model = YOLO('yolov11n-face.pt') # load a pretrained YOLOv8n detection model
image = cv2.imread('../datasets/face_test/image.jpg')
results = model((image, image, image, image))  # predict on a batch of 4 (copies of the same image)
model.export(format="onnx", nms=True, dynamic=True, simplify=True)
```
test code:
```
image = cv2.imread('../datasets/face_test/image1.jpg')
image2 = cv2.imread('../datasets/face_test/image2.jpg')
results = model((image, image2)) # predict on an image
output_path = 'yolov11n-face.onnx'
image_list = [image, image2]
w = torch.Tensor([i.shape[0] for i in image_list]).to(model.device)
h = torch.Tensor([i.shape[1] for i in image_list]).to(model.device)
in_tensor = preprocess(image_list) # (2, 640, 640, 3)
onnx_model = onnx.load(output_path)
onnx.checker.check_model(onnx_model)
ort_session = onnxruntime.InferenceSession(output_path)
ort_inputs = {
    ort_session.get_inputs()[0].name: to_numpy(in_tensor.cpu())
}
# print(ort_inputs)
for _ in range(1):
    ort_outs = ort_session.run(None, ort_inputs)
print('onnx out:')
print(ort_outs)
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | 1medium
|
Title: add replay for lantern live
Body: | 1medium
|
Title: Colored 1.5 broke syrupy
Body: **Describe the bug**
I ran `poetry update` this morning and it updated `colored` to 1.5, which is a legitimate update according to syrupy's `pyproject.toml`. From the changelog of `colored` I can see they renamed some functions, like `bg` to `back` and `fg` to `fore`. These changes broke syrupy.
**To reproduce**
Update to colored 1.5 and try using syrupy in a CI pipeline on github/gitlab
**Expected behavior**
It should not break, or it should not have allowed the update to 1.5.
**Environment (please complete the following information):**
- OS: I am not sure what my CI is using; I think Alpine
- Syrupy Version: 3.0.6
- Python Version: 3.11
| 1medium
|
Title: Get Landmarks
Body: Dear all,
maybe I am just not finding it: how can I get the landmarks when using extract_faces?
kr
Dirk | 1medium
|
Title: `SemanticSegmentationTarget` throws tensor device casting error
Body: ## Bug
The following code
```python
targets = [
SemanticSegmentationTarget(2, np.ones((128, 128), dtype=np.float32))
]
with GradCAM(model=m, target_layers=l, reshape_transform=reshape_transform) as cam:
mask = cam(input_tensor=timg, targets=targets)
```
throws a RuntimeError
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, mps:0 and cpu!
```
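A workaround sketch that avoids the error for me (not an official fix; it simply mirrors the unreachable `.to(model_output.device)` variant visible at line 69 in the trace below):
```python
from pytorch_grad_cam.utils.model_targets import SemanticSegmentationTarget

class DeviceSafeSegmentationTarget(SemanticSegmentationTarget):
    def __call__(self, model_output):
        # Move the stored mask onto the output's device (mps here)
        # before multiplying, instead of assuming it is already there.
        return (model_output[self.category, :, :]
                * self.mask.to(model_output.device)).sum()
```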
## Full Stack Trace
```
{
"name": "RuntimeError",
"message": "Expected all tensors to be on the same device, but found at least two devices, mps:0 and cpu!",
"stack": "---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[5], line 30
24 targets = [
25 SemanticSegmentationTarget(2, np.ones((128, 128), dtype=np.float32))
26 # SemanticSegmentationTarget(10, np.ones((128, 128), dtype=np.float32))
27 ]
29 with GradCAM(model=m, target_layers=l, reshape_transform=reshape_transform) as cam:
---> 30 mask = cam(
31 input_tensor=timg,
32 targets=targets)
34 pt.imshow(show_on_image(img, mask[0]))
File ~/.pyenv/versions/3.12.3/lib/python3.12/site-packages/pytorch_grad_cam/base_cam.py:186, in BaseCAM.__call__(self, input_tensor, targets, aug_smooth, eigen_smooth)
183 if aug_smooth is True:
184 return self.forward_augmentation_smoothing(input_tensor, targets, eigen_smooth)
--> 186 return self.forward(input_tensor, targets, eigen_smooth)
File ~/.pyenv/versions/3.12.3/lib/python3.12/site-packages/pytorch_grad_cam/base_cam.py:98, in BaseCAM.forward(self, input_tensor, targets, eigen_smooth)
96 if self.uses_gradients:
97 self.model.zero_grad()
---> 98 loss = sum([target(output) for target, output in zip(targets, outputs)])
99 loss.backward(retain_graph=True)
101 # In most of the saliency attribution papers, the saliency is
102 # computed with a single target layer.
103 # Commonly it is the last convolutional layer.
(...)
108 # use all conv layers for example, all Batchnorm layers,
109 # or something else.
File ~/.pyenv/versions/3.12.3/lib/python3.12/site-packages/pytorch_grad_cam/utils/model_targets.py:68, in SemanticSegmentationTarget.__call__(self, model_output)
67 def __call__(self, model_output):
---> 68 return (model_output[self.category, :, :] * self.mask).sum()
69 return (model_output[self.category, :, :] * self.mask.to(model_output.device)).sum()
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, mps:0 and cpu!"
}
```
## Version
grad-cam==1.5.4
on **Python 3.12.3**
| 1medium
|
Title: Remove deprecated `SentenceWindowRetrieval` component
Body: It was left as a stubbed component in 2.4 to aid backward compatibility with existing pipelines. It should be removed in 2.5. | 1medium
|
Title: Person name recognition causes word segmentation errors
Body: <!--
The checklist and version number are required; issues without them will not be answered. If you want a prompt reply, please fill in this template carefully. Thank you for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the documents below and found no answer:
  - [Home page documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that the open-source community is a volunteer community formed out of shared interest and carries no responsibilities or obligations. I will be polite and thank everyone who helps me.
* [x] I typed an x in the brackets to confirm the above.
## Version
<!-- For release builds, please give the jar file name without the extension; for the GitHub repository version, please state whether it is the master or the portable branch. -->
The current latest version is: 1.5.2
The version I am using is: 1.3.4
<!-- The items above are required; feel free to elaborate below. -->
## My question
Sentence:
```
商改后要买另外的保险保费怎么算
```
Segmentation result:
```
[商改后/nr, 要买/nz, 另外/c, 的/ude1, 保险/n, 保费/n, 怎么/ryv, 算/v]
```
Here, 商改后 is recognized as a person name (nr).
## Reproducing the issue
<!-- How did you trigger the problem? For example, did you modify the code? The dictionaries or the models? -->
### Steps
1. First ...
2. Then ...
3. Next ...
### Triggering code
```
public void testIssue1234() throws Exception
{
    System.out.println(StandardTokenizer.segment("商改后要买另外的保险保费怎么算"));
}
```
### Expected output
<!-- What correct output do you expect? -->
```
[商改/v,后/, 要买/nz, 另外/c, 的/ude1, 保险/n, 保费/n, 怎么/ryv, 算/v]
```
### Actual output
```
[商改后/nr, 要买/nz, 另外/c, 的/ude1, 保险/n, 保费/n, 怎么/ryv, 算/v]
```
## Other information
<!-- Any information that might be useful, including screenshots, logs, configuration files, related issues, etc. -->
| 1medium
|
Title: Problem with Twisted AttributeError: 'EPollReactor' object has no attribute '_handleSignals'
Body: ### Description
Since the recent release of Twisted 23.8.0, I've had problems running Scrapy. I had to pin to the previous version, Twisted==22.10.0.
### Steps to Reproduce
1. Install Scrapy in a new environment
2. Run `.start()` from a `CrawlerProcess` object
**Expected behavior:** Scrapy to crawl
**Actual behavior:**

**Reproduces how often:** Always in the new version
### Versions
Scrapy>=2.9.0
### Additional context
I've seen this fix on [stackoverflow](https://stackoverflow.com/questions/76995567/error-when-crawl-data-epollreactor-object-has-no-attribute-handlesignals)
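In the meantime, the workaround from that thread (and what I did) is simply pinning Twisted below the breaking release:
```bash
pip install 'Twisted<23.8.0'  # e.g. Twisted==22.10.0 works for me
```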
| 1medium
|
Title: Solution: Integration with Tableau, Power BI and Data Studio
Body: **Problem research and evidence**
1. Integration with top data visualization tools was suggested during the discussion: https://github.com/ckan/ckan/issues/7499 @jqnatividad concluded that it's an optimal way to enable top data visualization experience.
2. During the research there were 3 respondents that mentioned integrations with BI Tools:
- Customizable dashboard - linking Tableau, Power BI, Google Data Studio
- Make it easier to send data to Tableau, Power BI
- Integration with Tableau
⚡️ If you have a datastore you can integrate a BI tool.
**Solution hypothesis**
The solution should provide a view of Tableau, Power BI and Data Studio on CKAN pages. It will enable publishers to enrich published datasets with visualizations and stories based on them.
The solution should enable sending data to a data visualization tool from the CKAN interface.
**Solution description**
TBD
**Questions to consider**
Is this change going to break current installations?
Can we provide a backwards compatibility?
How easy is it going to be for current implementations to migrate to this new release?
Do current versions of CKAN have the adequate resources/support to migrate to this new version?
Are we going to change the database schema?
Are we going to change the API?
Are we going to deprecate Interfaces? | 2hard
|
Title: How to use loss instead of accuray in HPO
Body: **Describe the issue**: I just started using nni and followed the /hpo_quickstart_pytorch/ example to inplement.
But, instead of accuracy, I need to thave the main monitor and optimize for the loss, and the lower loss, the better, obviously.
I used nni.report_intermediate_result(valLossSingle) to report the loss. I think I need to let the tuner know that this is actually loss, rather than accuracy. How to do that?
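My current guess (an assumption based on the tuner configuration, please correct me) is to set `optimize_mode` to `minimize` when configuring the tuner, e.g.:
```python
from nni.experiment import Experiment

# sketch following the hpo_quickstart_pytorch setup; 'minimize' is the
# assumption: it should tell the tuner that lower reported values are better
experiment = Experiment('local')
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args['optimize_mode'] = 'minimize'
```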
**Environment**:
- NNI version: 2.7
- Training service (local|remote|pai|aml|etc): local
- Client OS: ubuntu
- Server OS (for remote mode only):
- Python version: 3.10
- PyTorch/TensorFlow version: 2
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**: | 1medium
|
Title: tflearn Installation issues.
Body: Hello,
I've been trying to install the tflearn module for a while now and haven't been able to do so. I have checked most of the documentation and other issues, but none seem to be useful.
I am using the Anaconda distribution. I have tensorflow installed; its currently installed version is 1.8, and as far as I understood it should be 1.0 or higher.
I can't use Anaconda Navigator for some reason, but the prompt works just fine, and that's where I've been trying.
```
import tflearn
curses is not supported on this machine (please install/reinstall curses for an optimal experience)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tflearn-0.3.2-py3.6.egg\tflearn\__init__.py", line 62, in <module>
  File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tflearn-0.3.2-py3.6.egg\tflearn\datasets\__init__.py", line 1, in <module>
    # -*- coding: utf-8 -*-
  File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tflearn-0.3.2-py3.6.egg\tflearn\datasets\cifar10.py", line 16, in <module>
  File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tflearn-0.3.2-py3.6.egg\tflearn\data_utils.py", line 7, in <module>
  File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\PIL\Image.py", line 60, in <module>
    from . import _imaging as core
ImportError: DLL load failed: The specified module could not be found.
```
That's the error that comes up when I try to import it from the prompt. It seems like it is installed but not importable.
```
~\AppData\Local\Continuum\anaconda3\lib\site-packages\PIL\Image.py in <module>()
     58 # Also note that Image.core is not a publicly documented interface,
     59 # and should be considered private and subject to change.
---> 60 from . import _imaging as core
     61 if PILLOW_VERSION != getattr(core, 'PILLOW_VERSION', None):
     62     raise ImportError("The _imaging extension was built for another "
ImportError: DLL load failed: The specified module could not be found.
```
And that's the import error from a Jupyter notebook.
Please guide me!
| 1medium
|
Title: [BUG] Note to the installation guide
Body: Hi! Your installation [guide](https://fastapi-utils.davidmontague.xyz/#installation) differs from your guide on [GitHub](https://github.com/dmontagu/fastapi-utils?tab=readme-ov-file#installation):
<img width="1089" alt="image" src="https://github.com/user-attachments/assets/226b09f3-44f2-46ba-876a-0c15d64788b7">
<img width="1025" alt="image" src="https://github.com/user-attachments/assets/8b0a1071-20ea-40d3-afe5-19ba6d97ce15">
| 0easy
|
Title: Rename DAG (from models/dag) to SchedulerDAG -- or remove it!
Body: ### Body
It is easily confused with the DAG in task sdk.
Ideally we can remove it; at the very least, rename it.
Comment from Ash:
> That should be renamed as SchedulerDAG really -- (or possibly it can be removed now) -- it's a) a compat import while that part isn't complete, and b) where scheduler specific DAG functions go
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | 1medium
|
Title: TST: make sure tests work on python 2.6
Body: | 1medium
|
Title: [Discussion] can we convert the pretrained model to saved model format?
Body: Can we directly download the pretrained model, convert it to SavedModel format, and then load it from the C++ API? | 3misc
|
Title: Help menu/main page on github have dead links
Body: Google Code closed down, so the FAQ/Online Help/Scripting Help links lead to dead sites. They should probably be changed to Archive.org links, or preferably hosted elsewhere. The PayPal link is defunct as well.
| 0easy
|
Title: AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next'
Body: Hi.
I am currently trying to train my own model.
I obtained the dataset as required and modified the config file. However, I get this error when trying to train.
I have already tried decreasing the number of workers in the config file.
I also tried to modify dataset.py at line 101, changing `image, text = data_loader_iter.next()` to `image, text = next(data_loader_iter)`.
However, the error persists.
Thanks | 1medium
|
Title: Displaying image fields in models
Body: In my model I store the image path as "/blog/static/images/blog/timg_PrPLvaa.jpg", which is the path I use to access the static resource. But when the admin displays this field, it wraps it in an `<a>` tag and auto-joins an incorrect path: "http://192.168.3.28:5058/admin/blog/article/1/change/blog/static/images/blog/timg_PrPLvaa.jpg". The joined prefix should be user-configurable; otherwise clicking the link cannot open the image. Alternatively, do not auto-join at all and let users save the full link.
**Leave your contact information so we can get in touch with you**
QQ: 13716.7399
| 1medium
|
Title: [BUG-REPORT] to_pandas_df giving error Urgent!!!
Body: @JovanVeljanoski @maartenbreddels Hey there, can you please help me with this? It's a little urgent.
I'm trying to convert a vaex dataframe to a pandas dataframe, but it gives me the error shared below. First I took two dataframes and joined them on more than one column with a merging solution, and after that I tried converting the result to a pandas dataframe.
Both dataframes have these columns and datatypes:
one dataframe is named df and the other is named product.
The first image is of the df dataframe:

The second image is of the product dataframe:

These two tables are joined using the code below:

After this we get a dataframe named mergedStuff which gives us the joins. Also, I have often seen that vaex is not good at handling empty table cells, NaN values, or missing values.
When I try to convert this mergedStuff to a pandas df it shows the error below, and I'm also attaching the data. Maybe after converting to CSVs it will work fine, but I'm not sure (see the sketch after the attachments).
[product.csv](https://github.com/vaexio/vaex/files/8231496/product.csv)
[df.csv](https://github.com/vaexio/vaex/files/8231497/df.csv)
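For completeness, this is roughly what I mean by going through CSVs (a sketch; I have not verified that it avoids the error):
```python
import pandas as pd

# round-trip through CSV instead of converting in memory
mergedStuff.export_csv('merged.csv')
merged_pdf = pd.read_csv('merged.csv')
```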
```
ArrowIndexErrorTraceback (most recent call last)
<ipython-input-11-968fb59ca07f> in <module>
----> 1 mergedStuff1 = mergedStuff.to_pandas_df()
/opt/conda/lib/python3.7/site-packages/vaex/dataframe.py in to_pandas_df(self, column_names, selection, strings, virtual, index_name, parallel, chunk_size, array_type)
3242 return iterator()
3243 else:
-> 3244 return create_pdf(self.to_dict(column_names=column_names, selection=selection, parallel=parallel, array_type=array_type))
3245
3246 @docsubst
/opt/conda/lib/python3.7/site-packages/vaex/dataframe.py in to_dict(self, column_names, selection, strings, virtual, parallel, chunk_size, array_type)
3157 yield i1, i2, dict(list(zip(column_names, [array_types.convert(chunk, array_type) for chunk in chunks])))
3158 return iterator()
-> 3159 return dict(list(zip(column_names, [array_types.convert(chunk, array_type) for chunk in self.evaluate(column_names, selection=selection, parallel=parallel)])))
3160
3161 @_hidden
/opt/conda/lib/python3.7/site-packages/vaex/dataframe.py in evaluate(self, expression, i1, i2, out, selection, filtered, array_type, parallel, chunk_size, progress)
2996 return self.evaluate_iterator(expression, s1=i1, s2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size, progress=progress)
2997 else:
-> 2998 return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size, progress=progress)
2999
3000 @docsubst
/opt/conda/lib/python3.7/site-packages/vaex/dataframe.py in _evaluate_implementation(self, expression, i1, i2, out, selection, filtered, array_type, parallel, chunk_size, raw, progress)
6351 arrays[expression][i1:i2] = blocks[i]
6352 if expression_to_evaluate:
-> 6353 df.map_reduce(assign, lambda *_: None, expression_to_evaluate, progress=progress, ignore_filter=False, selection=selection, pre_filter=use_filter, info=True, to_numpy=False, name="evaluate")
6354 def finalize_result(expression):
6355 expression_obj = expression
/opt/conda/lib/python3.7/site-packages/vaex/dataframe.py in map_reduce(self, map, reduce, arguments, progress, delay, info, to_numpy, ignore_filter, pre_filter, name, selection)
427 progressbar.add_task(task, f'map reduce: {name}')
428 task = self.executor.schedule(task)
--> 429 return self._delay(delay, task)
430
431 def apply(self, f, arguments=None, vectorize=False, multiprocessing=True):
/opt/conda/lib/python3.7/site-packages/vaex/dataframe.py in _delay(self, delay, task, progressbar)
1686 return task
1687 else:
-> 1688 self.execute()
1689 return task.get()
1690
/opt/conda/lib/python3.7/site-packages/vaex/dataframe.py in execute(self)
410 print(repr(task))
411 if self.executor.tasks:
--> 412 self.executor.execute()
413
414 async def execute_async(self):
/opt/conda/lib/python3.7/site-packages/vaex/execution.py in execute(self)
306
307 def execute(self):
--> 308 for _ in self.execute_generator():
309 pass # just eat all elements
310
/opt/conda/lib/python3.7/site-packages/vaex/execution.py in execute_generator(self, use_async)
433 dataset.row_count,
434 progress=progress,
--> 435 cancel=lambda: self._cancel(run), unpack=True, run=run, use_async=use_async)
436 duration_wallclock = time.time() - t0
437 logger.debug("executing took %r seconds", duration_wallclock)
/opt/conda/lib/python3.7/site-packages/vaex/multithreading.py in map(self, callable, iterator, count, on_error, progress, cancel, unpack, use_async, **kwargs_extra)
104 iterator = (loop.run_in_executor(self, lambda value=value: wrapped(value)) for value in cancellable_iter())
105 else:
--> 106 iterator = super(ThreadPoolIndex, self).map(wrapped, cancellable_iter())
107 total = 0
108 iterator = iter(buffer(iterator, self._max_workers + 3))
/opt/conda/lib/python3.7/concurrent/futures/_base.py in map(self, fn, timeout, chunksize, *iterables)
573 end_time = timeout + time.monotonic()
574
--> 575 fs = [self.submit(fn, *args) for args in zip(*iterables)]
576
577 # Yield must be hidden in closure so that the futures are submitted
/opt/conda/lib/python3.7/concurrent/futures/_base.py in <listcomp>(.0)
573 end_time = timeout + time.monotonic()
574
--> 575 fs = [self.submit(fn, *args) for args in zip(*iterables)]
576
577 # Yield must be hidden in closure so that the futures are submitted
/opt/conda/lib/python3.7/site-packages/vaex/multithreading.py in cancellable_iter()
90 chunk_iterator = iterator
91 def cancellable_iter():
---> 92 for value in chunk_iterator:
93 yield value
94 if cancelled:
/opt/conda/lib/python3.7/site-packages/vaex/dataset.py in chunk_iterator(self, columns, chunk_size, reverse)
647 raise KeyError(f'Oops, you tried to get column {name}, but you renamed it to {rename}')
648 columns = [self.reverse.get(name, name) for name in columns]
--> 649 for i1, i2, chunks in self.original.chunk_iterator(columns, chunk_size, reverse=reverse):
650 yield i1, i2, {self.renaming.get(name, name): ar for name, ar in chunks.items()}
651
/opt/conda/lib/python3.7/site-packages/vaex/dataset.py in chunk_iterator(self, columns, chunk_size, reverse)
1165 if column in self._dropped_names:
1166 raise KeyError(f'Oops, you tried to get column {column} while it is actually dropped')
-> 1167 yield from self.original.chunk_iterator(columns, chunk_size=chunk_size, reverse=reverse)
1168
1169 def hashed(self):
/opt/conda/lib/python3.7/site-packages/vaex/join.py in chunk_iterator(self, *args, **kwargs)
62
63 def chunk_iterator(self, *args, **kwargs):
---> 64 yield from self.original.chunk_iterator(*args, **kwargs)
65
66 def hashed(self):
/opt/conda/lib/python3.7/site-packages/vaex/dataset.py in chunk_iterator(self, columns, chunk_size, reverse)
1235 for (i1, i2, ichunks), (j1, j2, jchunks) in zip(
1236 self.left.chunk_iterator(columns_left, chunk_size, reverse=reverse),
-> 1237 self.right.chunk_iterator(columns_right, chunk_size, reverse=reverse)):
1238 # TODO: if one of the datasets does not respect the chunk_size (e.g. parquet)
1239 # this might fail
/opt/conda/lib/python3.7/site-packages/vaex/dataset.py in chunk_iterator(self, columns, chunk_size, reverse)
908 # TODO: we may be able to do this slightly more efficient by first
909 # materializing the columns
--> 910 yield from self._default_chunk_iterator(self._columns, columns, chunk_size, reverse=reverse)
911
912 def slice(self, start, end):
/opt/conda/lib/python3.7/site-packages/vaex/dataset.py in _default_chunk_iterator(self, array_map, columns, chunk_size, reverse)
506 def _default_chunk_iterator(self, array_map, columns, chunk_size, reverse=False):
507 for i1, i2, reader in self._default_lazy_chunk_iterator(array_map, columns, chunk_size, reverse):
--> 508 yield i1, i2, reader()
509
510 @abstractmethod
/opt/conda/lib/python3.7/site-packages/vaex/dataset.py in reader(i1, i2)
497 i2 = min((i + 1) * chunk_size, self.row_count)
498 def reader(i1=i1, i2=i2):
--> 499 chunks = {k: array_map[k][i1:i2] for k in columns}
500 length = i2 - i1
501 for name, chunk in chunks.items():
/opt/conda/lib/python3.7/site-packages/vaex/dataset.py in <dictcomp>(.0)
497 i2 = min((i + 1) * chunk_size, self.row_count)
498 def reader(i1=i1, i2=i2):
--> 499 chunks = {k: array_map[k][i1:i2] for k in columns}
500 length = i2 - i1
501 for name, chunk in chunks.items():
/opt/conda/lib/python3.7/site-packages/vaex/column.py in __getitem__(self, slice)
375 take_indices[mask] = 0
376 if isinstance(ar_unfiltered, supported_arrow_array_types):
--> 377 ar = ar_unfiltered.take(vaex.array_types.to_arrow(take_indices))
378 else:
379 ar = ar_unfiltered[take_indices]
/opt/conda/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.Array.take()
/opt/conda/lib/python3.7/site-packages/pyarrow/compute.py in take(data, indices, boundscheck, memory_pool)
452 """
453 options = TakeOptions(boundscheck=boundscheck)
--> 454 return call_function('take', [data, indices], options, memory_pool)
455
456
/opt/conda/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
/opt/conda/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
/opt/conda/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
/opt/conda/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: Index 21245 out of bounds
```
| 2hard
|
Title: Can image recall be improved by adding an instruction during text vectorization?
Body: Currently, is it possible to improve image recall by adding an instruction such as "Generate a vector for this sentence for retrieving images:" during text vectorization? | 1medium
|
Title: `IndentationError` when using a multi-line decorator for a class method
Body: #### Problem Description
I encounter an `IndentationError` when trying to generate the documentation of a class which contains a multi-line decorator on a class method.
I believe that this is a limitation of the [`_dedent` method](https://github.com/mitmproxy/pdoc/blob/main/pdoc/doc_ast.py#L210) but I do not have a good proposal for a solution yet. If we can discuss and figure out a good approach on how to resolve this I am happy to help with fixing the issue :+1:
What would need to happen is that the actual function definition also gets dedented, rather than simply the line following the decorator; a rough sketch of that idea follows, and the reproduction example is further below.
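As a starting point, a minimal sketch of what I mean (assuming all lines share the common indentation, so `textwrap.dedent` semantics are enough):
```python
import ast
import textwrap

def parse_function_dedented(source: str) -> ast.Module:
    # Dedent the decorator lines *and* the def line together, so the
    # reproduction example below parses without an IndentationError.
    return ast.parse(textwrap.dedent(source))
```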
#### Steps to reproduce the behavior:
I was able to break down the issue to the following example:
```python
import pytest


class TestDummy:
    """Test Dummy."""

    @pytest.mark.parametrize(
        "arg,string",
        [
            ["hello", "world"],
        ],
    )
    def test_dummy(self, arg, string):
        """Dummy test method."""
        assert arg != string
```
<details>
<summary>
Using `pdoc -o tmp example.py` generates the following error:
</summary>
```python-traceback
Traceback (most recent call last):
File "/home/max/Git/cobib/.direnv/python-3.9.2/bin/pdoc", line 33, in <module>
sys.exit(load_entry_point('pdoc', 'console_scripts', 'pdoc')())
File "/home/max/Git/pdoc/pdoc/__main__.py", line 153, in cli
pdoc.pdoc(
File "/home/max/Git/pdoc/pdoc/__init__.py", line 386, in pdoc
write(doc.Module(m))
File "/home/max/Git/pdoc/pdoc/__init__.py", line 355, in write
outfile.write_bytes(r(mod).encode())
File "/home/max/Git/pdoc/pdoc/__init__.py", line 367, in r
return render.html_module(module=mod, all_modules=all_modules)
File "/home/max/Git/pdoc/pdoc/render.py", line 70, in html_module
return env.select_template(
File "/home/max/Git/cobib/.direnv/python-3.9.2/lib/python3.9/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/home/max/Git/cobib/.direnv/python-3.9.2/lib/python3.9/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/home/max/Git/cobib/.direnv/python-3.9.2/lib/python3.9/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/home/max/Git/pdoc/pdoc/templates/default/module.html.jinja2", line 650, in top-level template code
{%- if loop.nextitem %}.{% endif -%}
File "/home/max/Git/pdoc/pdoc/templates/frame.html.jinja2", line 18, in top-level template code
<body>{% block body %}{% endblock %}</body>
File "/home/max/Git/pdoc/pdoc/templates/default/module.html.jinja2", line 722, in block "body"
{% block module_contents %}
File "/home/max/Git/pdoc/pdoc/templates/default/module.html.jinja2", line 729, in block "module_contents"
{{ member(m) }}
File "/home/max/Git/cobib/.direnv/python-3.9.2/lib/python3.9/site-packages/jinja2/runtime.py", line 679, in _invoke
rv = self._func(*arguments)
File "/home/max/Git/pdoc/pdoc/templates/default/module.html.jinja2", line 557, in template
{{ function(doc) }}
File "/home/max/Git/cobib/.direnv/python-3.9.2/lib/python3.9/site-packages/jinja2/runtime.py", line 679, in _invoke
rv = self._func(*arguments)
File "/home/max/Git/pdoc/pdoc/templates/default/module.html.jinja2", line 523, in template
{{ decorators(fn) }}
File "/home/max/Git/cobib/.direnv/python-3.9.2/lib/python3.9/site-packages/jinja2/runtime.py", line 679, in _invoke
rv = self._func(*arguments)
File "/home/max/Git/pdoc/pdoc/templates/default/module.html.jinja2", line 514, in template
{% for d in doc.decorators if not d.startswith("@_") %}
File "/home/max/Git/cobib/.direnv/python-3.9.2/lib/python3.9/site-packages/jinja2/environment.py", line 471, in getattr
return getattr(obj, attribute)
File "/usr/lib/python3.9/functools.py", line 969, in __get__
val = self.func(instance)
File "/home/max/Git/pdoc/pdoc/doc.py", line 767, in decorators
for t in doc_ast.parse(obj).decorator_list:
File "/home/max/Git/pdoc/pdoc/doc_ast.py", line 60, in parse
return _parse_function(src)
File "/home/max/Git/pdoc/pdoc/doc_ast.py", line 196, in _parse_function
tree = ast.parse(_dedent(source))
File "/usr/lib/python3.9/ast.py", line 50, in parse
return compile(source, filename, mode, flags,
File "<unknown>", line 7
def test_dummy(self, arg, string):
IndentationError: unexpected indent
```
</details>
For additional details, this is what `_dedent` generates at the moment:
```python
@pytest.mark.parametrize(
    "arg,string",
    [
        ["hello", "world"],
    ],
)
    def test_dummy(self, arg, string):
        """Dummy test method."""
        assert arg != string
```
The dedented string is not causing any problems but the remaining indent of the `def test_dummy` line eventually causes the error to arise.
#### System Information
A fresh checkout of `main`:
```
% pdoc --version
pdoc: 6.4.2 (+15, commit 8d133d2)
Python: 3.9.2
Platform: Linux-5.11.11-arch1-1-x86_64-with-glibc2.33
```
| 1medium
|
Title: Cannot turn off sampler injection at inference time.
Body: ### Bug description
I want to use a custom distributed batch sampler at inference time.
The sampler looks like this:
```python
from typing import Iterator, Optional

from torch.utils.data import Dataset
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data.sampler import T_co


class DistributedInferenceBatchSampler(DistributedSampler):
    def __init__(self, dataset: Dataset,
                 batch_size: int = 1,
                 num_replicas: Optional[int] = None,
                 rank: Optional[int] = None,
                 shuffle: bool = False,
                 seed: int = 0,
                 drop_last: bool = False,
                 ) -> None:
        super().__init__(dataset, num_replicas=num_replicas, rank=rank,
                         shuffle=shuffle, seed=seed, drop_last=drop_last)
        # do stuff
        # sort data indices by datapoint length, batch up
        # subsample batches for current rank
        self.batches = []  # nested list [[b1_1, b1_2, ...], [b2_1, b2_2, ...], ...]

    def __iter__(self) -> Iterator[T_co]:
        return iter(self.batches)

    def __len__(self) -> int:
        return len(self.batches)
```
I use this dataloader inside my data module:
```python
class DataModule(LightningDataModule):
    # ...
    def predict_dataloader(self, rank=None, num_replicas=None):
        # define Dataset 'data'
        bsampler = DistributedInferenceBatchSampler(dataset=data,
                                                    batch_size=4,
                                                    num_replicas=num_replicas,
                                                    rank=rank)
        data_loader = DataLoader(data,
                                 batch_sampler=bsampler)
        return data_loader
```
I'm running inference like so:
```python
trainer = Trainer(strategy='auto',
                  use_distributed_sampler=False,  # using custom distributed batch sampler
                  accelerator=gpu,
                  deterministic=False,
                  enable_progress_bar=True,
                  enable_model_summary=True,
                  devices=devices)
trainer.predict(model=self._model, datamodule=self._datamodule)
```
However, Lightning tries to replace my batch sampler despite the `use_distributed_sampler=False` flag, because it always does so in predict mode, and fails because the sampler doesn't have the same signature as a PyTorch `BatchSampler`.
I've tried wrapping my custom `DistributedInferenceBatchSampler` like so:
```python
class BatchSamplerWrapper(BatchSampler):
    def __init__(self, sampler, batch_size=1, drop_last=False):
        self.sampler = sampler
        self.batch_size = batch_size  # ignored
        self.drop_last = drop_last  # ignored

    def __iter__(self):
        for batch in self.sampler:
            yield batch

    def __len__(self):
        return len(self.sampler)


class DataModule(LightningDataModule):
    # ...
    def predict_dataloader(self, rank=None, num_replicas=None):
        # define Dataset 'data'
        bsampler = DistributedInferenceBatchSampler(dataset=data,
                                                    batch_size=4,
                                                    num_replicas=num_replicas,
                                                    rank=rank)
        wrapper = BatchSamplerWrapper(bsampler, batch_size=4, drop_last=False)
        data_loader = DataLoader(data,
                                 batch_sampler=wrapper)
        return data_loader
```
However, Lightning replaces my `bsampler` inside the wrapper with a `torch.utils.data.sampler.SequentialSampler` which leads to `BatchSamplerWrapper.__iter__()` not having the intended behaviour. It returns an `int` rather than a list of `int`s, leading to:
```
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Error executing job with overrides: []
Traceback (most recent call last):
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/codebase/IgFlow/multiflow/experiments/inference.py", line 18, in sample
run.sample()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/codebase/IgFlow/multiflow/experiments/model_run.py", line 125, in sample
trainer.predict(model=self._model, datamodule=self._datamodule)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 864, in predict
return call._call_and_handle_interrupt(
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 102, in launch
return function(*args, **kwargs)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 903, in _predict_impl
results = self._run(model, ckpt_path=ckpt_path)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 989, in _run
results = self._run_stage()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1030, in _run_stage
return self.predict_loop.run()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator
return loop_run(self, *args, **kwargs)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/loops/prediction_loop.py", line 119, in run
batch, batch_idx, dataloader_idx = next(data_fetcher)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/loops/fetchers.py", line 127, in __next__
batch = super().__next__()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/loops/fetchers.py", line 56, in __next__
batch = next(self.iterator)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/utilities/combined_loader.py", line 326, in __next__
out = next(self._iterator)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/utilities/combined_loader.py", line 132, in __next__
out = next(self.iterators[0])
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/_utils.py", line 644, in reraise
raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
TypeError: 'int' object is not iterable
```
I just want to turn off this sampler-replacing behaviour. I have a similar setup during training (rather than inference) and that works fine (no wrappers required, either).
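One workaround I'm considering, as a minimal sketch (not an official Lightning API, and the sharding logic here is illustrative): since Lightning re-instantiates the batch sampler as `cls(sampler, batch_size, drop_last)` with a fresh `SequentialSampler`, the wrapper can ignore the injected sampler's order and rebuild the rank-aware batching itself:
```python
# Hedged sketch: Lightning re-creates the batch sampler with a fresh
# SequentialSampler, so only len(sampler) is consulted here and the
# rank-aware sharding is rebuilt inside __iter__ (drop_last is ignored).
import torch.distributed as dist
from torch.utils.data import BatchSampler


class RankAwareBatchSampler(BatchSampler):
    def __init__(self, sampler, batch_size=1, drop_last=False):
        self.sampler = sampler  # only its length is used
        self.batch_size = batch_size
        self.drop_last = drop_last

    def __iter__(self):
        rank = dist.get_rank() if dist.is_initialized() else 0
        world = dist.get_world_size() if dist.is_initialized() else 1
        indices = list(range(len(self.sampler)))[rank::world]  # shard by rank
        for i in range(0, len(indices), self.batch_size):
            yield indices[i : i + self.batch_size]  # always a list of ints
```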
### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
_No response_
### More info
_No response_ | 2hard
|
Title: Intermittent proxy-with-auth not being set before page load
Body: ## Intermittent proxy-with-auth not being set before page load
Since the Manifest V3 extension conversion, there is a short delay before the auth is set, which may lead to an auth prompt appearing if navigating to a page immediately after the web browser is opened when using proxy-with-auth. This delay is a fraction of a second. It can be avoided by adding a tiny delay after the web browser is opened (before navigating to a URL).
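A minimal sketch of that workaround (assuming SeleniumBase's `Driver` with a `proxy` string; all values are illustrative):
```python
# Sketch: give the MV3 extension a moment to register the proxy auth
# handler before the first navigation (proxy string is illustrative).
import time

from seleniumbase import Driver

driver = Driver(proxy="user:pass@host:port")  # proxy-with-auth
time.sleep(0.2)  # fraction-of-a-second delay avoids the auth-prompt race
driver.get("https://example.com")
driver.quit()
``` | 1medium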
|
Title: Use PyPI token in GitHub actions
Body: I just noticed this:
https://github.com/pytest-dev/pytest-qt/blob/e8080068d3ed215051231d5d5fd43fd1268e4eaa/.github/workflows/main.yml#L83-L86
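For context, a hedged sketch of the token-based publish step this could move to (the action is `pypa/gh-action-pypi-publish`; the secret name is assumed):
```yaml
- name: Publish to PyPI
  if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags')
  uses: pypa/gh-action-pypi-publish@release/v1
  with:
    user: __token__
    password: ${{ secrets.pypi_token }}
```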
It looks like https://github.com/pypa/pypi-support/issues/98 was taken care of. | 1medium
|
Title: SmartThing Washer start/stop
Body: ### The problem
The washing machine can't be started or stopped with the switch entity included in the Home Assistant SmartThings integration.
The command should use the `samsungce.washerOperatingState` capability instead of the switch, with the `start` command and `pause` or `cancel`.
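For illustration, a command payload in the SmartThings commands-API shape (a hedged sketch; the exact arguments may differ per device):
```json
{
  "commands": [
    {
      "component": "main",
      "capability": "samsungce.washerOperatingState",
      "command": "start",
      "arguments": []
    }
  ]
}
```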
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
SmartThings
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/smartthings
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | 1medium
|
Title: Support additional table arguments
Body: Simply follow the SQLAlchemy ORM way, support `__table_args__`:
http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/table_config.html
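For illustration, the SQLAlchemy ORM pattern being referenced, as a generic sketch (model and constraint names are made up):
```python
from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    __table_args__ = (
        UniqueConstraint("email", name="uq_users_email"),
        {"mysql_engine": "InnoDB"},  # dialect options go in the trailing dict
    )

    id = Column(Integer, primary_key=True)
    email = Column(String(255), nullable=False)
``` | 1medium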
|
Title: ChunkStore - Delete does not update metadata properly
Body: `delete` does not update the `rows` or `chunk_count` metadata when a `chunk_range` is specified.
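A hedged repro sketch (symbol and dates are illustrative; `get_info` is consulted for the row/chunk metadata):
```python
import pandas as pd
from arctic import Arctic, CHUNK_STORE

store = Arctic("localhost")
store.initialize_library("test.chunks", lib_type=CHUNK_STORE)
lib = store["test.chunks"]

# One chunk per day, ten rows total.
df = pd.DataFrame(
    {"v": range(10)},
    index=pd.date_range("2016-01-01", periods=10, name="date"),
)
lib.write("sym", df, chunk_size="D")

lib.delete("sym", chunk_range=pd.date_range("2016-01-01", "2016-01-05"))
print(lib.get_info("sym"))  # metadata still reflects pre-delete row/chunk counts
```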
| 1medium
|
Title: TextFieldTensors are not supported as input to a model Head
Body: <!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x] I have verified that the issue exists against the `main` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x] I have included in the "Environment" section below the output of `pip freeze`.
- [ ] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
I'm trying to pass a `TextFieldTensor` to a `Head` using the `multitask` Model.
For context, the `ClassifierHead` doesn't support `TextFieldTensors` for sequence decoding tasks. I created a simple head which inherits from `AutoRegressiveSeqDecoder` and puts the encoded text and mask into a dictionary before passing them to the decoder's `forward`.
```python
import torch
from overrides import overrides

from allennlp.data import TextFieldTensors
from allennlp.models.heads import Head
# Import path may vary by allennlp-models version:
from allennlp_models.generation import AutoRegressiveSeqDecoder


@Head.register("seq_decoder_tokens")
class SeqDecoderTokensHead(AutoRegressiveSeqDecoder):
    @overrides
    def forward(
        self,
        encoded_text: torch.Tensor,
        encoded_text_mask: torch.Tensor,
        target_tokens: TextFieldTensors = None,
    ) -> dict[str, torch.Tensor]:
        # Repackage the backbone outputs into the dict the decoder expects.
        encoder_out = {
            "encoder_outputs": encoded_text,
            "source_mask": encoded_text_mask,
        }
        return super().forward(encoder_out, target_tokens)
```
The error, as in the traceback below, is with the `make_inputs_for_task` function.
https://github.com/allenai/allennlp/blob/7d4a67263d7a210aca22d4f2b03e8568d3c34a48/allennlp/models/multitask.py#L114-L119
From the type annotation, I'm assuming that it might not be supported by the model so I might be barking up the wrong tree? I assumed that I could just pass the variable to the Head and it would work.
I'm not sure this is the best solution but I've made a fork with one possible solution, if it helps?
https://github.com/amitkparekh/allennlp/commit/f1e0630d83ed0d459f1efa167e4b2bad76f80bb2
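For clarity, the kind of change I mean, as a hedged sketch (it mirrors the closure inside `MultiTaskModel.forward`, where `task_indices` comes from the enclosing scope; this is not necessarily identical to the fork):
```python
import torch


def make_inputs_for_task(task: str, whole_batch_input):
    # Recurse into dict-valued inputs such as TextFieldTensors instead of
    # assuming a list or a tensor; task_indices is the closure variable
    # already present in multitask.py.
    if isinstance(whole_batch_input, dict):
        return {
            key: make_inputs_for_task(task, value)
            for key, value in whole_batch_input.items()
        }
    if isinstance(whole_batch_input, torch.Tensor):
        task_indices_tensor = torch.LongTensor(task_indices[task])
        return torch.index_select(whole_batch_input, 0, task_indices_tensor)
    return [whole_batch_input[i] for i in task_indices[task]]
```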
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/models/multitask.py", line 125, in <listcomp>
return [whole_batch_input[i] for i in task_indices[task]]
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/models/multitask.py", line 125, in make_inputs_for_task
return [whole_batch_input[i] for i in task_indices[task]]
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/models/multitask.py", line 138, in <dictcomp>
head_arguments = {key: make_inputs_for_task(head_name, value) for key, value in head_arguments.items()}
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/models/multitask.py", line 138, in forward
head_arguments = {key: make_inputs_for_task(head_name, value) for key, value in head_arguments.items()}
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/training/gradient_descent_trainer.py", line 351, in batch_outputs
output_dict = self._pytorch_model(**batch)
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/training/gradient_descent_trainer.py", line 458, in _train_epoch
batch_outputs = self.batch_outputs(batch, for_training=True)
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/training/gradient_descent_trainer.py", line 727, in _try_train
train_metrics = self._train_epoch(epoch)
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/training/gradient_descent_trainer.py", line 706, in train
metrics, epoch = self._try_train()
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/commands/train.py", line 543, in run
return self.trainer.train()
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/commands/train.py", line 470, in _train_worker
metrics = train_loop.run()
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/commands/train.py", line 240, in train_model
model = _train_worker(
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/commands/train.py", line 171, in train_model_from_file
return train_model(
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/commands/train.py", line 111, in train_model_from_args
train_model_from_file(
File "/Users/amit/Develop/pokerface/.venv/lib/python3.9/site-packages/allennlp/commands/__init__.py", line 121, in main
args.func(args)
File "/Users/amit/Develop/pokerface/scraps/debug_allennlp.py", line 35, in <module>
main()
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 268, in run_path
return _run_module_code(code, init_globals, run_name,
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main (Current frame)
return _run_code(code, main_globals, None,
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS: OS X
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.9.5
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
allennlp @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b1/ea/84/1d6e28cb885e4d55ac8939f78e408f7deb86ea845630a64f10d7700a62/allennlp-2.5.0-py3-none-any.whl
allennlp-models @ file:///Users/amit/Library/Caches/pypoetry/artifacts/77/31/1f/2ef3c0ed6db0dc41de744411f56b478ab042f091dda6cd079b72b12c53/allennlp_models-2.5.0-py3-none-any.whl
appdirs @ file:///Users/amit/Library/Caches/pypoetry/artifacts/47/cf/4f/4ef02fb715aa36daeebad18cc5570126159c659c41c7b5ec46a7387d9b/appdirs-1.4.4-py2.py3-none-any.whl
appnope @ file:///Users/amit/Library/Caches/pypoetry/artifacts/fd/ae/b8/2382438588ec752ed602c2dcab2a0678b30ff15f2d9a30267ff97ecd64/appnope-0.1.2-py2.py3-none-any.whl
argon2-cffi @ file:///Users/amit/Library/Caches/pypoetry/artifacts/3c/87/0e/fc0a0440e3e84e11c88d9d2d049f9aff2fdc4e7493dd19af67381b1d05/argon2_cffi-20.1.0-cp37-abi3-macosx_10_6_intel.whl
astor @ file:///Users/amit/Library/Caches/pypoetry/artifacts/1f/03/36/982d1222edac5e8cb8ac6e0464249747fa800d4fb04728a99153ecfe4d/astor-0.8.1-py2.py3-none-any.whl
async-generator @ file:///Users/amit/Library/Caches/pypoetry/artifacts/fe/72/db/709736555b02c2d1ae90038b7b05138b15e24edde3aa7556fc2507a90f/async_generator-1.10-py3-none-any.whl
attrs @ file:///Users/amit/Library/Caches/pypoetry/artifacts/6f/a9/ee/569c37f69a8c365ee41d2340aeac0214ee8c0086b8d8db43a21545204b/attrs-21.2.0-py2.py3-none-any.whl
backcall @ file:///Users/amit/Library/Caches/pypoetry/artifacts/43/8e/e8/4e598704edf6fb4a53d552ea511c04e9958dcf850897760e5387878b99/backcall-0.2.0-py2.py3-none-any.whl
backports.csv @ file:///Users/amit/Library/Caches/pypoetry/artifacts/30/84/1a/81a42cff31ce7f0b7a86ab54e2cbcb610d96fa8b735d63bdb7251e91cb/backports.csv-1.0.7-py2.py3-none-any.whl
bandit @ file:///Users/amit/Library/Caches/pypoetry/artifacts/6a/05/4f/98680ab175e4b595c2d1b775974c208b6b20c05448a52944374c2db4b0/bandit-1.7.0-py3-none-any.whl
beartype @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c3/fe/bd/c04bccf2fa951904264c6dc16cbdff35bc2aec170d65b9afb879841dfa/beartype-0.7.1-py3-none-any.whl
beautifulsoup4 @ file:///Users/amit/Library/Caches/pypoetry/artifacts/eb/47/47/287c1b8a386f9437d562f9221ae959756bc5bbfcd541c38c17968dfe8a/beautifulsoup4-4.9.3-py3-none-any.whl
black @ file:///Users/amit/Library/Caches/pypoetry/artifacts/6a/38/11/b77f947a81ed86d08787f98d076035bedb21abd3d3c57129221264c3ea/black-21.6b0-py3-none-any.whl
bleach @ file:///Users/amit/Library/Caches/pypoetry/artifacts/15/bd/f0/eaed67c8e6d37dda902603474339528f29bde3f7ecc2bb4b8874e0da87/bleach-3.3.0-py2.py3-none-any.whl
blis @ file:///Users/amit/Library/Caches/pypoetry/artifacts/96/b3/59/60398be6c97784bb3ba6ef5d76e86ce6bc530e27ca83de6d106b5dadd3/blis-0.7.4-cp39-cp39-macosx_10_9_x86_64.whl
boto3 @ file:///Users/amit/Library/Caches/pypoetry/artifacts/5e/59/0d/9ce4b54813b37237c51d31bb0d3c0853123334f6309a2088f81ec49c03/boto3-1.17.109-py2.py3-none-any.whl
botocore @ file:///Users/amit/Library/Caches/pypoetry/artifacts/34/8e/85/14e136ebe48249de4155082aa1b2aa772ec3f7d415dd1c60067ed3a2ad/botocore-1.20.109-py2.py3-none-any.whl
cachetools @ file:///Users/amit/Library/Caches/pypoetry/artifacts/04/ca/d7/8af05dc8ccae1212d4151afd99960369c2415b26bc13ed3bbb288c4f5a/cachetools-4.2.2-py3-none-any.whl
catalogue @ file:///Users/amit/Library/Caches/pypoetry/artifacts/af/cf/ab/153c9326701d6adc746e129e3e05991a55bd43b2f19e3c52d7ccc9a7b4/catalogue-1.0.0-py2.py3-none-any.whl
certifi @ file:///Users/amit/Library/Caches/pypoetry/artifacts/cd/2c/dc/e5bfda594e18f3f1e9af9f11e13581014d821425f325f3220b3ed2c337/certifi-2021.5.30-py2.py3-none-any.whl
cffi @ file:///Users/amit/Library/Caches/pypoetry/artifacts/6e/aa/04/2c3c9401654c8f5580dc8965817a99e8ad464a0987e17149061aadfcbf/cffi-1.14.6-cp39-cp39-macosx_10_9_x86_64.whl
cfgv @ file:///Users/amit/Library/Caches/pypoetry/artifacts/6b/52/b7/27617ac43f25c9962779813593809288745c414fd878b968cc3d91ca6c/cfgv-3.3.0-py2.py3-none-any.whl
chardet @ file:///Users/amit/Library/Caches/pypoetry/artifacts/11/63/f9/797eda27963177a6b75a340f62aa194d462ea69e6b0dbb77a651fa2b62/chardet-4.0.0-py2.py3-none-any.whl
checklist @ file:///Users/amit/Library/Caches/pypoetry/artifacts/04/84/f3/1324eec13577715f52121b3073ab37792c08483aae7faa7d22b7dd5e1d/checklist-0.0.11.tar.gz
cheroot @ file:///Users/amit/Library/Caches/pypoetry/artifacts/3c/cf/87/c9bb0e3b0d4c43affeb8f9714d5791b61c97750bc3a2ee35d276d425b5/cheroot-8.5.2-py2.py3-none-any.whl
CherryPy @ file:///Users/amit/Library/Caches/pypoetry/artifacts/dd/d5/0c/5289a45f52e9aa001f7b8c2b9c792377e79ad572bc759f1a692d160818/CherryPy-18.6.1-py2.py3-none-any.whl
click @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ae/32/83/e159324c1bd58177322f4e45f598d500fe22544bff20f53f55cf749da8/click-8.0.1-py3-none-any.whl
colorama @ file:///Users/amit/Library/Caches/pypoetry/artifacts/9e/b3/11/7d87ac44fdb2d557301f1f4086a37c080d1482a98751abe7cdbabbad26/colorama-0.4.4-py2.py3-none-any.whl
commonmark @ file:///Users/amit/Library/Caches/pypoetry/artifacts/11/56/f6/d054064b623fab5c7e4420f60d931f49fea2dacdebe1dc991201010c84/commonmark-0.9.1-py2.py3-none-any.whl
configparser @ file:///Users/amit/Library/Caches/pypoetry/artifacts/4e/17/fd/30c9e84dc4b951f3587a4d5eb2894e20105120991793ff3e9f3a60d787/configparser-5.0.2-py3-none-any.whl
conllu @ file:///Users/amit/Library/Caches/pypoetry/artifacts/a9/0b/c7/419ecfd4c8064217550121b01eb4c617ebb82d54c06c775a2d683abbe4/conllu-4.4-py2.py3-none-any.whl
cryptography @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c7/6d/36/40b404429242377c0424cb9957a04887a4be7399a62c1505197845c09f/cryptography-3.4.7-cp36-abi3-macosx_10_10_x86_64.whl
cymem @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b8/f4/20/752f31c5ea9976672f58722949ff87b94832958712b6e3ba60e831421e/cymem-2.0.5-cp39-cp39-macosx_10_9_x86_64.whl
darglint @ file:///Users/amit/Library/Caches/pypoetry/artifacts/bb/64/25/6fa28703f05a524eef407d4425af6290f242eb131574704386e677624b/darglint-1.8.0-py3-none-any.whl
decorator @ file:///Users/amit/Library/Caches/pypoetry/artifacts/99/d8/37/35167f3a4175b089109325a5ee11846ac4de416442972573b87351396d/decorator-5.0.9-py3-none-any.whl
defusedxml @ file:///Users/amit/Library/Caches/pypoetry/artifacts/2b/69/07/7b13f7eaf3a4d7af737dcebe24d3d17b1c2a2f457fbddf746f5642bc43/defusedxml-0.7.1-py2.py3-none-any.whl
dill @ file:///Users/amit/Library/Caches/pypoetry/artifacts/26/56/9e/73963d2285e6c700801f185e8c1d28f1f971c09aaa411cec9b799a5fca/dill-0.3.4-py2.py3-none-any.whl
distlib @ file:///Users/amit/Library/Caches/pypoetry/artifacts/6f/35/97/6a255392aaa7200818a8cd0f4b014ef2e0a086bab49dd568780f367ef5/distlib-0.3.2-py2.py3-none-any.whl
docker-pycreds @ file:///Users/amit/Library/Caches/pypoetry/artifacts/9b/0b/be/891931da9caf5e55102337a635d3a7eeeb92c93b4bd39c24d0810f1f25/docker_pycreds-0.4.0-py2.py3-none-any.whl
docutils @ file:///Users/amit/Library/Caches/pypoetry/artifacts/43/28/92/79000933ad30371dc938d9b368a9000e20ac0bb467a716c19ef1fbd3c7/docutils-0.17.1-py2.py3-none-any.whl
entrypoints @ file:///Users/amit/Library/Caches/pypoetry/artifacts/63/c1/af/bbfdd91bcb544e62ac8f1567ef23c243cb188d1a9cb933532999c9bbb0/entrypoints-0.3-py2.py3-none-any.whl
eradicate @ file:///Users/amit/Library/Caches/pypoetry/artifacts/12/ce/ac/197035fe6d51568abb7ea160f5ad416d2164a2010005e8356b8229e550/eradicate-2.0.0.tar.gz
feedparser @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d1/81/87/0f3c1c0b02176b2bf05af85261d8ce7522e4d241e5d9f7b3f0ec4f2a10/feedparser-6.0.8-py3-none-any.whl
filelock @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7e/c4/97/2cfbeab3cc292d0b4290cb7cab0b969b3002dc24f6dd5944cbe340e684/filelock-3.0.12-py3-none-any.whl
flake8 @ file:///Users/amit/Library/Caches/pypoetry/artifacts/3f/57/d5/11722093c13092cc3bfc3dd7c88aef6f8e4d5ac97cfe5fd054d5aba412/flake8-3.9.2-py2.py3-none-any.whl
flake8-bandit @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7e/e4/46/e15782d941f9cde39b64ca5b636180f47573f2b2c9315be56b55152f17/flake8_bandit-2.1.2.tar.gz
flake8-broken-line @ file:///Users/amit/Library/Caches/pypoetry/artifacts/51/ea/87/37348b281b73d7df44fc46b09c0430e2984e991df11998e2e9bb459fce/flake8_broken_line-0.3.0-py3-none-any.whl
flake8-bugbear @ file:///Users/amit/Library/Caches/pypoetry/artifacts/15/95/75/8c7a4504d7eda1a394c3d14349b88577ff2d98d59941944c95bd8672ba/flake8_bugbear-21.4.3-py36.py37.py38-none-any.whl
flake8-commas @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c3/47/1d/7f7fac0c58b2bd2bf7361bcba0bceba1c81c365cab5e1de352fa7fac68/flake8_commas-2.0.0-py2.py3-none-any.whl
flake8-comprehensions @ file:///Users/amit/Library/Caches/pypoetry/artifacts/4e/10/91/04adae987aa18ee9463db75694dccee7bf7d5518118462f18252bd3e0a/flake8_comprehensions-3.5.0-py3-none-any.whl
flake8-debugger @ file:///Users/amit/Library/Caches/pypoetry/artifacts/66/04/47/7bef98a8d237eb17cbfbcb803343be1c79e2c0674ceba163717b6c8e1b/flake8_debugger-4.0.0-py3-none-any.whl
flake8-docstrings @ file:///Users/amit/Library/Caches/pypoetry/artifacts/e0/85/e9/6b482a11d48cf26e1170d9f5bf0b044a5a6c9b816ffe70945e90fc3e56/flake8_docstrings-1.6.0-py2.py3-none-any.whl
flake8-eradicate @ file:///Users/amit/Library/Caches/pypoetry/artifacts/fc/3d/a0/58427b14b0a6d33587f3d6896e695615272c37c3ff2c89d6155b5155f6/flake8_eradicate-1.1.0-py3-none-any.whl
flake8-isort @ file:///Users/amit/Library/Caches/pypoetry/artifacts/11/37/1c/68fb64c6704b9c2468f711b83090590abc2c8295eeafbac9a167f32e0a/flake8_isort-4.0.0-py2.py3-none-any.whl
flake8-polyfill @ file:///Users/amit/Library/Caches/pypoetry/artifacts/28/17/cc/952c11cd5ffb2608137557f928dc4f9365b4dbe1e2a6015eeea78583ac/flake8_polyfill-1.0.2-py2.py3-none-any.whl
flake8-quotes @ file:///Users/amit/Library/Caches/pypoetry/artifacts/75/61/73/b33ec4139bc79d01b0748fb1ae5889fbd6bd544c6f35521fb4dd981b1a/flake8-quotes-3.2.0.tar.gz
flake8-rst-docstrings @ file:///Users/amit/Library/Caches/pypoetry/artifacts/69/c4/19/62afcee08756b2cc746fb4d585d02dd819556e0b0d30a6cc7acb6a101a/flake8_rst_docstrings-0.2.3-py3-none-any.whl
flake8-string-format @ file:///Users/amit/Library/Caches/pypoetry/artifacts/24/89/bb/7ce8e216f8c7289aa8a2ad4c44f30f87af6c7cdaf5d510110d566d66ec/flake8_string_format-0.3.0-py2.py3-none-any.whl
ftfy @ file:///Users/amit/Library/Caches/pypoetry/artifacts/da/3c/cd/9230817d48f70d575b300d027e9d3845ffe7dec691cfeca5959d022536/ftfy-6.0.3.tar.gz
future @ file:///Users/amit/Library/Caches/pypoetry/artifacts/f8/58/55/86be1f567b212fdd98854d12815964a49db8fb1bcff725018e5f95c61d/future-0.18.2.tar.gz
gitdb @ file:///Users/amit/Library/Caches/pypoetry/artifacts/82/af/0d/fc5992ac7ef8a227e6b9705aa6de550211814dd0318b857530d3306d02/gitdb-4.0.7-py3-none-any.whl
GitPython @ file:///Users/amit/Library/Caches/pypoetry/artifacts/89/d2/fc/aacfc97469f68f6b2da4db532a2e04ba3ba94e09728e3ea6d4444e0dd2/GitPython-3.1.18-py3-none-any.whl
google-api-core @ file:///Users/amit/Library/Caches/pypoetry/artifacts/5e/fa/f1/f104f6d061efd1ddb611c0cb4fcc9252c5e15459515b8991136cdf8886/google_api_core-1.31.0-py2.py3-none-any.whl
google-auth @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c7/96/64/403c53d4b77d3b92517777391e7598970fd56386f0bc6098f99801e59f/google_auth-1.32.1-py2.py3-none-any.whl
google-cloud-core @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ed/24/7b/fd351e28e6811f5ef5718d536c84e43df631d72487e2870d11a840b8b2/google_cloud_core-1.7.1-py2.py3-none-any.whl
google-cloud-storage @ file:///Users/amit/Library/Caches/pypoetry/artifacts/52/25/b2/b8d3db4a638b27dd17a77802f8eae114ac39325db23cd0f7c58781e35a/google_cloud_storage-1.38.0-py2.py3-none-any.whl
google-crc32c @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ac/6b/f2/c976910125e4a02d8b127cb4138fc9ac6167c7c971a76bc1bde216ee7b/google_crc32c-1.1.2-cp39-cp39-macosx_10_14_x86_64.whl
google-resumable-media @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7d/de/88/a84ae5cef0a9895612e0c4db686aab010ff824edd1aeceb3906c3cd7e0/google_resumable_media-1.3.1-py2.py3-none-any.whl
googleapis-common-protos @ file:///Users/amit/Library/Caches/pypoetry/artifacts/8d/40/1f/21e71977c3d547b27842caaafc2e420c9a6dd40a745cce4b61673e3be3/googleapis_common_protos-1.53.0-py2.py3-none-any.whl
h5py @ file:///Users/amit/Library/Caches/pypoetry/artifacts/21/20/f5/670da7f96cdc48c98aad052bf28c0efc57bf865ff1f9b2c50ae8d6b2a3/h5py-3.3.0-cp39-cp39-macosx_10_9_x86_64.whl
huggingface-hub @ file:///Users/amit/Library/Caches/pypoetry/artifacts/40/86/15/ea367547cd99a3a52f226c2b2b7fd5d28b0b7c0e1323eee2909b46cc31/huggingface_hub-0.0.13-py3-none-any.whl
hypothesis @ file:///Users/amit/Library/Caches/pypoetry/artifacts/62/44/a7/47fd46593add79f575e8418b9aac6d209749e120cdd07fbf309533c548/hypothesis-6.14.1-py3-none-any.whl
identify @ file:///Users/amit/Library/Caches/pypoetry/artifacts/88/59/31/ca587f87a94f22f5d7f086dd53b95373139c023f75974e9293131680b7/identify-2.2.11-py2.py3-none-any.whl
idna @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ef/7f/a9/19cc0b8760bdf6f696290c06532496f8bb29fbdaad044f852fed00ec82/idna-2.10-py2.py3-none-any.whl
iniconfig @ file:///Users/amit/Library/Caches/pypoetry/artifacts/fa/b0/c6/10cfac68c9e6de9d2a1678366ca89fd9292b362c1760dbe758e41691cb/iniconfig-1.1.1-py2.py3-none-any.whl
ipykernel @ file:///Users/amit/Library/Caches/pypoetry/artifacts/30/98/d5/b01a5306b6404f2c2862d298a4f8649f5c2579c0f5973f59838fe3fc2b/ipykernel-5.5.5-py3-none-any.whl
ipython @ file:///Users/amit/Library/Caches/pypoetry/artifacts/1e/56/a1/122ee0fd1f99ee5a3e81bfe0366288a488583ed0be289fb244e1663376/ipython-7.25.0-py3-none-any.whl
ipython-genutils @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b4/31/01/6f96480580d1674cab0b5e26dc9fca7bbdf7a2fd5811a7807a92436268/ipython_genutils-0.2.0-py2.py3-none-any.whl
ipywidgets @ file:///Users/amit/Library/Caches/pypoetry/artifacts/75/67/72/e14b677150e119dacbd4bf8559095cb47df20ea3c6ecc37bac97964b4a/ipywidgets-7.6.3-py2.py3-none-any.whl
iso-639 @ file:///Users/amit/Library/Caches/pypoetry/artifacts/f0/a6/d6/17ede193e09cf4ac45d787cbef2e1c78ff7dff5a58775728212204bbd0/iso-639-0.4.5.tar.gz
isort @ file:///Users/amit/Library/Caches/pypoetry/artifacts/e8/65/0b/2aee1c8017c733cd6111eb34137491a1456d443396b6d283fba8e0d4a3/isort-5.9.2-py3-none-any.whl
jaraco.classes @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d0/3c/d2/f157bfb781b294c3d68a29e898ab39327bc2397eea1b42cf8afdfda14b/jaraco.classes-3.2.1-py3-none-any.whl
jaraco.collections @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7d/46/b7/579da18cb7f7d7a7d7cd19be8b2d5f5cf3449b1a019c7ec307333b2346/jaraco.collections-3.3.0-py3-none-any.whl
jaraco.functools @ file:///Users/amit/Library/Caches/pypoetry/artifacts/4e/9d/9f/96376949fa50b18fc96d90f93d24360b467689922c512daa6beee4d08b/jaraco.functools-3.3.0-py3-none-any.whl
jaraco.text @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d8/d0/b6/f9d37687fabea73a57ffc167a243a09255bc57413476984bab40bd0984/jaraco.text-3.5.0-py3-none-any.whl
jedi @ file:///Users/amit/Library/Caches/pypoetry/artifacts/2a/5b/d6/62e1f4e7b392c3e7f8258bbe3159dff695814a46e65547cd547ca0fedb/jedi-0.18.0-py2.py3-none-any.whl
Jinja2 @ file:///Users/amit/Library/Caches/pypoetry/artifacts/21/2e/46/0a76ea6f6a15e594c9828a85a781f1cee8ed5a1b77e361305645f9e1f4/Jinja2-3.0.1-py3-none-any.whl
jmespath @ file:///Users/amit/Library/Caches/pypoetry/artifacts/2c/f0/52/b0ba93d941bd49c8719dee7ca27d2096bf96e17948667388c3ee2ac8f8/jmespath-0.10.0-py2.py3-none-any.whl
joblib @ file:///Users/amit/Library/Caches/pypoetry/artifacts/28/41/26/7ed6532cff9d56b8d2878f93bc289c075f338c12aa7d630862aae39d45/joblib-1.0.1-py3-none-any.whl
jsonnet @ file:///Users/amit/Library/Caches/pypoetry/artifacts/60/4e/2d/acde747a02049d38e6dbda9dc3fbf64f03bf2e14c8e9ad04f07edcc66b/jsonnet-0.17.0.tar.gz
jsonschema @ file:///Users/amit/Library/Caches/pypoetry/artifacts/db/1d/66/ad84fa70cc987bd4aad68be808562321cdab3cb03f4d5d7714a0e0571c/jsonschema-3.2.0-py2.py3-none-any.whl
jupyter @ file:///Users/amit/Library/Caches/pypoetry/artifacts/bb/e8/12/09df1332820a1126a780ab09cec78d2f50457f79bcd0cb2fbb07b19ef4/jupyter-1.0.0-py2.py3-none-any.whl
jupyter-client @ file:///Users/amit/Library/Caches/pypoetry/artifacts/8c/14/4d/593d81015262d7306a1f89c900a11dfb7d2f18ee37f8ef17ba7ab983bf/jupyter_client-6.2.0-py3-none-any.whl
jupyter-console @ file:///Users/amit/Library/Caches/pypoetry/artifacts/4b/2f/b8/119f975fb811a5e911beacc7a1bb8f8e1154254fb33204e753131e7aca/jupyter_console-6.4.0-py3-none-any.whl
jupyter-core @ file:///Users/amit/Library/Caches/pypoetry/artifacts/91/8f/22/1377f102bb4478eb073c741714348b7cbb6518d221da9c232cfc5242b3/jupyter_core-4.7.1-py3-none-any.whl
jupyterlab-pygments @ file:///Users/amit/Library/Caches/pypoetry/artifacts/de/ef/fc/5883436de4b7865f082f7cba0e0e0ff5fbf229fe55d6e7d5431a6080f4/jupyterlab_pygments-0.1.2-py2.py3-none-any.whl
jupyterlab-widgets @ file:///Users/amit/Library/Caches/pypoetry/artifacts/26/96/3a/11068104cd5f33e8f5437f04e75fb2220aa927c47ae5f01a7477acc169/jupyterlab_widgets-1.0.0-py3-none-any.whl
lmdb @ file:///Users/amit/Library/Caches/pypoetry/artifacts/fd/60/fa/5d8bc278b7132dd8c6a50b32b0b4076f31080cee889a3e59167a93af64/lmdb-1.2.1-cp39-cp39-macosx_10_14_x86_64.whl
lxml @ file:///Users/amit/Library/Caches/pypoetry/artifacts/9e/44/6a/570737853888f173f84e160c5772c792bfd10ea0385a76c138c94b23fc/lxml-4.6.3-cp39-cp39-macosx_10_9_x86_64.whl
MarkupSafe @ file:///Users/amit/Library/Caches/pypoetry/artifacts/20/e4/29/5b1a93d4ee8437f01551437cffbb57ba6744c59796443ca99051473f75/MarkupSafe-2.0.1-cp39-cp39-macosx_10_9_x86_64.whl
matplotlib-inline @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d8/4c/e1/8b0adf0d076721db928e74aaaf5cf1d73cec52ea2a27ebabe3e9be8957/matplotlib_inline-0.1.2-py3-none-any.whl
mccabe @ file:///Users/amit/Library/Caches/pypoetry/artifacts/96/5e/5f/21ae5296697ca7f94de4da6e21d4936d74029c352a35202e4c339a4253/mccabe-0.6.1-py2.py3-none-any.whl
mistune @ file:///Users/amit/Library/Caches/pypoetry/artifacts/33/31/4c/2d69dc65d06d1c8f8b00b8e995e24bae97fce2e1f8ec5d8d2d98e852da/mistune-0.8.4-py2.py3-none-any.whl
more-itertools @ file:///Users/amit/Library/Caches/pypoetry/artifacts/47/d4/7d/526affa62eb0c76eec19004cbbf18cc4e55f51c665e92b88bb2ed25752/more_itertools-8.8.0-py3-none-any.whl
munch @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c3/f9/98/c46b861b1fe10f4d4fecd0ed8752a968be33d2c7e698b70589015aa0b2/munch-2.5.0-py2.py3-none-any.whl
murmurhash @ file:///Users/amit/Library/Caches/pypoetry/artifacts/03/a3/c8/88bb7a1608c5b641c88c47b5b0c9b5377ddbc103ca4574acf4af3f00b4/murmurhash-1.0.5-cp39-cp39-macosx_10_9_x86_64.whl
mypy @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ab/61/06/b62a10ea75c111856b223b2921035d273fd3d746110fde9ff4bb20fc0c/mypy-0.901-cp39-cp39-macosx_10_9_x86_64.whl
mypy-extensions @ file:///Users/amit/Library/Caches/pypoetry/artifacts/92/45/bf/1807ce854ff668d92602207a37bfa9316def2a3f257bd03c4c5be4bc9b/mypy_extensions-0.4.3-py2.py3-none-any.whl
nbclient @ file:///Users/amit/Library/Caches/pypoetry/artifacts/cd/06/72/d0468da165d742240bf6e43191ef3a942576956bf30426df2cba0ac668/nbclient-0.5.3-py3-none-any.whl
nbconvert @ file:///Users/amit/Library/Caches/pypoetry/artifacts/cd/57/5f/0aebcf8edff99a9ea65795c24b549346cdb70b238cf2c200de0bf86f0f/nbconvert-6.1.0-py3-none-any.whl
nbformat @ file:///Users/amit/Library/Caches/pypoetry/artifacts/36/a7/e7/e1a0c1c54f6151e23afd51bc71e3f6e0b24a96dd1e693b92dd9a4e4ab3/nbformat-5.1.3-py3-none-any.whl
nest-asyncio @ file:///Users/amit/Library/Caches/pypoetry/artifacts/51/1a/ee/4f904dc67e6f59f0c8b75ef59d0111e4fb57dd4a884ad19d289ab31a77/nest_asyncio-1.5.1-py3-none-any.whl
nltk @ file:///Users/amit/Library/Caches/pypoetry/artifacts/18/6d/ee/ffa73af7527056102cff0b73ffa10fc5a7ffa898f9214d546e6ec70b57/nltk-3.6.2-py3-none-any.whl
nodeenv @ file:///Users/amit/Library/Caches/pypoetry/artifacts/54/f0/a1/53d1e469f8e160bad013c23411375fd63d4ea70cda0fc649fcb244ca7e/nodeenv-1.6.0-py2.py3-none-any.whl
notebook @ file:///Users/amit/Library/Caches/pypoetry/artifacts/8a/53/13/86803f6ca277cd2b45ce841ae8843ab6eb51d45b1d3a8fe96eb2382a07/notebook-6.4.0-py3-none-any.whl
numpy @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d6/2b/b1/16975744b34a9b3ec7bd901b286178933b525ca493a93edd7441d3b807/numpy-1.21.0-cp39-cp39-macosx_10_9_x86_64.whl
orjson @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c8/c9/04/a1cc49302292dbbe34795ed7d5a881c50cf02f4f813f336f7493949870/orjson-3.6.0-cp39-cp39-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl
overrides @ file:///Users/amit/Library/Caches/pypoetry/artifacts/24/45/16/62e842b5cdff34f5106ee676232cbcc7d7a1333e4900d111bca737b13a/overrides-3.1.0.tar.gz
packaging @ file:///Users/amit/Library/Caches/pypoetry/artifacts/f9/4f/09/c91a145b26102e014fd6e33bd8c7b87306c8e1d4a771158f34dd13210e/packaging-21.0-py3-none-any.whl
pandocfilters @ file:///Users/amit/Library/Caches/pypoetry/artifacts/2b/8b/2b/6cd1e4385f3f7f98a25f05764a4ea3f2f20d1db00612ef79e25bb90fe9/pandocfilters-1.4.3.tar.gz
parso @ file:///Users/amit/Library/Caches/pypoetry/artifacts/36/cd/ab/a8c3a5df337bc6f34a10f3f385417b62cdfebe2873ac2fec38206af0db/parso-0.8.2-py2.py3-none-any.whl
pastel @ file:///Users/amit/Library/Caches/pypoetry/artifacts/da/84/f3/3e4d8b15eabeba62960ed9d3ccc1e30b7ae5f1b93e6c28d291c67eaf93/pastel-0.2.1-py2.py3-none-any.whl
pathspec @ file:///Users/amit/Library/Caches/pypoetry/artifacts/40/8b/b2/80a6945971d8475bcc04d09afec4845855dd74f68da6a4c18bbf8f7784/pathspec-0.8.1-py2.py3-none-any.whl
pathtools @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ce/ff/c7/31da76336d55d51d979a50868616c867c7b2ea6f2d2084b8c744726ae7/pathtools-0.1.2.tar.gz
patternfork-nosql @ file:///Users/amit/Library/Caches/pypoetry/artifacts/fa/53/d7/a04c2b1cd20312460da3f82ca634ac259fc581089956ed73763c0757cc/patternfork_nosql-3.6.tar.gz
pbr @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c7/6a/db/482c805b950fd5a0ece81e7a9a063cb1aa99169ca73fba511759c9db30/pbr-5.6.0-py2.py3-none-any.whl
pdfminer.six @ file:///Users/amit/Library/Caches/pypoetry/artifacts/21/c6/47/9aded02cbdd666be599af42cbfc0c6cf4ad847f177acf882ee11ff2f19/pdfminer.six-20201018-py3-none-any.whl
pep8-naming @ file:///Users/amit/Library/Caches/pypoetry/artifacts/a4/93/c7/a3b9b8b4aef682b4caa67015d897aff3d064860a460124ad8a23b6f45f/pep8_naming-0.11.1-py2.py3-none-any.whl
pexpect @ file:///Users/amit/Library/Caches/pypoetry/artifacts/5c/c2/43/b54fe59cab7e831df35401c8e6840162bf4a2ae5862604e7bc22db3000/pexpect-4.8.0-py2.py3-none-any.whl
pickleshare @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b5/48/a1/d2b823337003d531d87cf0d503ef28bb579703a74d14ad24a88863d616/pickleshare-0.7.5-py2.py3-none-any.whl
Pillow @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d7/25/9f/10bf1b0f46db0f11f87fcf319db8e4a8d0acd2326569efb025a0b8b7aa/Pillow-8.3.1-cp39-cp39-macosx_10_10_x86_64.whl
plac @ file:///Users/amit/Library/Caches/pypoetry/artifacts/a5/cb/88/6e55cfacecccbab7d6a03f0d05004c5a39c03a39265423788255714111/plac-1.1.3-py2.py3-none-any.whl
pluggy @ file:///Users/amit/Library/Caches/pypoetry/artifacts/29/58/fc/ed8b7451d3ef91a6465024f5656141da996e7aafd4d41a1659629a75e7/pluggy-0.13.1-py2.py3-none-any.whl
poethepoet @ file:///Users/amit/Library/Caches/pypoetry/artifacts/36/42/75/cc7c77f4b9ac69fd8470c7524490be4f0c722f6f7b98d14da11fc2774e/poethepoet-0.10.0-py3-none-any.whl
pokerface==0.1.0
portend @ file:///Users/amit/Library/Caches/pypoetry/artifacts/9e/0a/b1/aab32a1b4dbdac42ada198c9c4378651e9a6fba9698662d1e1838a7100/portend-2.7.1-py3-none-any.whl
pre-commit @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7b/f6/8d/c5425b5811fbbbe85e82cc7980de882a7b239984e2338412a964d62d8f/pre_commit-2.13.0-py2.py3-none-any.whl
preshed @ file:///Users/amit/Library/Caches/pypoetry/artifacts/f9/3c/c4/47684f5aebfe9d8a986343a71cbbdef4bcbc7494eb8d08857621978f33/preshed-3.0.5-cp39-cp39-macosx_10_9_x86_64.whl
prometheus-client @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ce/fa/c7/255d5bc58b6e1a7cc1e9d4eb3c5a993bea6af1aa016b5008147ab8beb7/prometheus_client-0.11.0-py2.py3-none-any.whl
promise @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d6/c6/43/95f1e737b1dd79d3a5ac6cfb264a889716bab4cd9d28a9bc8c69591d53/promise-2.3.tar.gz
prompt-toolkit @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c0/cc/91/2b2506bf1a53afc81d06682da5dce00b5f700a51ee2a8452dc8b98af15/prompt_toolkit-3.0.19-py3-none-any.whl
protobuf @ file:///Users/amit/Library/Caches/pypoetry/artifacts/88/20/9e/47da88de518b7b11435b39354a7b9a334acfae01916f4aee875b908765/protobuf-3.17.3-cp39-cp39-macosx_10_9_x86_64.whl
psutil @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c5/45/be/f328230784273b5b602263092a0788e3055d7fa032dc1dcb0b1583bcb9/psutil-5.8.0-cp39-cp39-macosx_10_9_x86_64.whl
ptyprocess @ file:///Users/amit/Library/Caches/pypoetry/artifacts/2a/29/5d/0cdc5ec916431d60f03d2f725c54edbfa9fe53700b75fdfee209a3291e/ptyprocess-0.7.0-py2.py3-none-any.whl
py @ file:///Users/amit/Library/Caches/pypoetry/artifacts/60/79/0b/c48bd9c2a989aa8b1eb7a67cd02b053c10734f2e4e5665f7995f09999c/py-1.10.0-py2.py3-none-any.whl
py-rouge @ file:///Users/amit/Library/Caches/pypoetry/artifacts/31/31/4f/cc7585fdf5aec32c5b688726f52c2238f959caba4f6a65950f1a932745/py_rouge-1.1-py3-none-any.whl
py-spy @ file:///Users/amit/Library/Caches/pypoetry/artifacts/23/40/09/d09dd2f0516f0d0e73fe6a8dcfc9aaa5c13ffc2ebe7361b237391b0625/py_spy-0.3.7-py2.py3-none-macosx_10_9_x86_64.whl
pyasn1 @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7b/3a/54/42ce43b579bda01b9d79022fb733811594441e7a32e9f9a5a98f672bdc/pyasn1-0.4.8-py2.py3-none-any.whl
pyasn1-modules @ file:///Users/amit/Library/Caches/pypoetry/artifacts/dd/b8/4f/b56433e0354274a31074995e01b8671751e9f0ed0001f5254e5b03a54f/pyasn1_modules-0.2.8-py2.py3-none-any.whl
pycodestyle @ file:///Users/amit/Library/Caches/pypoetry/artifacts/4c/30/97/026c283ef67eb248e5b7e6fad1f8ffb99dae50c11fd93eb939fd7c1f46/pycodestyle-2.7.0-py2.py3-none-any.whl
pycparser @ file:///Users/amit/Library/Caches/pypoetry/artifacts/37/8e/5a/0ea4f84bc7f11e0e3468110efa2c7783241ea7eaa63a92a751de06f78f/pycparser-2.20-py2.py3-none-any.whl
pydantic @ file:///Users/amit/Library/Caches/pypoetry/artifacts/96/6b/c5/3defa89e523826db389eb2be9b0b808835134ab64c68e8193da2ac7d51/pydantic-1.8.2-cp39-cp39-macosx_10_9_x86_64.whl
pydocstyle @ file:///Users/amit/Library/Caches/pypoetry/artifacts/75/e7/e5/1acad15a51efd39cf39259c7888c205fd787a92efea28f7afc5a9e315c/pydocstyle-6.1.1-py3-none-any.whl
pyflakes @ file:///Users/amit/Library/Caches/pypoetry/artifacts/eb/c4/2c/47fcc1b3f387b1f7033e85b3ac6ee7772338461a8de8ac3977c6a7dcc1/pyflakes-2.3.1-py2.py3-none-any.whl
Pygments @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d1/62/9b/10957d050758a3da079375961ec00a6c83b71a90eefd0199361d9d54de/Pygments-2.9.0-py3-none-any.whl
Pyment @ file:///Users/amit/Library/Caches/pypoetry/artifacts/e0/cc/65/a915b66c524613d67c0dfcd3b9e99a2916908d106a16a5909520b29036/Pyment-0.3.3-py2.py3-none-any.whl
pyparsing @ file:///Users/amit/Library/Caches/pypoetry/artifacts/92/0f/cf/effdcd5d76a6186df0969f85b3b030284ff8058936d5016540b5258ea3/pyparsing-2.4.7-py2.py3-none-any.whl
pyrsistent @ file:///Users/amit/Library/Caches/pypoetry/artifacts/a3/50/5c/55747fd13209ee597a3e45d1ad70eef133559516e470c05a47c29365b0/pyrsistent-0.18.0-cp39-cp39-macosx_10_9_x86_64.whl
pytest @ file:///Users/amit/Library/Caches/pypoetry/artifacts/17/a3/46/eb89acf91c8962553e409da649186ff5e3d2c1c93195f0643e7dfd1b57/pytest-6.2.4-py3-none-any.whl
python-dateutil @ file:///Users/amit/Library/Caches/pypoetry/artifacts/93/67/cf/49f56d9e954addcfc50e5ffc9faee013c2eb00c6d77d56c6a22cb33b54/python_dateutil-2.8.1-py2.py3-none-any.whl
python-docx @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7f/3f/b0/ca05b61dd6a8beb8bc8317700154416271ddda4db5425c92e9d780cba7/python-docx-0.8.11.tar.gz
python-dotenv @ file:///Users/amit/Library/Caches/pypoetry/artifacts/a6/61/71/a8300bb6be27750f8810f5d2a0c070e220ebc1a416f0837d4bbc283391/python_dotenv-0.17.1-py2.py3-none-any.whl
pytz @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b0/a7/8d/54de3ab4d1ff29abbbca1e9ccbaefdc2a1b290138311b84f73bee16de1/pytz-2021.1-py2.py3-none-any.whl
PyYAML @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b6/55/22/537845ea953a4d8d5006f11bdd1b03824425d7f809d5a7ae8efbbeab95/PyYAML-5.4.1-cp39-cp39-macosx_10_9_x86_64.whl
pyzmq @ file:///Users/amit/Library/Caches/pypoetry/artifacts/0f/06/79/957f3cfec70cf4f952ea59dd7a62281c8687323814b532588d3690e2ec/pyzmq-22.1.0-cp39-cp39-macosx_10_15_universal2.whl
qtconsole @ file:///Users/amit/Library/Caches/pypoetry/artifacts/88/04/9d/bfea17c2892fc9d54fe273876b3c28dc50389a75af0a43c4e5db123bbd/qtconsole-5.1.1-py3-none-any.whl
QtPy @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7b/28/5f/a53ed26195df09abacb0aa3383076185f0496f7dc9f7b496180c7316a1/QtPy-1.9.0-py2.py3-none-any.whl
regex @ file:///Users/amit/Library/Caches/pypoetry/artifacts/0a/f5/b9/ac8ceed381bfa6deb8be166808f27353d5565085d6b762baac40befdff/regex-2021.7.6-cp39-cp39-macosx_10_9_x86_64.whl
requests @ file:///Users/amit/Library/Caches/pypoetry/artifacts/22/0a/9d/0df883fbffbb406d0cddbb35e881e4ac6bfb8f0dee8733056b6a054bf7/requests-2.25.1-py2.py3-none-any.whl
restructuredtext-lint @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b6/09/80/91d176f17ba9a28291203e41600b294aa26214e185082bcb0cc3543588/restructuredtext_lint-1.3.2.tar.gz
rich @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d8/09/d1/c1e297bde3276fe5289c3fc9cf6804eb531e28372b527f854b1427b012/rich-10.5.0-py3-none-any.whl
rsa @ file:///Users/amit/Library/Caches/pypoetry/artifacts/27/21/1f/fea99b1c1766c11c2c47dd961d7773ebab5c6acbf730200bd2e021b836/rsa-4.7.2-py3-none-any.whl
s3transfer @ file:///Users/amit/Library/Caches/pypoetry/artifacts/8f/96/42/4ec7dc1795d747cb348df8b6aad3b471251863f3eab457ce33668cf8a1/s3transfer-0.4.2-py2.py3-none-any.whl
sacremoses @ file:///Users/amit/Library/Caches/pypoetry/artifacts/96/d8/48/cdb78bf884395d731e5af8316b4de4517cd3b3b2b7bd28ede180216c83/sacremoses-0.0.45-py3-none-any.whl
scikit-learn @ file:///Users/amit/Library/Caches/pypoetry/artifacts/04/83/07/6b7befd90140a96d92542597559789a8e6f99ebcd86366336d38472ea6/scikit_learn-0.24.2-cp39-cp39-macosx_10_13_x86_64.whl
scipy @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d1/03/cb/ba3c405831dab5ed08abcc721f597e4374beb9ac88f9b0a426d101b02b/scipy-1.6.1-cp39-cp39-macosx_10_9_x86_64.whl
Send2Trash @ file:///Users/amit/Library/Caches/pypoetry/artifacts/44/b5/57/aae8fa9005db0161e198bc50572d47f162988a2e2f3434bd4d8e6f78a0/Send2Trash-1.7.1-py3-none-any.whl
sentencepiece @ file:///Users/amit/Library/Caches/pypoetry/artifacts/0c/cd/fe/92b552fb5511ad2ab9906a6b85474f6a7b6312629fe9524f11bff2933d/sentencepiece-0.1.96-cp39-cp39-macosx_10_6_x86_64.whl
sentry-sdk @ file:///Users/amit/Library/Caches/pypoetry/artifacts/29/89/c1/911ea3ca1b49d945fd159b841827b63a00f57c439d6c0c26cb7aca5f7c/sentry_sdk-1.3.0-py2.py3-none-any.whl
sgmllib3k @ file:///Users/amit/Library/Caches/pypoetry/artifacts/48/41/c1/47c574e94f31057312eab350c2a7e7b75d1105eb5b673a14efe485c128/sgmllib3k-1.0.0.tar.gz
shortuuid @ file:///Users/amit/Library/Caches/pypoetry/artifacts/80/85/8d/5bdb9fbab8b4fc7bd9599a4982cac0ae2498f4c863d13869d4e1e7b722/shortuuid-1.0.1-py3-none-any.whl
six @ file:///Users/amit/Library/Caches/pypoetry/artifacts/08/9f/47/c16ae03035fc69eaf100ea39657a49baaeef714e25a52575710c34cd48/six-1.16.0-py2.py3-none-any.whl
smmap @ file:///Users/amit/Library/Caches/pypoetry/artifacts/fb/95/d9/27c304575d15e0faf1b64e46ec12611c0a683b4d9d6aa459850d5a77df/smmap-4.0.0-py2.py3-none-any.whl
snowballstemmer @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c7/56/66/7613028d4906686fd240574f9e4ec773d99d60753a515f163d21b44935/snowballstemmer-2.1.0-py2.py3-none-any.whl
sortedcontainers @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b9/80/e1/4bdfa349488797fd308ecbe48f4fad57a3245777fb47c8741730583262/sortedcontainers-2.4.0-py2.py3-none-any.whl
soupsieve @ file:///Users/amit/Library/Caches/pypoetry/artifacts/20/16/55/4a9893b172bb2a7815f46f6a947ff3506dd241ea679377ffdc0b2c811e/soupsieve-2.2.1-py3-none-any.whl
spacy @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7d/eb/24/0b844c7393780507707bd2faddc484095f21df0f6d0bc69be88d6d7137/spacy-2.3.7-cp39-cp39-macosx_10_9_x86_64.whl
srsly @ file:///Users/amit/Library/Caches/pypoetry/artifacts/4e/7f/ef/750f5466b050be2d76de8aaf076c3ffcdff5cd0a5af0cf11ef6ebea3c6/srsly-1.0.5-cp39-cp39-macosx_10_9_x86_64.whl
stevedore @ file:///Users/amit/Library/Caches/pypoetry/artifacts/c2/31/a7/c2802b19c1cba9407f4254c97dacd72884d3b27c63bbbb3ada4edbf3a8/stevedore-3.3.0-py3-none-any.whl
subprocess32 @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b9/91/2e/cc8d3ccbf05fa27ee73859de9d02ef1a7eba84ed701970db1063a1848d/subprocess32-3.5.4.tar.gz
tempora @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b9/f8/26/4304b7c3157148ded2a9f388c0c3e6b1c0016147ff5995928d8db2ce8b/tempora-4.1.1-py3-none-any.whl
tensorboardX @ file:///Users/amit/Library/Caches/pypoetry/artifacts/f6/18/d3/c562b4cfa8f42a7172c29cfea8ded406d87b3e79e6bcc0032dc3eace99/tensorboardX-2.4-py2.py3-none-any.whl
termcolor @ file:///Users/amit/Library/Caches/pypoetry/artifacts/a2/5d/c7/e4ccb3b3bb8d3e3aff995fb6ffb12cfc78bbc8affa283907ee5eb5a5a5/termcolor-1.1.0.tar.gz
terminado @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ac/6f/f6/464b3b1e95eff0b0b464b918cbcf0b6a2f81b4fda008b8570450a25a7a/terminado-0.10.1-py3-none-any.whl
testfixtures @ file:///Users/amit/Library/Caches/pypoetry/artifacts/87/49/bd/0bf2640dd53740f9dc21bb1eecb8093631a71ea4703c8ef1a3a7bdd42d/testfixtures-6.17.1-py2.py3-none-any.whl
testpath @ file:///Users/amit/Library/Caches/pypoetry/artifacts/1e/2d/08/76691a9e7e429930fb378dd96f760de96f2686841c47da2b35a04c5aad/testpath-0.5.0-py3-none-any.whl
thinc @ file:///Users/amit/Library/Caches/pypoetry/artifacts/71/1d/6d/f57a2fc5d6752b008f1583c84234c8909921813d7f863b65fac52afdc4/thinc-7.4.5-cp39-cp39-macosx_10_9_x86_64.whl
threadpoolctl @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7b/b5/49/550b4953bb841e92404d74b7c7671139fd8bbdcec26c2e89c1843fcb76/threadpoolctl-2.2.0-py3-none-any.whl
tokenizers @ file:///Users/amit/Library/Caches/pypoetry/artifacts/e3/67/87/626827d63f69f9c1357c553076ccf0ee9721c9764ef434fa3654531f3f/tokenizers-0.10.3-cp39-cp39-macosx_10_11_x86_64.whl
toml @ file:///Users/amit/Library/Caches/pypoetry/artifacts/6b/6a/c9/53b19f7870a77d855e8b05ecdc98193944e5d246dafe11bbcad850ecba/toml-0.10.2-py2.py3-none-any.whl
tomlkit @ file:///Users/amit/Library/Caches/pypoetry/artifacts/fd/06/32/b79e75623225a9b5af79899482b9c2933c2fa2c6fb0eff80fcec10ae48/tomlkit-0.7.2-py2.py3-none-any.whl
torch @ file:///Users/amit/Library/Caches/pypoetry/artifacts/15/c9/5c/5d6856af4e4b18034b2e50b2cc08ba37035276b823f21dfa97a2e454b5/torch-1.8.1-cp39-none-macosx_10_9_x86_64.whl
torchvision @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b2/82/34/8438fea281caa4f92140f0bdd1ff59ee8bc2e3e0b35d68bfc3d0d14111/torchvision-0.9.1-cp39-cp39-macosx_10_9_x86_64.whl
tornado @ file:///Users/amit/Library/Caches/pypoetry/artifacts/8a/d4/49/50c44b642217b81c4d63991cb2560ebb78029f8090bd4a23c1c11a4ac0/tornado-6.1-cp39-cp39-macosx_10_9_x86_64.whl
tqdm @ file:///Users/amit/Library/Caches/pypoetry/artifacts/1a/46/38/2897fecc5f3ff99d118d7dd77d749365e163113a712337b2b28837bedd/tqdm-4.61.2-py2.py3-none-any.whl
traitlets @ file:///Users/amit/Library/Caches/pypoetry/artifacts/f3/6b/36/998ab52c38eb1c4820cdef1e66043ddebb64e04535f88dbfd04486ce03/traitlets-5.0.5-py3-none-any.whl
transformers @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ae/3e/26/9a99e73cfd7d44463aad270e850034478e6258f45900175d268a0cd976/transformers-4.5.1-py3-none-any.whl
types-orjson @ file:///Users/amit/Library/Caches/pypoetry/artifacts/e3/ac/f5/782a3257b6f17101317a753b694ce3dffc227c7eee054f8633cbb18700/types_orjson-0.1.1-py2.py3-none-any.whl
typing-extensions @ file:///Users/amit/Library/Caches/pypoetry/artifacts/3d/38/26/2c9b521373bbaf207e658ec81f51aa2a8af7454bfe4d7c15743a6533d5/typing_extensions-3.10.0.0-py3-none-any.whl
urllib3 @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d8/4b/3f/9e8027e7f15b2f99244ad505328c3cf87912ad87446c1c8e89efacf731/urllib3-1.25.11-py2.py3-none-any.whl
virtualenv @ file:///Users/amit/Library/Caches/pypoetry/artifacts/13/ab/72/1f00c98674e32cc5fcae271032c164917dfd53be7c009e119f1cf47f8d/virtualenv-20.4.7-py2.py3-none-any.whl
wandb @ file:///Users/amit/Library/Caches/pypoetry/artifacts/49/f4/c8/324b20beeceb351e72c821219d999d460442c4b4ff903122f29979ab5e/wandb-0.10.33-py2.py3-none-any.whl
wasabi @ file:///Users/amit/Library/Caches/pypoetry/artifacts/d5/9d/af/58d834e926bfc5371fb8208596bdec3d5824083600c5681f98ce0790d7/wasabi-0.8.2-py3-none-any.whl
wcwidth @ file:///Users/amit/Library/Caches/pypoetry/artifacts/7d/f4/60/0737157bb9711fec72c70dff523aa54491eef317e0d586cf5388ff0908/wcwidth-0.2.5-py2.py3-none-any.whl
webencodings @ file:///Users/amit/Library/Caches/pypoetry/artifacts/ed/d4/da/61384706cfac042ba3bd148746d66e50695463993be117c7c8dadeef7a/webencodings-0.5.1-py2.py3-none-any.whl
wemake-python-styleguide @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b2/30/6d/aff16dd6cc6e7169bd34e8f2e8feff71a3cc593d6ebb617acc9cf7e927/wemake_python_styleguide-0.15.3-py3-none-any.whl
widgetsnbextension @ file:///Users/amit/Library/Caches/pypoetry/artifacts/eb/b0/c5/e9e106309ddf8d2cbebcd3c9f2c2be8c7c7346f58d8ff7ace4196371d8/widgetsnbextension-3.5.1-py2.py3-none-any.whl
word2number @ file:///Users/amit/Library/Caches/pypoetry/artifacts/91/7b/91/fd4e6b1580eb2a2f0bb8b725ba137628acb0adb21522a3ff9d69e6f5e1/word2number-1.1.zip
zc.lockfile @ file:///Users/amit/Library/Caches/pypoetry/artifacts/b5/a8/c8/e94e98335e585be92e35e5d07dd8a75e5c2e7774c8bd24410160f9cfe0/zc.lockfile-2.0-py2.py3-none-any.whl
```
</p>
</details>
## Steps to reproduce
I wasn't sure how to do this step but I will if I'm not making any sense and you'd like me to.
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
```
</p>
</details>
| 2hard
|
Title: [Bug]: Checkpoint / LoRA gallery slows down the browser
Body: ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Since the last release, the checkpoint / LoRA galleries slow down Firefox to the point that Firefox shows a "This site slows down Firefox [stop]" banner at the top of the page, if you have many checkpoints/LoRA.
### Steps to reproduce the problem
1. Have many checkpoints and/or LoRA
2. Click the Checkpoints or LoRa tab
### What should have happened?
Before the last release, it had no noticeable delay. Now it takes quite some time until the tab switches, the webui becomes unresponsive, and Firefox warns that the site is slowing down the browser.
I wonder if it previously had some kind of lazy-loading in place that was since removed, or something similar. I do not even need to scroll the list; changing the tab from Generation to Lora alone is enough to slow down the browser before I even see the first rectangle. Most Lora do not have preview images, only the standard grey placeholder.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
(can add later if needed as I would need to remove sensitive information)
### Console logs
```Shell
(I don't see anything relevant)
```
### Additional information
_No response_ | 1medium
|
Title: Is it able to support RNN models like LSTM? (3D array input)
Body: It is possible to adapt this set of dashboards for RNN-based models, such as LSTM networks.
The main problem I am encountering is in the input dataset which, being pre-processed, does not have the traditional linear structure, but is produced by the application of window-based techniques, for example.
The INPUT it's a 3D array

| 2hard
|
Title: Zero-shot results of CN-CLIP ViT-B/16 on the Flickr30K-CN dataset cannot be reproduced
Body: Hello, using your open-sourced CN-CLIP ViT-B/16 model, the zero-shot results I reproduce on MUGE are close to the ones you report:

However, the results on Flickr30K-CN are far off. I'm not sure where the problem is; could you please take a look? Thanks.

| 1medium
|
Title: Dependency parsing: is there some way to discourage multiple `nsubj` dependents?
Body: A very weird English tree produced by Stanza 1.6.0 in the [demo](http://stanza.run/):
> My cousin my extremely rude colleague admired last year chewed the chicken enthusiastically.
<img width="1171" alt="image" src="https://github.com/stanfordnlp/stanza/assets/985263/a6468f67-9d09-4ad5-8f3e-4cbc92d08e6b">
In UD, no word is allowed to have multiple (plain) `nsubj` dependents. But "admired" has two.
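For reference, a minimal repro sketch (processor list assumed; any depparse-capable English pipeline should behave similarly):
```python
import stanza

# stanza.download("en")  # first run only
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")
doc = nlp(
    "My cousin my extremely rude colleague admired last year "
    "chewed the chicken enthusiastically."
)
for word in doc.sentences[0].words:
    # Inspect deprels; the reported tree gives "admired" two nsubj dependents.
    print(word.id, word.text, word.deprel, "->", word.head)
```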
Is there a recommended alternate training or decoding method in Stanza that could avoid this sort of problem? | 2hard
|
Title: No demo assignment in topic-3
Body: [jupyter_english/topic03_decision_trees_kNN](https://github.com/Yorko/mlcourse.ai/tree/main/jupyter_english/topic03_decision_trees_kNN) and [jupyter_french/topic03_decision_trees_kNN](https://github.com/Yorko/mlcourse.ai/tree/main/jupyter_french/topic03_decision_trees_kNN) has no .ipynb files for the demo assignment and solution to it.
Can you add them to the repo? | 1medium
|
Title: Add tutorial notebook to docs.cleanlab.ai
Body: Make tutorial notebook appear on docs.cleanlab.ai (and verify it looks good on the website) by adding it to:
1) https://github.com/cleanlab/cleanlab/blob/master/docs/source/tutorials/index.rst
2) https://github.com/cleanlab/cleanlab/blob/master/docs/source/index.rst | 1medium
|
Title: No layer style UI for ee.ImageCollection
Body: ### Environment Information
Colab
Mon Mar 25 21:27:30 2024 UTC
OS: Linux | CPU(s): 2 | Machine: x86_64 | Architecture: 64bit | RAM: 12.7 GiB | Environment: IPython
Python: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
geemap: 0.32.0 | ee: 0.1.394 | ipyleaflet: 0.18.2 | folium: 0.14.0 | jupyterlab: Module not found | notebook: 6.5.5 | ipyevents: 2.0.2 | geopandas: 0.13.2
### Description
The layer visualization editing tool is not available when the input to `Map.add_layer` is an `ee.ImageCollection`. If an image collection is passed, I believe `.mosaic()` is called so that image tiles can be requested for viewing. Since we're visualizing an `ee.Image` object, we should be able to set and apply visualization parameters (consistent with Code Editor behavior).
### What I Did
1. Add an `ee.ImageCollection` object to a `Map` with `add_layer`
2. Display the map and click the layer gear icon to set visualization properties (no tool appears; "vis params are uneditable")
3. Mosaic the `ee.ImageCollection` first and then add to a map
4. Display the new map and click layer gear icon to set visualization properties (tool appears and WAI).
```py
# [NEW CELL] Imports and Earth Engine init (assumes prior authentication).
import ee
import geemap

ee.Initialize()

# [NEW CELL] Construct an ee.ImageCollection.
col = ee.ImageCollection([ee.Image(1).byte(), ee.Image(10).byte()])

# [NEW CELL] Display the ee.ImageCollection.
m0 = geemap.Map()
m0.add_layer(col)
m0

# [NEW CELL] Display the .mosaic() of the ee.ImageCollection.
m1 = geemap.Map()
m1.add_layer(col.mosaic())
m1
```

| 1medium
|
Title: Making django_db_blocker un-opt-outable is zealous and lame
Body: Use of `django_db_blocker` to cripple use of the database outside of specially tagged tests and fixtures interferes with Django model `get_or_create` calls at import time, which is how some systems ensure the presence of fundamental data across environments.
Making this behavior not configurable is disruptive and annoying. It results in the need to patch the patcher, a la
```
# settings/test.py
from unittest import mock

from pytest_django import plugin

plugin._blocking_manager = mock.MagicMock()
```
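In the meantime, a hedged sketch of the documented escape hatch (the fixture names are real pytest-django fixtures; the app and model names are illustrative):
```python
# conftest.py
import pytest


@pytest.fixture(autouse=True, scope="session")
def seed_fundamental_data(django_db_setup, django_db_blocker):
    # Run the import-time-style get_or_create setup with blocking lifted.
    with django_db_blocker.unblock():
        from myapp.models import Setting  # illustrative

        Setting.objects.get_or_create(name="default")
```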
Please consider a means to disable this behavior through settings. | 1medium
|
Title: Swarmplot and markers
Body: I am exploring the `swarmplot` functionality provided by seaborn. I tried to follow [stackoverflow suggestions](https://stackoverflow.com/questions/52878845/swarmplot-with-hue-affecting-marker-beyond-color) on changing the default marker for groups, but without success.
Looking at the documentation of the functionality provided by `swarmplot`, it says "additional parameters are passed to the `scatter` functions giving as an example:
```python
import seaborn as sns
tips = sns.load_dataset("tips")
sns.swarmplot(
    data=tips, x="total_bill", y="day",
    marker="x", linewidth=1,
)
```
However, combining this with e.g. `hue` does not give the expected result (and I found a hint in #1595 that it is also not possible):
```python
sns.swarmplot(
    data=tips, x="total_bill", y="day",
    marker="x", linewidth=1,
    hue='sex',
)
```

I would have expected that the crosses would now be in blue and orange, but this is not the case. Happy to have a look at the code or update the documentation, but a hint where to start would be great. A workaround sketch I'm experimenting with follows below.
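A hedged workaround sketch (not an official seaborn feature): draw each hue level with its own call, so each call can carry its own marker. Note that each call computes its own swarm, so points from different levels may overlap:
```python
import matplotlib.pyplot as plt
import seaborn as sns

tips = sns.load_dataset("tips")
fig, ax = plt.subplots()
for level, marker, color in [("Male", "x", "C0"), ("Female", "o", "C1")]:
    sns.swarmplot(
        data=tips[tips["sex"] == level], x="total_bill", y="day",
        marker=marker, color=color, linewidth=1, ax=ax,
    )
``` | 2hard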
|
Title: Form validation errors on Sign Up and Login pages
Body: I found a bug where the frontend errors out when you press the submit button on empty fields for the login and sign up pages.
Here is a gif of the error for the sign up page. The same error occurred with login page as well.

| 1medium
|
Title: Namespaces should be lazy registerable
Body: As @frol suggested, for big APIs split across multiple files, `Namespace` should be the only class you need to import.
Like Blueprints in Flask, it should provide all the methods needed so that a file does not require the API instance, and it should be lazily registerable.
The target is being able to write:
``` python
# feature.py
from flask_restplus import Namespace, Resource
ns = Namespace('feature', 'Some namespace')
model = ns.model('Model', {})
@ns.route('/')
class MyFeature(Resource):
    @ns.marshal_with(model)
    def get(self):
        pass
```
``` python
# api.py
from flask_restplus import Api
from feature import ns
api = Api()
api.register_namespace(ns)
```
| 1medium
|
Title: Inconsistent results in using torch.jit.script API from API documentation.
Body: ### 🐛 Describe the bug
The test expects an `AttributeError` to be raised, since `ignored_method` is skipped during compilation according to the API documentation.
```python
def test_script_module_with_ignored_method(self):
    class IgnoredMethodModule(nn.Module):
        def forward(self, x):
            return x * 2

        @torch.jit.ignore
        def ignored_method(self, x):
            return x * 3

    module = IgnoredMethodModule()
    scripted_module = torch.jit.script(module)
    input_tensor = torch.tensor(5)
    self.assertEqual(scripted_module(input_tensor), module(input_tensor))
    # Ensure ignored method is not part of the scripted module
    with self.assertRaises(AttributeError):
        scripted_module.ignored_method(input_tensor)
```
```
======================================================================
FAIL: test_script_module_with_ignored_method (__main__.TestTorchJitScript)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/projects/api_guided_testgen/out/bug_detect_gpt4o/exec/basic_rag_apidoc/torch/torch.jit.script.py", line 70, in test_script_module_with_ignored_method
scripted_module.ignored_method(input_tensor)
AssertionError: AttributeError not raised
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2224G CPU @ 3.50GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU max MHz: 4700.0000
CPU min MHz: 800.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_2
[conda] mkl_fft 1.3.11 py39h5eee18b_0
[conda] mkl_random 1.2.8 py39h1128e8f_0
[conda] numpy 2.0.1 py39h5f9d8c6_1
[conda] numpy-base 2.0.1 py39hb5e798b_1
[conda] pytorch 2.5.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.5.0 py39_cpu pytorch
[conda] torchvision 0.20.0 py39_cpu pytorch
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | 1medium
|
Title: Smaller mark width for overlapping data with `so.Hist(common_bins=False)`
Body: When doing a `so.Hist(common_bins=False)`, if the bins for each group overlap, the width calculated for each mark is smaller than it should be.
Here's a minimal working example, where I have a dataset `A`, and its x-shifted version `B = A + shift`. In each row, I'm plotting a different shift, and when they start overlapping, the bar width is smaller than the bin width.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn.objects as so
def plot(ax, shift: float):
    data = np.random.default_rng(0).normal(size=50)
    df = pd.DataFrame({"A": data, "B": data + shift}).melt()
    return (
        so.Plot(df, x="value", color="variable")
        .add(so.Bars(), so.Hist(common_bins=False))
        .on(ax)
        .plot()
    )

shifts = [4.5, 4, 3.9, 3.8]
fig, axes = plt.subplots(len(shifts), sharex=True, gridspec_kw={"hspace": 0})
for ax, shift in zip(axes, shifts):
    plot(ax, shift)
    ax.set(ylabel=f"{shift = }")
```
I could trace it to this width calculation:
https://github.com/mwaskom/seaborn/blob/b4e5f8d261d6d5524a00b7dd35e00a40e4855872/seaborn/_core/plot.py#L1453
which ends up running the following line for all groups as one:
https://github.com/mwaskom/seaborn/blob/b4e5f8d261d6d5524a00b7dd35e00a40e4855872/seaborn/_core/scales.py#L467
If the bin edges are `[0, 1, 2]` and `[0.5, 1.5, 2.5]` for each group, it calculates the bin width from `[0, 0.5, 1, 1.5, ...]` and finds a width of 0.5 instead of a width of 1.
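A quick numeric sketch of that effect (my reconstruction, not the exact seaborn code):
```python
import numpy as np

edges_a = np.array([0.0, 1.0, 2.0])   # group A bin edges
edges_b = np.array([0.5, 1.5, 2.5])   # group B bin edges

print(np.diff(edges_a).min())  # 1.0 -- the per-group bin width

# Pooling both groups before computing the spacing halves it:
pooled = np.sort(np.concatenate([edges_a, edges_b]))
print(np.diff(pooled).min())   # 0.5 -- the too-narrow bar width seen in the plot
```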
Maybe this is not a bug but something by design when there is overlap between marks?
In case it is a bug, I could contribute a fix, but would probably need some direction as to where to fix it.
Thanks! | 1medium
|
Title: Got ImportError when importing databricks.koalas
Body: ImportError Traceback (most recent call last)
Cell In[8], line 4
2 import numpy as np
3 from collections.abc import Iterable
----> 4 import databricks.koalas as ks
5 import matplotlib.pyplot as plt
7 from pyspark import SparkContext, SparkConf
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\databricks\koalas\__init__.py:74
66 raise RuntimeError(
67 "Please explicitly unset 'ARROW_PRE_0_15_IPC_FORMAT' environment variable in both "
68 "driver and executor sides. It is required to set this environment variable only "
69 "when you use pyarrow>=0.15 and pyspark<3.0."
70 )
73 from databricks.koalas.frame import DataFrame
---> 74 from databricks.koalas.indexes import Index, MultiIndex
75 from databricks.koalas.series import Series
76 from databricks.koalas.typedef import pandas_wraps
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\databricks\koalas\indexes.py:51
49 from databricks.koalas.frame import DataFrame
50 from databricks.koalas.missing.indexes import _MissingPandasLikeIndex, _MissingPandasLikeMultiIndex
---> 51 from databricks.koalas.series import Series, _col
52 from databricks.koalas.utils import (
53 compare_allow_null,
54 compare_disallow_null,
(...)
61 validate_bool_kwarg,
62 )
63 from databricks.koalas.internal import _InternalFrame, NATURAL_ORDER_COLUMN_NAME
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\databricks\koalas\series.py:22
20 import re
21 import inspect
---> 22 from collections import Iterable, OrderedDict
23 from functools import partial, wraps, reduce
24 from typing import Any, Generic, List, Optional, Tuple, TypeVar, Union
ImportError: cannot import name 'Iterable' from 'collections' (C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\collections\__init__.py)
```
 | 1medium
|
Title: Monitor tensors every batch
Body: Default epoch monitors for training are quite useful.
Could you give an example of how to monitor one tensor's value **every batch** (and write it to the TensorBoard file)?
Thanks a lot.
| 1medium
|
Title: Remove folder from core integrations https://github.com/deepset-ai/haystack-core-integrations/tree/main/nodes
Body: | 1medium
|
Title: Cannot load the dataset go_emotions
Body: ### Describe the bug
When I run the following code I get an exception:
`go_emotions = load_dataset("go_emotions")`
```
AttributeError                            Traceback (most recent call last)
Cell In[6], line 1
----> 1 go_emotions = load_dataset("go_emotions")
      2 data = go_emotions.data

File c:\Users\hijik\anaconda3\Lib\site-packages\datasets\load.py:2523, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2518 verification_mode = VerificationMode(
   2519     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   2520 )
   2522 # Create a dataset builder
-> 2523 builder_instance = load_dataset_builder(
   2524     path=path,
   2525     name=name,
   2526     data_dir=data_dir,
   2527     data_files=data_files,
   2528     cache_dir=cache_dir,
   2529     features=features,
   2530     download_config=download_config,
   2531     download_mode=download_mode,
   2532     revision=revision,
   2533     token=token,
   2534     storage_options=storage_options,
   2535     trust_remote_code=trust_remote_code,
   2536     _require_default_config_name=name is None,
...
File c:\Users\hijik\anaconda3\Lib\site-packages\datasets\utils\_dill.py:63
---> 63 if issubclass(obj_type, transformers.PreTrainedTokenizerBase):
   64     pklregister(obj_type)(_save_transformersPreTrainedTokenizerBase)
   66 # Unwrap `torch.compile`-ed functions

AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
```
### Steps to reproduce the bug
```
from datasets import load_dataset
go_emotions = load_dataset("go_emotions")
```
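A quick sanity check (my own suggestion) to see whether the problem is the `transformers` install rather than `datasets` itself:
```python
import transformers

print(transformers.__version__)
# In a healthy install this prints True; False would point at a broken
# or shadowed transformers package rather than a datasets bug.
print(hasattr(transformers, "PreTrainedTokenizerBase"))
```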
### Expected behavior
Should simply load the variable with the data from the file
### Environment info
- `datasets` version: 2.16.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | 1medium
|
Title: Foreign key relationships are not created when use_alter is used.
Body: **Migrated issue, originally created by tomkcook ([@tomkcook](https://github.com/tomkcook))**
The basic problem is that when a column is marked as ForeignKey and the use_alter option is set to True, the foreign key constraint is not created in the database. The database is postgresql 9.5.
```
$ pip freeze
alembic==0.8.7
click==6.6
Flask==0.11.1
Flask-Alembic==2.0.1
Flask-Login==0.3.2
Flask-SQLAlchemy==2.1
GeoAlchemy2==0.3.0
gunicorn==19.6.0
itsdangerous==0.24
Jinja2==2.8
Mako==1.0.4
MarkupSafe==0.23
pkg-resources==0.0.0
psycopg2==2.6.2
python-editor==1.0.1
SQLAlchemy==1.0.15
Werkzeug==0.11.11
```
My test application is attached. Briefly, I have these models:
```
from test import db
from sqlalchemy.dialects.postgres import UUID
from sqlalchemy.sql import func
class User(db.Model):
    id = db.Column(UUID, primary_key=True, default=func.uuid_generate_v1())
    name = db.Column(db.String)

class Post(db.Model):
    id = db.Column(UUID, primary_key=True, default=func.uuid_generate_v1())
    name = db.Column(db.String)
    owner_id = db.Column(UUID, db.ForeignKey('user.id', use_alter=True))
    owner = db.relationship("User", backref=db.backref('posts', lazy='dynamic'))

if __name__ == '__main__':
    me = User(name='me')
    db.session.add(me)
    db.session.add(Post(name='My Post', owner=me))
    db.session.commit()
```
When I generate a revision I get this:
```
""".
Revision ID: f4f67ed72eac
Revises:
Create Date: 2016-09-12 13:26:02.697400
"""
# revision identifiers, used by Alembic.
revision = 'f4f67ed72eac'
down_revision = None
branch_labels = ('default',)
depends_on = None
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.create_table('post',
        sa.Column('id', postgresql.UUID(), nullable=False),
        sa.Column('name', sa.String(), nullable=True),
        sa.Column('owner_id', postgresql.UUID(), nullable=True),
        sa.ForeignKeyConstraint(['owner_id'], ['test.user.id'], use_alter=True),
        sa.PrimaryKeyConstraint('id'),
        schema='test'
    )
    op.create_table('user',
        sa.Column('id', postgresql.UUID(), nullable=False),
        sa.Column('name', sa.String(), nullable=True),
        sa.PrimaryKeyConstraint('id'),
        schema='test'
    )
    ### end Alembic commands ###

def downgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('user', schema='test')
    op.drop_table('post', schema='test')
    ### end Alembic commands ###
```
As you can see, there is a ForeignKeyConstraint created. But when I run alembic upgrade to that version, the table in the database doesn't have any foreign key constraints:
```
test=> \d user
Table "test.user"
Column | Type | Modifiers
--------+-------------------+-----------
id | uuid | not null
name | character varying |
Indexes:
"user_pkey" PRIMARY KEY, btree (id)
test=> \d post
Table "test.post"
Column | Type | Modifiers
----------+-------------------+-----------
id | uuid | not null
name | character varying |
owner_id | uuid |
Indexes:
"post_pkey" PRIMARY KEY, btree (id)
```
If the `use_alter` flag is omitted, then foreign key constraints are created correctly.
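For anyone triaging, here is a stripped-down sketch (mine, untested; the connection URL is a placeholder) to check whether plain SQLAlchemy `metadata.create_all()` shows the same behavior outside of Alembic:
```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table, create_engine

metadata = MetaData()
user = Table('user', metadata, Column('id', Integer, primary_key=True))
post = Table(
    'post',
    metadata,
    Column('id', Integer, primary_key=True),
    Column('owner_id', Integer, ForeignKey('user.id', use_alter=True)),
)

# echo=True prints the DDL -- does an ALTER TABLE ... ADD CONSTRAINT appear?
engine = create_engine('postgresql://localhost/test', echo=True)
metadata.create_all(engine)
```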
----------------------------------------
Attachments: [test.tar.bz2](../wiki/imported_issue_attachments/387/test.tar.bz2)
| 1medium
|
Title: Disable javascript not working
Body: Hey there, I tried to disable the javascript running on the undetected_chromedriver but without luck.
This command does not work:
`self.chrome_options.add_argument('--disable-javascript')`
This alternative is not supported by undetected_chromedriver either:
```python
prefs = {'profile.default_content_setting_values': {
    'images': 2,
    'javascript': 2,
    'geolocation': 2,
    'popups': 2,
    'cookies': 2}}
self.chrome_options.add_experimental_option("prefs", prefs)
```
Not sure if there's another way to do it, but both methods above work on normal selenium chrome.
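One more avenue that might be worth trying — a sketch I haven't verified against undetected_chromedriver, though `Emulation.setScriptExecutionDisabled` is a real DevTools Protocol command and `execute_cdp_cmd` is inherited from Selenium's Chrome driver:
```python
import undetected_chromedriver as uc

driver = uc.Chrome()
# Flip the CDP-level switch before navigating; pages loaded afterwards
# should have script execution disabled.
driver.execute_cdp_cmd("Emulation.setScriptExecutionDisabled", {"value": True})
driver.get("https://example.com")
```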
Thank you. | 1medium
|
Title: [Suggestion] Add startup arguments to main
Body: `python main.py 5` should launch the web page directly | 0easy
|
Title: No license file
Body: A `license.md` file clarifies how your files may be edited and even forked, and it guides how pull requests against your main branch should be handled.
It should be added.
I can work on it. | 0easy
|
Title: Publish allennlp-server to pypi
Body: It's annoying that it's separate. It doesn't get proper test coverage this way. We should either support this feature properly, or not at all. | 1medium
|
Title: Missing scikit-image nightly wheels on MacOS
Body: We installed the nightlies in Dask CI (usually from here: https://pypi.anaconda.org/scientific-python-nightly-wheels/simple), but we are getting errors that the wheels can't be found there anymore (and they seem to be gone). Is this intended, and if so, is there another source? | 1medium
|
Title: Support of num_workers (multiprocessing) in map for IterableDataset
Body: ### Feature request
Currently, `IterableDataset` doesn't support setting `num_workers` in `.map()`, which results in slow processing here. Could we add support for it? Since `.map()` can be run in batched fashion (e.g., `batch_size` defaults to 1000 in `datasets`), it seems as doable for `IterableDataset` as for the regular `Dataset` (see the call-shape sketch below).
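For illustration, the call shape I'm after (a sketch: the file name and `text` column are placeholders, and the `num_proc` argument is exactly what's missing today):
```python
from datasets import load_dataset

ds = load_dataset("json", data_files="data.jsonl", split="train", streaming=True)

def tokenize(batch):
    # stand-in for real preprocessing
    return {"n_chars": [len(t) for t in batch["text"]]}

# Works today, but single-process:
ds = ds.map(tokenize, batched=True, batch_size=1000)

# Requested (hypothetical) API, mirroring the regular Dataset:
# ds = ds.map(tokenize, batched=True, batch_size=1000, num_proc=8)
```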
### Motivation
Improving data processing efficiency
### Your contribution
Testing | 1medium
|
Title: [BUG] Response validation fails when using authentication and custom session
Body: ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
Validating responses fails when an endpoint specifies a `security` parameter and a custom test client is passed as the `session` argument to `case.call_and_validate()`, such as in [this example from the documentation](https://schemathesis.readthedocs.io/en/stable/auth.html#custom-test-client-in-python-tests).
### To Reproduce
🚨 **Mandatory** 🚨: Steps to reproduce the behavior:
Running this test example with `pytest` should reproduce the issue:
```python
from typing import Annotated

from fastapi import FastAPI, Security
from fastapi.security import APIKeyHeader
from starlette_testclient import TestClient

import schemathesis

app = FastAPI()

@app.get("/", responses={200: {"model": {}}, 403: {"model": {}}})
def root(api_key: Annotated[str, Security(APIKeyHeader(name="x-api-key"))]):
    return {"message": "Hello world"}

schemathesis.experimental.OPEN_API_3_1.enable()
schema = schemathesis.from_asgi("/openapi.json", app)

@schema.parametrize()
def test_api(case):
    client = TestClient(app)
    case.call_and_validate(session=client)
```
This gives me the following error:
```
FAILED tests/test_schema.py::test_api[GET /] - requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7c2e41ffa6...
```
This is the OpenAPI schema:
```json
{
  "openapi": "3.1.0",
  "info": {
    "title": "FastAPI",
    "version": "0.1.0"
  },
  "paths": {
    "/": {
      "get": {
        "summary": "Root",
        "operationId": "root__get",
        "responses": {
          "200": {
            "description": "Successful Response",
            "content": {
              "application/json": {
                "schema": {}
              }
            }
          },
          "403": {
            "description": "Forbidden"
          }
        },
        "security": [
          {
            "APIKeyHeader": []
          }
        ]
      }
    }
  },
  "components": {
    "securitySchemes": {
      "APIKeyHeader": {
        "type": "apiKey",
        "in": "header",
        "name": "x-api-key"
      }
    }
  }
}
```
### Expected behavior
The test given above should pass
### Environment
```
- OS: Linux
- Python version: 3.12
- Schemathesis version: 3.34.1
- Spec version: 3.1.0
```
### Additional context
I believe the issue is in this check: https://github.com/schemathesis/schemathesis/blob/master/src/schemathesis/specs/openapi/checks.py#L351 The check is creating a `requests.Session` which will lead to an actual HTTP call. It should be using the specified session instead.
| 1medium
|
Title: Command & response translations/Internationalization
Body: ### I am...
* [ ] Reporting a bug
* [X] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 4.3.5
* OS version: Fedora 25
* Python version: 3.5.2
* Using a virtual environment: yes
### Suggested Feature...
Could we add some kind of decorator that lets us set up a command alias or translation?
In a multi-language environment it would be useful to have a simple option to translate command names.
Additionally, it could be useful to make a command's output translatable based on the user or MUC preference (rough sketch of the kind of API I mean below).
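Something like this purely hypothetical sketch — neither `aliases` nor `self._` exist in Errbot today:
```python
from errbot import BotPlugin, botcmd

class Greeter(BotPlugin):
    @botcmd(aliases=("hola", "bonjour"))   # hypothetical: translated command names
    def hello(self, msg, args):
        # hypothetical gettext-style lookup keyed on the sender's locale
        return self._("Hello!")
```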
Or maybe, as an initially simple approach, let us run multiple instances of the same bot in different languages. I think this is not-so-hard(tm) for the responses part, but it still requires some work on command name and parameter translations. | 1medium
|
Title: Need delete widget option
Body: Doesn't exist.
| 0easy
|
Title: Screen flashes at each step
Body: This has always been an issue for me. Each time I step through a line of code, the screen flashes. If the line takes a long time to complete, all I can see is my original prompt (bash or IPython or whatever). This is annoying when stepping through code, because the flashing is disorienting, and it's also annoying when executing a long line of code, because I can no longer see what is executing until it is done. This also happens a lot if there are print statements, as the statement is printed at the "original prompt".
Is this easily fixable, or is it an Urwid issue? I haven't looked at the code yet, but maybe just the screen redrawing commands need to be reorganized a little bit. Can you also reproduce this?
| 1medium
|
Title: loops
Body: I can't figure these out:
Find the ten most spoken languages from the data
Find the 10 most populated countries in the world | 1medium
|
Title: How to combine two models to a single pb file?
Body: Dear,
My goal is to add more data to the pre-trained model to increase its accuracy.
1. I froze the graph to get the first model, named **pre-trained-model.pb**
2. Then I trained another model on my own images, following [this topic](https://github.com/davidsandberg/facenet/wiki/Classifier-training-of-inception-resnet-v1), named **new-model.pb**
How can I merge these two models into one **final-model.pb**? Or can I feed the first model into the second to produce a final single model (.pb)? | 1medium
|
Title: Error when running weixin.py
Body: ImportError: No module named requests_toolbelt.multipart.encoder | 1medium
|
Title: Unfavorable language? How to change the language fully to english !?
Body: Dear devs, it is a pleasure to use such an incredible app, but some things need to be resolved. The GUI is quite good, but many settings are written in Chinese, which is unfavorable to many users. Please consider translating the whole interface into English; it would be greatly appreciated!
Looking forward to your response.
Thanks
| 0easy
|
Title: 【求助】有关于判断done_geetest()
Body: **Python 版本:** 3.10.7
**模块版本:** 15.0.0
**运行环境:** Windows
---
我是一个初学者,正在尝试写一个我自己的bilibili相关的想法来锻炼自己的能力
我参考本库的文档设计了一个窗口,其他一切正常,但我不知道如何在主函数中判断done_geetest()来关闭极验的窗口随后获取验证码.
我一开始想到的就是使用while true来循环判断并根据if来break,但很显然这会导致线程阻塞并使窗口崩溃
随后我想到使用async进行异步处理,但相关的教程过于繁琐导致我找不到实际需要的部分
我也进行了一些尝试,但实际上还是阻塞了
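For reference, the closest non-blocking idea I have (untested; `gt` stands in for whatever object exposes `done_geetest()`):
```python
import asyncio

async def wait_for_geetest(gt, interval: float = 0.5):
    # Poll without blocking: await asyncio.sleep() yields control back to the
    # event loop that keeps the window responsive, unlike a bare while True.
    while not gt.done_geetest():
        await asyncio.sleep(interval)
    # geetest is done here -> safe to close the window and read the result
```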
Any help from more experienced users would be greatly appreciated.
| 1medium
|
Title: Dvc commit operation is too slow
Body: # Bug Report
## Description
I use git and DVC to manage my training datasets, which consist of thousands of jsonl files.
After I modify several jsonl files, I run `dvc status && dvc commit`. The `dvc status` operation completes quickly (I know DVC only hashes a file once until it gets modified; here only several jsonl files were modified, so `dvc status` costs little). However, the `dvc commit` operation takes a long time.
While `dvc commit` is executing, I see lots of "Checking out xxx/xxx/xxx.jsonl" messages in the terminal, and I believe those jsonl files were not modified. Why does DVC need to check out files that were not modified?
### Expected
Assuming two files `a.jsonl` and `b.jsonl` are modified, I think `dvc commit` should be equivalent to `dvc add a.jsonl b.jsonl`. However, it seems that `dvc commit` checks out all files tracked by DVC.
I expect the `dvc commit` operation to skip files which are not modified, so it can complete quickly.
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.55.2 (pip)
-------------------------
Platform: Python 3.9.19 on Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.31
Subprojects:
dvc_data = 3.16.5
dvc_objects = 5.1.0
dvc_render = 1.0.2
dvc_task = 0.4.0
scmrepo = 3.3.7
Supports:
http (aiohttp = 3.10.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.10.5, aiohttp-retry = 2.8.3),
s3 (s3fs = 2024.6.1, boto3 = 1.35.7)
Config:
Global: /mnt/afs/jiangtan/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: fuse.quarkfs_client on quarkfs_client
Caches: local
Remotes: s3
Workspace directory: fuse.quarkfs_client on quarkfs_client
Repo: dvc, git
Repo.site_cache_dir: /path/to/repo/.dvc/site_cache_dir/repo/eddf3641719990517f0cfc808ea33376
```
| 1medium
|
Title: blocks and graphs don't work
Body: ### ⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
### Which Operating System are you using?
Windows
### Which version of AutoGPT are you using?
Latest Release
### What LLM Provider do you use?
Azure
### Which area covers your issue best?
Installation and setup
### What commit or version are you using?
13da8af170602005b7a51ae527c388758825ed15
### Describe your issue.
After opening the AutoGPT frontend page, my blocks and graphs do not work.
### Upload Activity Log Content
(venv) PS G:\Stable-diffusion\AutoGPT\autogpt_platform\frontend> npm run dev
> [email protected] dev
> next dev
▲ Next.js 14.2.13
- Local: http://localhost:5151
- Environments: .env.local
- Experiments (use with caution):
· instrumentationHook
✓ Starting...
○ Compiling /instrumentation ...
✓ Compiled /instrumentation in 3.2s (1256 modules)
✓ Ready in 7.9s
✓ Compiled /src/middleware in 353ms (562 modules)
○ Compiling / ...
✓ Compiled / in 14.8s (7445 modules)
Supabase auth error: AuthSessionMissingError: Auth session missing!
at eval (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:885:59)
at SupabaseAuthClient._useSession (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:787:26)
at async SupabaseAuthClient._getUser (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:877:20)
at async eval (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:864:20)
at async eval (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:732:28) {
__isAuthError: true,
status: 400,
code: undefined
}
GET / 200 in 15947ms
⚠ The "images.domains" configuration is deprecated. Please use "images.remotePatterns" configuration instead.
○ Compiling /login ...
✓ Compiled /login in 4.9s (8595 modules)
Supabase auth error: AuthSessionMissingError: Auth session missing!
at eval (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:885:59)
at SupabaseAuthClient._useSession (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:787:26)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async SupabaseAuthClient._getUser (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:877:20)
at async eval (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:864:20)
at async eval (webpack-internal:///(rsc)/./node_modules/@supabase/auth-js/dist/module/GoTrueClient.js:732:28) {
__isAuthError: true,
status: 400,
code: undefined
}
GET /login?_rsc=v3pub 200 in 191ms
○ Compiling /signup ...
✓ Compiled /signup in 1627ms (8591 modules)
GET / 200 in 63ms
POST /signup 303 in 838ms
○ Compiling /build ...
✓ Compiled /build in 3.4s (9538 modules)
### Upload Error Log Content
_No response_ | 1medium
|