text (string) | labels (class label, 4 classes)
---|---|
Title: CPU uses only efficiency cores
Body: Hello,
I recently upgraded my computer from an i5 9400f to an i9 12900k.
Before the upgrade (i5 9400f), DeepFaceLab used my CPU at around 100%; after upgrading to the i9, DeepFaceLab uses only the efficiency cores and not the performance cores.

I tried updating to the latest version of DeepFaceLab, but the issue appeared again.
Windows 10 Pro
| 1medium
|
Title: `fastui-bootstrap` allow more customisation
Body: `fastui-bootstrap` should accept functions matching `CustomRender` and `ClassNameGenerator` for those two hooks respectively, so you can use `fastui-bootstrap` while still overriding some components. | 1medium
|
Title: Is it possible to use classic mapping?
Body: SQLAlchemy allows the user to use classical mappings - http://docs.sqlalchemy.org/en/rel_1_0/orm/mapping_styles.html#classical-mappings
But how can I use classical mappings when using flask-sqlalchemy?
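For reference, this is the kind of classical mapping I mean (a minimal sketch with a made-up `users` table, using plain SQLAlchemy rather than flask-sqlalchemy):
```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.orm import mapper

metadata = MetaData()

# Plain table definition, kept separate from the class
user_table = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)


class User(object):
    def __init__(self, name):
        self.name = name


# Classical mapping: explicitly map the class to the table
mapper(User, user_table)
```
The question is where the metadata and the mapper call should live when Flask-SQLAlchemy owns the engine and session.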
| 1medium
|
Title: 🧑💻 Add nginx conf for upload in dev mode
Body: ## Feature Request
Add an nginx conf for upload in local dev mode working with docker-compose.
We have two ways to develop locally: with `Tilt` (k8s stack) and with `docker-compose` (docker-compose stack). The image upload process works with Tilt but not with the docker-compose stack.
## Code
On Tilt dev:
https://github.com/numerique-gouv/impress/blob/67a20f249e33ffbea326f2be825e085847c34331/src/helm/env.d/dev/values.impress.yaml.gotmpl#L107-L119
Adapt this file to use the same conf:
https://github.com/numerique-gouv/impress/blob/main/docker/files/etc/nginx/conf.d/default.conf
----
See: #118
| 1medium
|
Title: CI test darwin://python/ray/tests:test_placement_group_3 is consistently_failing
Body: CI test **darwin://python/ray/tests:test_placement_group_3** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge-macos/builds/4657#01955f62-ed51-458c-8bfb-a4a96b5b7134
- https://buildkite.com/ray-project/postmerge-macos/builds/4657#01955dd4-ae7a-4bd0-ab9d-14abaf0cdd17
DataCaseName-darwin://python/ray/tests:test_placement_group_3-END
Managed by OSS Test Policy | 1medium
|
Title: Multi-class emotion classification for text in Russian
Body: How can I use the BERT Classifier for multi-class text classification? I have my own dataset and need to train the model on it.
Example input:
Я сегодня чувствую себя не очень хорошо ("I don't feel very well today")
Output:
Sadness
There should be 5 or 6 classes.
I know there is rusentiment_bert.json. As I understand it, this is pretrained and only covers Positive, Neutral, Negative, Speech, Skip, whereas I need emotion classes such as joy, sadness, etc.
So it seems I need to modify the rusentiment_bert.json config? If so, how and what do I need to change to set up this model?
I would appreciate guidance on how the whole process works.
| 1medium
|
Title: [BUG-REPORT] Group By memory Issue
Body: Hello,
I have a project running on vaex v4.0.0, wrapped in Flask so that APIs can run off it. I was hoping to get some help related to memory.
I am facing memory leak issues while using groupby; here's an example:
`df.groupby(['rooms_count'], agg={vx.agg.mean('price_per_meter'),vx.agg.min('price_per_meter'),vx.agg.max('price_per_meter'),vx.agg.count('price_per_meter')})`
My issue is not with the amount of memory being used, but that after the API call is executed the memory is not released back to the OS. Scale this to multiple API requests and I am soon out of memory on the server. I have tried using garbage collection, but the memory still isn't released back to the OS.
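Roughly the pattern of each request (a simplified sketch, not my actual code; the endpoint name and file path are placeholders):
```python
import gc

import vaex as vx
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/rooms-stats")  # placeholder endpoint
def rooms_stats():
    df = vx.open("listings.hdf5")  # placeholder file path
    result = df.groupby(
        ["rooms_count"],
        agg={
            "mean_price": vx.agg.mean("price_per_meter"),
            "min_price": vx.agg.min("price_per_meter"),
            "max_price": vx.agg.max("price_per_meter"),
            "count": vx.agg.count("price_per_meter"),
        },
    )
    payload = result.to_pandas_df().to_dict(orient="records")
    del df, result
    gc.collect()  # memory is still not returned to the OS after this
    return jsonify(payload)
```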
I was asked to help replicate the issue. You can find the code and steps to replicate it here:
[Link to the repo](https://github.com/MHK107/vaex-groupby-memory-issue/tree/main)
Please let me know if I can help in any way possible to replicate and resolve this | 1medium
|
Title: Bug(CI): Updated lockfile changes type checking CI causing failures
Body: ### Description
https://github.com/litestar-org/polyfactory/actions/runs/8928572773/job/24524431663
```
mypy.....................................................................Failed
- hook id: mypy
- exit code: 1
polyfactory/value_generators/constrained_dates.py:41: error: Redundant cast to "date" [redundant-cast]
polyfactory/factories/base.py:508: error: Argument 1 to "UUID" has incompatible type "bytes | str | UUID"; expected "str | None" [arg-type]
tests/test_random_configuration.py:68: error: Redundant cast to "int" [redundant-cast]
polyfactory/factories/pydantic_factory.py:546: error: Incompatible return value type (got "dict[Any, object]", expected "dict[Any, Callable[[], Any]]") [return-value]
tests/test_recursive_models.py:56: error: Non-overlapping identity check (left operand type: "PydanticNode", right operand type: "type[_Sentinel]") [comparison-overlap]
docs/examples/decorators/test_example_1.py:19: error: Returning Any from function declared to return "datetime" [no-any-return]
docs/examples/decorators/test_example_1.py:19: error: Redundant cast to "timedelta" [redundant-cast]
polyfactory/factories/beanie_odm_factory.py:32: error: Unused "type: ignore" comment [unused-ignore]
Found 8 errors in 7 files (checked 129 source files)
```
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Release Version
CI
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above) | 1medium
|
Title: Code Conventions Guide for Documentation and Examples
Body: In our documentation and code examples, we have several different styles around referring to the workflow and how to format code examples.
It would be helpful to identify and establish a handful of code conventions that we follow to reduce the cognitive load for using this library.
Code Examples:
- Should always include the import path of the visualizer
| 1medium
|
Title: Doc suggestion - Docker run
Body: Hi, great project! I just wanted to share a quick-and-dirty docker run one liner if anyone feel a docker container is useful there as I did.
You may close this issue without changing anything if you want.
I didn't know where else to share this.
Cheers guys
```
# Access the working directory that you have the markdown files
# change what you require on the command below, then run it
docker run -it --name python-grip --rm \
-p 6419:6419 \
--env FILE=README.md \
--env DEBUG=True \
--env DEBUG_GRIP=True \
--env HOST=0.0.0.0 \
-v "$(pwd)":/workspace \
python bash -c "pip install grip && mkdir ~/.grip/ && bash -c \"echo -e \\\"DEBUG=\$DEBUG\nDEBUG_GRIP=\$DEBUG_GRIP\nHOST='\$HOST'\\\" >> ~/.grip/settings.py \" && cd workspace/ && grip \$FILE"
# access the page at localhost:6419 on your browser
``` | 0easy
|
Title: replace ORM loader depth warning with notes in cache disabled message
Body: ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10895
| 1medium
|
Title: [🐛 BUG] Data Node Selector does not display nodes with CYCLE Scope
Body: ### What went wrong? 🤔
When I create a Scenario with Cycles (that is, I add Scope.CYCLE to certain Data Node configuration objects + I add a Frequency to the Scenario), I can't see the Data Node in the Data Node selector. I see the GLOBAL and the SCENARIO Data Nodes, but not those with Scope CYCLE.
### Expected Behavior
I would expect to see all Data Nodes, including those with Scope set to CYCLE.
### Steps to Reproduce Issue
This code shows the issue:
```python
import datetime as dt

import taipy as tp
import taipy.gui.builder as tgb
from taipy import Config, Frequency, Gui, Scope


def add_three(a, b, c):
    return a + b + c


a_node_config = Config.configure_data_node(id="a", default_data=1, scope=Scope.GLOBAL)
b_node_config = Config.configure_data_node(id="b", default_data=2, scope=Scope.CYCLE)
c_node_config = Config.configure_data_node(id="c", default_data=3, scope=Scope.SCENARIO)
result_node_config = Config.configure_data_node(id="result", scope=Scope.SCENARIO)

add_three_scenario_task = Config.configure_task(
    id="add_three",
    function=add_three,
    input=[a_node_config, b_node_config, c_node_config],
    output=result_node_config,
)

add_three_scenario_config = Config.configure_scenario(
    id="scenario",
    task_configs=add_three_scenario_task,
    frequency=Frequency.MONTHLY,
)

with tgb.Page() as page:
    tgb.text("# Data Node selector does not show Cycle Data Nodes", mode="md")
    tgb.data_node_selector()

if __name__ == "__main__":
    tp.Orchestrator().run()

    scenario = tp.create_scenario(add_three_scenario_config)
    scenario.submit()

    gui = Gui(page=page)
    gui.run(
        title="test data node selector",
        use_reloader=True,
    )
```
### Screenshots

### Runtime Environment
Windows 10
### Browsers
Brave
### OS
Windows
### Version of Taipy
4.0.2
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 1medium
|
Title: Swagger-codegen params order is changing with every update on the backend breaking the frontend
Body: When using swagger-codegen to generate a typescript-angular client, the parameter order changes with every backend update, which breaks my frontend application.

| 1medium
|
Title: Swagger issue for endpoints register & update
Body: Hi,
First of all, great job. It's a very useful library.
However, after having set up my project, I noticed a few issues in the generated Swagger documentation. Indeed, the request body is pre-filled with the following information:
```
{
"id": "string",
"email": "[email protected]",
"is_active": true,
"is_superuser": false,
"password": "string"
}
```
However, according to your documentation, only the fields `email` & `password` are required. It can lead to some misunderstandings for someone wanting to use the API for the first time, since the Swagger (or ReDoc) page should describe how to use the API.
I think it's a cheap fix that can be very useful until you find a solution for adding auth to the Swagger UI. Indeed, after having had a look at your code, one solution could be to make the models `BaseUserCreate` and `BaseUserUpdate` inherit from `BaseModel` instead of `BaseUser`, as sketched below.
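A rough sketch of what I mean (field names taken from the request body above; this is only an illustration, not the library's actual code):
```python
from typing import Optional

from pydantic import BaseModel, EmailStr


class BaseUserCreate(BaseModel):  # inherit from BaseModel instead of BaseUser
    email: EmailStr
    password: str
    is_active: Optional[bool] = True
    is_superuser: Optional[bool] = False
```
That way the documented request body for register/update would only advertise the fields a client is actually expected to send.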
Looking forward to hearing from you :)
| 1medium
|
Title: Keystroke ] not detected on Windows
Body: In PowerShell and cmd.exe I found that sorting didn't work in both directions. The `[` shortcut was detected and had its effect, but `]` wasn't. I narrowed it down to a problem with `windows-curses`, and in turn with its dependency `PDCurses`: https://github.com/zephyrproject-rtos/windows-curses/issues/41
Here's my plan on how to address it. I hope I'll get around to it somewhere next week.
- [ ] Improve the mapping in `PDCurses` and submit a pull request
- [ ] Bump the git submodule in `windows-curses` to the `PDCurses` version that has the fix and ask/wait for a release of this package
- [ ] Address the issue in this repository, perhaps by pinning `windows-curses` to a version of at least the newly released package.
I'm making this issue here just to document it and track progress. If you're reading this because you have this issue, I would recommend using WSL instead. (WSL is not an option for me unfortunately).
I didn't include the `.vd`-file to reproduce this issue. The simplest way to reproduce it is to get a Windows computer, run `visidata` from Powershell or cmd.exe and sort any column by pressing `]`. | 1medium
|
Title: live update of the chart
Body: I have real-time stock market data streaming through API calls to Alpaca. I use the code below to receive the data; when the "on_message" event triggers, I parse the data into a pandas DataFrame dfObj and then plot the candlestick chart. The issue is that the chart has to be closed manually before the next "on_message" event can execute. Is there any way to update the existing chart and continue, without plotting a new one?
--------------------------------------------
ws = websocket.WebSocketApp("wss://socket.polygon.io/stocks",
                            on_message = on_message,
                            on_error = on_error,
                            on_close = on_close)
ws.on_open = on_open
ws.run_forever()
------------------------------------------------------------
```python
import mplfinance as mpf
def on_message(ws, message):
    # code to parse message and add data to dfObj .....
    mpf.plot(dfObj, type='candle')
```
The code above is just a fragment. | 1medium
|
Title: Rebrand
Body: Rename to `respx` to be shorter and more similar to `httpx`. | 1medium
|
Title: Significant Response Delay After Idle Period in Version 3
Body: ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
Oracle Linux 7
### Python version
3.9
### aiogram version
3.6.0
### Expected behavior
The bot should respond promptly without significant delay, even after being idle.
### Current behavior
The bot responds with a significant delay after being idle for more than 15 seconds.

### Steps to reproduce
Deploy a standard bot on a server.
Allow the bot to be idle for more than 15 seconds.
Send a message to the bot.
Observe the delay in the bot's response.
### Code example
```python3
import os
import sys
import asyncio
import logging

from aiogram import Bot, types
from aiogram import Dispatcher
from aiogram.client.default import DefaultBotProperties
from aiogram.enums import ParseMode

TOKEN = os.getenv('TOKEN')

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s [%(name)s] - %(message)s",
    stream=sys.stdout,
    force=True,
)
logger = logging.getLogger(__name__)

dp = Dispatcher()


@dp.message()
async def echo(message: types.Message):
    logger.info('Handler Start')
    await message.answer(message.text)
    logger.info('Handler End')


async def main():
    bot = Bot(
        token=TOKEN,
        default=DefaultBotProperties(parse_mode=ParseMode.HTML)
    )
    await dp.start_polling(bot, skip_updates=True)


if __name__ == '__main__':
    try:
        logger.info("Starting bot")
        asyncio.run(main())
    except (KeyboardInterrupt, SystemExit):
        logger.info("Bot stopped!")
```
### Logs
_No response_
### Additional information
This issue is not present in version 2.x. | 1medium
|
Title: Make dashboard page a true dashboard for productivity?
Body: Hi, first I really like the dashboard / intro page as it's pretty informative for first-start users.
But when working a lot with the application, I noticed that it doesn't help me get work done, so I just skip it.
Instead, it could serve as a personal entry point, similar to how it already shows the installation stats.
Some ideas for (often used) functions:
* tagcloud
* saved search views (maybe also as inline list?)
* new in inbox (unprocessed -> need to be checked)
If we allow users to hide the first steps, there might be enough space to show this personalized content. | 1medium
|
Title: Could not find Source.Txt file in dataset
Body: Hi
In ENCODER, speaker.py reads source.txt from data set which is not found. When i run the train loop , it is showing me error:
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/shivani/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/shivani/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py", line 55, in collate
return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker_batch.py", line 8, in __init__
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker_batch.py", line 8, in <dictcomp>
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker.py", line 34, in random_partial
self._load_utterances()
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker.py", line 14, in _load_utterances
with self.root.joinpath("_sources.txt").open("r") as sources_file:
File "/home/shivani/anaconda3/lib/python3.7/pathlib.py", line 1203, in open
opener=self._opener)
File "/home/shivani/anaconda3/lib/python3.7/pathlib.py", line 1058, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'LibriSpeech/dev-clean/_sources.txt'
I would appreciate urgent help. | 1medium
|
Title: Cannot install horovod[spark] for Tensorflow 2.6
Body: **Environment:**
1. Framework: TensorFlow
2. Framework version:2.6.2
3. Horovod version: 0.23
4. MPI version:4.1.1
5. CUDA version:N/A
6. NCCL version:N/A
7. Python version: 3.7
8. Spark / PySpark version: 2.4.5
9. Ray version:N/A
10. OS and version: RHEL 8.4
11. GCC version: 9.3.0
12. CMake version: 3.5.0
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? N/A
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? N/A
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
```
Installing collected packages: pyparsing, pycparser, pyzmq, pyyaml, pyarrow, psutil, packaging, future, fsspec, diskcache, dill, cloudpickle, cffi, petastorm, horovod, h5py
Attempting uninstall: h5py
Found existing installation: h5py 3.1.0
Uninstalling h5py-3.1.0:
Successfully uninstalled h5py-3.1.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.6.2 requires h5py~=3.1.0, but you have h5py 2.10.0 which is incompatible.
```
**Reproduce Steps:**
1. `conda create -n horovod python=3.7`
2. `conda activate horovod`
3. `conda install pyspark=2.4.5 openmpi-mpicc cmake -c conda-forge`
4. `pip install tensorflow==2.6.2`
5. `HOROVOD_WITH_MPI=1 HOROVOD_WITH_TENSORFLOW=1 pip install horovod[spark]`
| 1medium
|
Title: pip3 install detectron2
Body: I'm trying to run a Docker file and install layoutparser using the following pip command
RUN pip3 install layoutparser torchvision && pip install "git+https://github.com/facebookresearch/[email protected]#egg=detectron2"
I get the following error message back
```
#15 354.2 aarch64-linux-gnu-gcc: fatal error: Killed signal terminated program cc1plus
#15 354.2 compilation terminated.
#15 354.2 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
#15 354.2 ----------------------------------------
#15 354.2 ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-j9rv61td/detectron2_679fb568038548bf8b387f71e68646a7/setup.py'"'"'; __file__='"'"'/tmp/pip-install-j9rv61td/detectron2_679fb568038548bf8b387f71e68646a7/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-t9fgp6nd/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.8/detectron2 Check the logs for full command output.
------
executor failed running [/bin/sh -c pip3 install layoutparser torchvision && pip install "git+https://github.com/facebookresearch/[email protected]#egg=detectron2"]: exit code: 1
```
Can you advise what I am doing wrong and how I can go about resolving it? | 2hard
|
Title: Error installing vaex on win10
Body: Error installing vaex on win10
**Description**
I am trying to install vaex on **Windows 10** (**amd64** CPU) inside a **venv**,
with the commands:
- `pip install vaex`
- `pip install vaex-core vaex-viz vaex-jupyter vaex-server vaex-hdf5 vaex-astro vaex-ml`
both failing with the same problem.
**Software information**
- Vaex version : vaex-4.9.1
- Vaex was installed via: pip
- OS: windows 10
- Python version: 3.10
- CPU: amd64
- vsbuildtools c++ installed
**Error**
```
Building wheels for collected packages: vaex-core
Building wheel for vaex-core (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for vaex-core (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [260 lines of output]
setup.py:4: DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses
import imp
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-310
creating build\lib.win-amd64-cpython-310\vaex
copying vaex\agg.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\array_types.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\asyncio.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\benchmark.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\cache.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\column.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\config.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\convert.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\cpu.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataframe.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataframe_protocol.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset_misc.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset_mmap.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset_utils.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\datatype.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\datatype_test.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\delayed.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\docstrings.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\encoding.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\events.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\execution.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\export.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\expression.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\expresso.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\formatting.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\functions.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\geo.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\grids.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\groupby.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\hash.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\image.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\itertools.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\join.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\json.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\kld.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\legacy.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\logging.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\memory.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\meta.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\metal.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\misc_cmdline.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\multiprocessing.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\multithreading.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\parallelize.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\progress.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\promise.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\registry.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\rolling.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\samp.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\schema.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\scopes.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\selections.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\serialize.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\settings.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\shift.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\stat.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\strings.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\struct.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\tasks.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\utils.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\version.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\_version.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\__init__.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\__main__.py -> build\lib.win-amd64-cpython-310\vaex
package init file 'vaex\arrow\__init__.py' not found (or not a regular file)
creating build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\convert.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\dataset.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\numpy_dispatch.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\opener.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\utils.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\utils_test.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\_version.py -> build\lib.win-amd64-cpython-310\vaex\arrow
creating build\lib.win-amd64-cpython-310\vaex\core
copying vaex\core\_version.py -> build\lib.win-amd64-cpython-310\vaex\core
copying vaex\core\__init__.py -> build\lib.win-amd64-cpython-310\vaex\core
creating build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\asyncio.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\cache.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\column.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\gcs.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3arrow.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3fs.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3_test.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\__init__.py -> build\lib.win-amd64-cpython-310\vaex\file
creating build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\all.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\cmodule.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\dataset.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\expresso.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\misc.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\plot.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\ui.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\__init__.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\__main__.py -> build\lib.win-amd64-cpython-310\vaex\test
creating build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\bokeh.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\common.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\ipyvolume.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\jprops.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\readcol.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\__init__.py -> build\lib.win-amd64-cpython-310\vaex\ext
creating build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\expressions.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\ordereddict.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\pandawrap.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\parallelize.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\progressbar.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\samp.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\__init__.py -> build\lib.win-amd64-cpython-310\vaex\misc
creating build\lib.win-amd64-cpython-310\vaex\datasets
copying vaex\datasets\__init__.py -> build\lib.win-amd64-cpython-310\vaex\datasets
running egg_info
writing vaex_core.egg-info\PKG-INFO
writing dependency_links to vaex_core.egg-info\dependency_links.txt
writing entry points to vaex_core.egg-info\entry_points.txt
writing requirements to vaex_core.egg-info\requires.txt
writing top-level names to vaex_core.egg-info\top_level.txt
reading manifest file 'vaex_core.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.c' under directory 'vendor'
warning: no files found matching '*.h' under directory 'src'
warning: no files found matching '*.c' under directory 'src'
adding license file 'LICENSE.txt'
writing manifest file 'vaex_core.egg-info\SOURCES.txt'
copying vaex\datasets\iris.hdf5 -> build\lib.win-amd64-cpython-310\vaex\datasets
copying vaex\datasets\titanic.hdf5 -> build\lib.win-amd64-cpython-310\vaex\datasets
running build_ext
building 'vaex.vaexfast' extension
creating build\temp.win-amd64-cpython-310
creating build\temp.win-amd64-cpython-310\Release
creating build\temp.win-amd64-cpython-310\Release\src
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\wolvi\AppData\Local\Temp\pip-build-env-mequ5492\overlay\Lib\site-packages\numpy\core\include -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\vaexfast.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src\vaexfast.obj /EHsc
vaexfast.cpp
src\vaexfast.cpp(18): warning C4005: 'INFINITY': ridefinizione macro
C:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt\corecrt_math.h(88): note: vedere la precedente definizione di 'INFINITY'
C:\Users\wolvi\AppData\Local\Temp\pip-build-env-mequ5492\overlay\Lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(14) : Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
src\vaexfast.cpp(201): warning C4244: 'argomento': conversione da '__int64' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(532): warning C4244: 'argomento': conversione da '__int64' a 'const int'. Possibile perdita di dati.
src\vaexfast.cpp(956): warning C4244: '=': conversione da 'Py_ssize_t' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(1798): warning C4244: 'argomento': conversione da '__int64' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(1798): warning C4244: 'argomento': conversione da '__int64' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(64): warning C4244: '=': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(198): note: vedere il riferimento all'istanza 'void object_to_numpy1d_nocopy<double>(T *&,PyObject *,__int64 &,int &,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(88): warning C4244: '=': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(280): note: vedere il riferimento all'istanza 'void object_to_numpy1d_nocopy_endian<double>(T *&,PyObject *,__int64 &,bool &,int &,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(105): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(644): note: vedere il riferimento all'istanza 'void object_to_numpy2d_nocopy<double>(T *&,PyObject *,int &,int &,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(108): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(667): warning C4244: 'inizializzazione': conversione da 'const double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(775): note: vedere il riferimento all'istanza 'void histogram2d_f4<__int64>(const float *__restrict const ,const float *__restrict const ,const float *const ,const __int64,bool,bool,bool,Tout *__restrict const ,const int,const int,const double,const double,const double,const double,const __int64,const __int64)' della funzione modello di cui Š in corso la compilazione
with
[
Tout=__int64
]
src\vaexfast.cpp(667): warning C4244: 'inizializzazione': conversione da 'const double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(668): warning C4244: 'inizializzazione': conversione da 'const double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(668): warning C4244: 'inizializzazione': conversione da 'const double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(669): warning C4244: 'inizializzazione': conversione da 'const double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(669): warning C4244: 'inizializzazione': conversione da 'const double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(670): warning C4244: 'inizializzazione': conversione da 'const double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(670): warning C4244: 'inizializzazione': conversione da 'const double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(671): warning C4244: 'inizializzazione': conversione da 'double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(671): warning C4244: 'inizializzazione': conversione da 'double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(672): warning C4244: 'inizializzazione': conversione da 'double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(672): warning C4244: 'inizializzazione': conversione da 'double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(133): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(887): note: vedere il riferimento all'istanza 'void object_to_numpy3d_nocopy<double>(T *&,PyObject *,int &,int &,int &,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(136): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(139): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(174): warning C4244: '=': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(983): note: vedere il riferimento all'istanza 'void object_to_numpyNd_nocopy<double>(T *&,PyObject *,int,int &,int *,__int64 *,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(1335): warning C4244: '=': conversione da 'Py_ssize_t' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(2072): note: vedere il riferimento all'istanza 'PyObject *statisticNd_<double,NPY_DOUBLE>(PyObject *,PyObject *)' della funzione modello di cui Š in corso la compilazione
src\vaexfast.cpp(1338): warning C4244: '=': conversione da 'Py_ssize_t' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(1149): warning C4244: 'inizializzazione': conversione da 'double' a 'T'. Possibile perdita di dati.
with
[
T=float
]
src\vaexfast.cpp(1271): note: vedere il riferimento all'istanza 'void statisticNd<T,op_add1<T,double,endian>,endian>(const T *__restrict const [],const T *__restrict const [],__int64,const int,const int,double *__restrict const ,const __int64 *__restrict const ,const int *__restrict const ,const T *__restrict const ,const T *__restrict const ,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=float,
endian=functor_double_to_native
]
src\vaexfast.cpp(1308): note: vedere il riferimento all'istanza 'void statisticNd_wrap_template_endian<T,functor_double_to_native>(const T *const [],const T *const [],__int64,int,int,double *,__int64 [],int [],T [],T [],int,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=float
]
src\vaexfast.cpp(1402): note: vedere il riferimento all'istanza 'void statisticNd_wrap_template<T>(const T *const [],const T *const [],__int64,int,int,double *,__int64 [],int [],T [],T [],bool,int,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=float
]
src\vaexfast.cpp(2073): note: vedere il riferimento all'istanza 'PyObject *statisticNd_<float,NPY_FLOAT>(PyObject *,PyObject *)' della funzione modello di cui Š in corso la compilazione
src\vaexfast.cpp(1178): warning C4244: 'inizializzazione': conversione da 'double' a 'T'. Possibile perdita di dati.
with
[
T=float
]
src\vaexfast.cpp(1198): warning C4244: 'inizializzazione': conversione da 'double' a 'T'. Possibile perdita di dati.
with
[
T=float
]
src\vaexfast.cpp(1216): warning C4244: 'inizializzazione': conversione da 'double' a 'T'. Possibile perdita di dati.
with
[
T=float
]
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\link.exe" /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:C:\Users\wolvi\Desktop\RICERCA\venv\libs /LIBPATH:C:\Users\wolvi\AppData\Local\Programs\Python\Python310\libs /LIBPATH:C:\Users\wolvi\AppData\Local\Programs\Python\Python310 /LIBPATH:C:\Users\wolvi\Desktop\RICERCA\venv\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\lib\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\um\x64" /EXPORT:PyInit_vaexfast build\temp.win-amd64-cpython-310\Release\src\vaexfast.obj /OUT:build\lib.win-amd64-cpython-310\vaex\vaexfast.cp310-win_amd64.pyd /IMPLIB:build\temp.win-amd64-cpython-310\Release\src\vaexfast.cp310-win_amd64.lib
Creazione della libreria build\temp.win-amd64-cpython-310\Release\src\vaexfast.cp310-win_amd64.lib e dell'oggetto build\temp.win-amd64-cpython-310\Release\src\vaexfast.cp310-win_amd64.exp
Generazione codice in corso...
Generazione codice terminata
building 'vaex.superstrings' extension
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\wolvi\AppData\Local\Temp\pip-build-env-mequ5492\overlay\Lib\site-packages\numpy\core\include -Ivendor/pybind11/include -Ivendor/pybind11/include -Ivendor/string-view-lite/include -Ivendor/boost -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\Desktop\RICERCA\venv\Library\include -Ivendor\pcre\Library\include -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\string_utils.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src\string_utils.obj /EHsc
string_utils.cpp
C:\Users\wolvi\AppData\Local\Temp\pip-install-oz16ctc3\vaex-core_234b08d7a5484e2aacaa3951062cdba9\src\string_utils.hpp(208): warning C4244: '=': conversione da 'char32_t' a 'char'. Possibile perdita di dati.
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\wolvi\AppData\Local\Temp\pip-build-env-mequ5492\overlay\Lib\site-packages\numpy\core\include -Ivendor/pybind11/include -Ivendor/pybind11/include -Ivendor/string-view-lite/include -Ivendor/boost -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\Desktop\RICERCA\venv\Library\include -Ivendor\pcre\Library\include -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\strings.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src\strings.obj /EHsc
strings.cpp
vendor/pybind11/include\pybind11/numpy.h(35): error C2065: 'ssize_t': identificatore non dichiarato
vendor/pybind11/include\pybind11/numpy.h(35): error C2338: ssize_t != Py_intptr_t
C:\Users\wolvi\AppData\Local\Temp\pip-install-oz16ctc3\vaex-core_234b08d7a5484e2aacaa3951062cdba9\src\string_utils.hpp(208): warning C4244: '=': conversione da 'char32_t' a 'char'. Possibile perdita di dati.
vendor\pcre\Library\include\pcrecpp.h(701): warning C4251: 'pcrecpp::RE::pattern_': class 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>' deve avere un'interfaccia dll per essere utilizzata dai client di class 'pcrecpp::RE'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include\xstring(4905): note: vedere la dichiarazione di 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>'
src\strings.cpp(273): warning C4018: '>': errata corrispondenza tra signed e unsigned
src\strings.cpp(282): warning C4018: '>': errata corrispondenza tra signed e unsigned
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for vaex-core
Failed to build vaex-core
ERROR: Could not build wheels for vaex-core, which is required to install pyproject.toml-based projects
```
| 2hard
|
Title: Include PySocks package in st2client module.
Body: ## SUMMARY
By default, the st2client CLI doesn't support SOCKS proxy connections via `HTTP_PROXY`/`HTTPS_PROXY` because it lacks the `pysocks` PyPI package.
### STACKSTORM VERSION
```
❯ st2 --version
st2 3.5.0, on Python 3.8.12
```
### OS, environment, install method
Client: MacOS Big Sur, Python 3.8
Stackstorm: Ubuntu 18.04, Python 3.6
## Steps to reproduce the problem
Attempt to use the st2 cli with `HTTP_PROXY`, `HTTPS_PROXY` set.
## Expected Results
No errors, and able to query the stackstorm server.
## Actual Results
Currently get the following error.
```
ERROR: Missing dependencies for SOCKS support.
```
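For context, this is roughly what fails under the hood (a minimal sketch using `requests`, which the CLI relies on; the proxy URL is a placeholder):
```python
import requests

# Placeholder SOCKS proxy URL; in practice it comes from HTTP_PROXY / HTTPS_PROXY
proxies = {"https": "socks5://127.0.0.1:1080"}

# Without the pysocks package installed, this raises:
# "Missing dependencies for SOCKS support."
requests.get("https://example.com", proxies=proxies)
```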
To work around this for now, I install pysocks in the virtual environment where st2client is installed: `pip install pysocks==1.7.1`.
| 1medium
|
Title: [BUG] How to convert <class 'pandas.core.frame.DataFrame'> to Lux dataframe
Body: Hello,
Is there any method to explicitly convert a pandas DataFrame to a Lux DataFrame? When I try to call save_as_html on a pandas DataFrame it is not supported, and I get the error: AttributeError: 'DataFrame' object has no attribute 'save_as_html'.
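Roughly what I am doing (a sketch; the CSV path is a placeholder):
```python
import lux  # importing lux is supposed to attach the Lux API to pandas DataFrames
import pandas as pd

df = pd.read_csv("data.csv")  # placeholder path
df.save_as_html("export.html")  # this is where I get the AttributeError above
```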
Please help on this issue. Thank you. | 1medium
|
Title: ValueError("loaded state dict contains a parameter group ..."): continuing training from someone else's trained synthesizer always fails
Body: D:\MockingBird-0.0.1\MockingBird-0.0.1>python synthesizer_train.py CZC D:\Down\Ai\SV2TTS\synthesizer
Arguments:
run_id: CZC
syn_dir: D:\Down\Ai\SV2TTS\synthesizer
models_dir: synthesizer/saved_models/
save_every: 1000
backup_every: 25000
log_every: 200
force_restart: False
hparams:
Checkpoint path: synthesizer\saved_models\CZC\CZC.pt
Loading training data from: D:\Down\Ai\SV2TTS\synthesizer\train.txt
Using model: Tacotron
Using device: cuda
Initialising Tacotron Model...
Trainable Parameters: 31.948M
Loading weights at synthesizer\saved_models\CZC\CZC.pt
Traceback (most recent call last):
File "D:\MockingBird-0.0.1\MockingBird-0.0.1\synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "D:\MockingBird-0.0.1\MockingBird-0.0.1\synthesizer\train.py", line 114, in train
model.load(weights_fpath, optimizer)
File "D:\MockingBird-0.0.1\MockingBird-0.0.1\synthesizer\models\tacotron.py", line 526, in load
optimizer.load_state_dict(checkpoint["optimizer_state"])
File "C:\Users\chen7\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\optimizer.py", line 201, in load_state_dict
raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
I have already looked at #209 and #37 and tried every trained synthesizer; I either get the mismatch error above or RuntimeError: Error(s) in loading state_dict for Tacotron.
I have tried v0.0.1 as well as the two newer versions, and none of them work.
| 2hard
|
Title: Why does my gevent program run slower than the normal program?
Body: Python Version : 3.7.3
IDE : pycharm
Problem:
I ran a speed test reading more than 2000 local files. It took 6.5 seconds using the gevent module, while a normal 'for in' loop took 2.8 seconds.
I want to know why gevent does not improve efficiency for I/O.
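To make the comparison concrete, this is roughly what I am doing (a simplified sketch; the directory name is a placeholder):
```python
import os
import time

import gevent.monkey
gevent.monkey.patch_all()
from gevent.pool import Pool

paths = [os.path.join("data", name) for name in os.listdir("data")]  # placeholder directory


def read_file(path):
    with open(path, "rb") as f:
        return f.read()


# plain 'for in' loop
start = time.time()
for p in paths:
    read_file(p)
print("for in:", time.time() - start)

# gevent pool
start = time.time()
pool = Pool(100)
pool.map(read_file, paths)
print("gevent:", time.time() - start)
```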
| 1medium
|
Title: Set different Cache Directory for the Predictor.from_path api
Body: Hi all,
I am using the Dataiku platform for my project development, and I need allennlp in my pipeline there.
But while using the **Predictor.from_path** API, I am facing a Permission Denied issue, because Dataiku does not allow creating the CACHE_ROOT directory ".allennlp" under its root folder. Please see the error below.
PermissionError Traceback (most recent call last)
<ipython-input-9-7c48c0dd7567> in <module>
----> 1 predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/bidaf-elmo.2021-02-11.tar.gz")
~/code-env/lib/python3.7/site-packages/allennlp/predictors/predictor.py in from_path(cls, archive_path, predictor_name, cuda_device, dataset_reader_to_load, frozen, import_plugins, overrides, **kwargs)
364 plugins.import_plugins()
365 return Predictor.from_archive(
--> 366 load_archive(archive_path, cuda_device=cuda_device, overrides=overrides),
367 predictor_name,
368 dataset_reader_to_load=dataset_reader_to_load,
~/code-env/lib/python3.7/site-packages/allennlp/models/archival.py in load_archive(archive_file, cuda_device, overrides, weights_file)
204 """
205 # redirect to the cache, if necessary
--> 206 resolved_archive_file = cached_path(archive_file)
207
208 if resolved_archive_file == archive_file:
~/code-env/lib/python3.7/site-packages/allennlp/common/file_utils.py in cached_path(url_or_filename, cache_dir, extract_archive, force_extract)
135 cache_dir=cache_dir or CACHE_DIRECTORY,
136 extract_archive=extract_archive,
--> 137 force_extract=force_extract,
138 )
139
~/code-env/lib/python3.7/site-packages/cached_path/_cached_path.py in cached_path(url_or_filename, cache_dir, extract_archive, force_extract)
119 cache_dir = cache_dir if cache_dir else get_cache_dir()
120 cache_dir = os.path.expanduser(cache_dir)
--> 121 os.makedirs(cache_dir, exist_ok=True)
122
123 if not isinstance(url_or_filename, str):
~/code-env/lib/python3.7/os.py in makedirs(name, mode, exist_ok)
211 if head and tail and not path.exists(head):
212 try:
--> 213 makedirs(head, exist_ok=exist_ok)
214 except FileExistsError:
215 # Defeats race condition when another thread created the path
~/code-env/lib/python3.7/os.py in makedirs(name, mode, exist_ok)
221 return
222 try:
--> 223 mkdir(name, mode)
224 except OSError:
225 # Cannot rely on checking for EEXIST, since the operating system
PermissionError: [Errno 13] Permission denied: '/opt/dataiku/.allennlp'
-----------------------------------------------------------------------------
So my question is: if I want to set some other folder as the CACHE_ROOT folder instead of the one under the root folder, and declare it through the Predictor.from_path API, how should I do that? Please help me. | 1medium
|
Title: if _name is not defined in HPO space, the experiment will not stop.
Body: **Describe the issue**:
When testing nested sub-search-space in HPO, if _name is not defined, the experiment will not stop.
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: linux
- Server OS (for remote mode only):
- Python version: 3.7
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log: **RuntimeError: '_name' key is not found in this nested search space.**
- nnictl stdout and stderr:
**How to reproduce it?**:
command: `python test_hpo_space.py --space test.yaml`
test_hpo_space.py:
```
import nni
import numpy as np
import torch
import os
import logging
import random
import time
import argparse
import json
import yaml

nni.silence_stdout()
from nni.experiment import Experiment


def run_trial():
    param = nni.get_next_parameter()
    logging.info(f"param: {param}")
    # time.sleep(1)
    nni.report_final_result(random.random())


def main(space: dict):
    experiment = Experiment("local")
    experiment.config.trial_command = f"python test_hpo_space.py run_trial --space 123"
    experiment.config.experiment_name = "HPO"
    experiment.config.trial_code_directory = os.getcwd()
    experiment.config.search_space = space
    experiment.config.tuner.name = "Evolution"
    experiment.config.tuner.class_args["optimize_mode"] = "maximize"
    experiment.config.tuner.class_args["population_size"] = 60
    experiment.config.max_trial_number = 60
    experiment.config.trial_concurrency = 10
    experiment.start(18189, debug=True, run_mode=nni.experiment.RunMode.Background)
    try:
        experiment._wait_completion()
    except KeyboardInterrupt:
        logging.warning("KeyboardInterrupt detected")
    finally:
        experiment.stop()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--space", type=str, required=True)
    args, extra_args = parser.parse_known_args()
    if "run_trial" in extra_args:
        run_trial()
    else:
        space_file = args.space
        try:
            space = json.load(open(space_file))
        except Exception:
            with open(space_file, "r", encoding="utf-8") as f:
                space = yaml.safe_load(f)
        main(space)
```
test.yaml:
```
layer0:
  _type: choice
  _value:
    - Empty
    - kernel_size:
        _type: choice
        _value: [1, 2, 3, 5]
    - _name: Max_pool
      pooling_size:
        _type: choice
        _value: [2, 3, 5]
    - _name: Avg_pool
      pooling_size:
        _type: choice
        _value: [2, 3, 5]
```
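For comparison, here is the same space with a `_name` on every nested option, written as a Python dict (the name "Conv" for the second option is made up, since that option has no name in my file, which seems to be exactly what triggers the error):
```python
space = {
    "layer0": {
        "_type": "choice",
        "_value": [
            "Empty",
            # "Conv" is an invented _name; my original option only had kernel_size
            {"_name": "Conv", "kernel_size": {"_type": "choice", "_value": [1, 2, 3, 5]}},
            {"_name": "Max_pool", "pooling_size": {"_type": "choice", "_value": [2, 3, 5]}},
            {"_name": "Avg_pool", "pooling_size": {"_type": "choice", "_value": [2, 3, 5]}},
        ],
    }
}
```
If I understand the error message correctly, `_name` is required on every dict option; my point is that when it is missing, the experiment should fail fast instead of silently never stopping.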
| 1medium
|
Title: Plugin: nonebot-plugin-searchgames
Body: ### PyPI 项目名
nonebot-plugin-searchgames
### 插件 import 包名
nonebot_plugin_searchgame
### 标签
[{"label":"Steam","color":"#ea5252"},{"label":"switch","color":"#ea5252"}]
### 插件配置项
_No response_ | 3misc
|
Title: [3.x] aiogram is looking for redis when aioredis is installed (fix imports)
Body: ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
macos 12.2.1 (21D62)
### Python version
3.9
### aiogram version
3.0.0b5
### Expected behavior
redis fsm storage works with aioredis
### Current behavior
redis fsm storage does not work with aioredis
### Steps to reproduce
install aiogram 3.0.0.b5
install aioredis `pip install aioredis`. Currently 2.0.1
create redis client `redis_client = Redis.from_url("redis://localhost:6379/3")`
create dispatcher `dp = Dispatcher(storage=RedisStorage(redis=redis_client))`
### Code example
```python3
from aiogram.fsm.storage.redis import RedisStorage
from aioredis.client import Redis
redis_client = Redis.from_url("redis://localhost:6379/3")
dp = Dispatcher(storage=RedisStorage(redis=redis_client))
```
### Logs
```sh
Traceback (most recent call last):
File "/Users/dev/projects/OWN/shopping_bot/src/bot/bot.py", line 6, in <module>
from aiogram.fsm.storage.redis import RedisStorage
File "/Users/dev/projects/OWN/shopping_bot/venv/lib/python3.9/site-packages/aiogram/fsm/storage/redis.py", line 5, in <module>
from redis.asyncio.client import Redis
ModuleNotFoundError: No module named 'redis'
```
### Additional information
imports failing in ../env/lib/python3.9/site-packages/aiogram/fsm/storage/redis.py
from redis.asyncio.client import Redis
from redis.asyncio.connection import ConnectionPool
from redis.asyncio.lock import Lock
from redis.typing import ExpiryT | 1medium
|
Title: AsyncClient does not recognize a `cert` has been passed.
Body: I'm capable of doing the following
```
$ curl https://foobar.com --cert /path/to/cert
```
However, when using a session instance of `httpx.AsyncClient` and passing the cert, I get an error from the server saying that no cert has been passed.
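Roughly what I am doing on the httpx side (a minimal sketch; the URL and cert path are the same placeholders as in the curl example):
```python
import asyncio

import httpx


async def main():
    # client certificate passed the same way as --cert in the curl command
    async with httpx.AsyncClient(cert="/path/to/cert") as client:
        response = await client.get("https://foobar.com")
        print(response.status_code)


asyncio.run(main())
```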
| 1medium
|
Title: No rename method on GoogleCloudStorage
Body: I'm using the following to allow renaming:
```
@deconstruct.deconstructible
class GoogleCloudStorage(gcloud.GoogleCloudStorage):
    def path(self, name) -> typing.AnyStr:
        raise NotImplementedError()

    def get_accessed_time(self, name) -> datetime.datetime:
        raise NotImplementedError()

    def rename(self, old_name: str, new_name: str) -> None:
        blob = self.bucket.blob(old_name)
        self.bucket.rename_blob(blob, new_name)
```
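Usage would then look roughly like this (a sketch; the bucket name and file names are placeholders):
```python
from django.core.files.base import ContentFile

storage = GoogleCloudStorage(bucket_name="my-bucket")  # placeholder bucket name
storage.save("reports/2021.csv", ContentFile(b"a,b\n1,2\n"))
storage.rename("reports/2021.csv", "reports/archive/2021.csv")
```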
Requesting feature. | 1medium
|
Title: During training of RTMDet loss_box and loss_mask are always 0
Body: **Describe the bug**
When trying to train an RTMDet model of any size using MMDetection on a COCO format dataset, during training the loss and loss_cls parameters will descend as normal, but the loss_box and loss_mask parameters start and stay at 0 for all of training. The model also does not produce any results during inference.
**Reproduction**
The exact training command: `tools/dist_train.sh configs/custom/rtmdet-ins-custom-s.py 2 --auto-scale-lr`
My config file:
```
_base_ = '../rtmdet/rtmdet-ins_s_8xb32-300e_coco.py'

dataset_type = 'CocoDataset'
data_root = '../../datasets/MyDataset/'

num_classes = 8
classes = ('Circular', 'Elliptical', 'Triangular', 'Quadrilateral', 'Polygonal', 'Capsule', 'Unique', 'Spheroid')
metainfo = {
    'classes': ('Circular', 'Elliptical', 'Triangular', 'Quadrilateral', 'Polygonal', 'Capsule', 'Unique', 'Spheroid'),
    'palette': [
        (135, 206, 235),
        (255, 192, 203),
        (255, 218, 185),
        (147, 112, 219),
        (60, 179, 113),
        (255, 165, 0),
        (220, 20, 60),
        (255, 255, 0)
    ]
}

train_dataloader = dict(
    batch_size = 8,
    num_workers = 10,
    dataset = dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file=data_root + '/annotations/instances_train.json',
        data_prefix=dict(img=data_root + 'train/')
    )
)

find_unused_parameters=True

val_dataloader = dict(
    batch_size = 4,
    num_workers = 10,
    dataset = dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file=data_root + '/annotations/instances_val.json',
        data_prefix=dict(img=data_root + 'val/')
    )
)
test_dataloader = val_dataloader

val_evaluator = dict(ann_file=data_root + 'annotations/instances_val.json')
test_evaluator = val_evaluator
```
A sample of my logs:
```
11/09 10:01:10 - mmengine - INFO - Epoch(train) [1][ 50/3256] lr: 1.9623e-05 eta: 1 day, 10:48:45 time: 0.3850 data_time: 0.0542 memory: 4411 loss: 0.5551 loss_cls: 0.5551 loss_bbox: 0.0000 loss_mask: 0.0000
11/09 10:01:24 - mmengine - INFO - Epoch(train) [1][ 100/3256] lr: 3.9643e-05 eta: 1 day, 5:15:10 time: 0.2621 data_time: 0.0017 memory: 4411 loss: 0.5109 loss_cls: 0.5109 loss_bbox: 0.0000 loss_mask: 0.0000
11/09 10:01:37 - mmengine - INFO - Epoch(train) [1][ 150/3256] lr: 5.9663e-05 eta: 1 day, 3:24:11 time: 0.2623 data_time: 0.0015 memory: 4411 loss: 0.4392 loss_cls: 0.4392 loss_bbox: 0.0000 loss_mask: 0.0000
11/09 10:01:50 - mmengine - INFO - Epoch(train) [1][ 200/3256] lr: 7.9683e-05 eta: 1 day, 2:35:58 time: 0.2678 data_time: 0.0014 memory: 4411 loss: 0.3513 loss_cls: 0.3513 loss_bbox: 0.0000 loss_mask: 0.0000
```
The only modifications I made to base configs were to increase the maximum number of detections to 500 (I am doing small object detection so this is needed for my use case) and to change the checkpoint interval to 5 so that I could evaluate my progress in finer steps. I have not modified the actual mmdetection codebase.
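For reference, those two overrides look roughly like this (sketch; only the detection cap and the checkpoint interval are my actual changes, and the surrounding keys follow the base config layout):
```python
# Raise the per-image detection cap for small-object scenes and save
# checkpoints every 5 epochs instead of the default interval.
model = dict(test_cfg=dict(max_per_img=500))
default_hooks = dict(checkpoint=dict(interval=5))
```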
I am using a custom instance segmentation dataset in COCO format created synthetically. Due to the nature of my task I cannot share my dataset in full. However, the directory structure is as follows:
```
> Dataset
| > annotations
| | instances_train.json
| | instances_val.json
| | instances_test.json
| > train
| | trainimage0.png
| | trainimage1.png
| | trainimage2.png
| | ...
| | > val
| | valimage0.png
| | valimage1.png
| | valimage2.png
| | ...
| | > test
| | testimage0.png
| | testimage1.png
| | testimage2.png
| | ...
```
And here is a sample of my images and annotations:
```
"images": [
{
"id": 0,
"file_name": "img_0.png",
"height": 1800,
"width": 1800
},
{
"id": 1,
"file_name": "img_1.png",
"height": 1800,
"width": 1800
},
],
"annotations":[
{
"id": 13384448,
"image_id": 74402,
"category_id": 0,
"segmentation": {
"size": [
1800,
1800
],
"counts": "WhW74mg1>E7J5K4M4L3N2M3M2O2N1O1N3O0O1O1O1O1O2O0O100O10000O2O0000000000001O000001O000O10000O10001N1O100O1O1O100O2N1N2O2N1N3N2M3M3M4L4K5J8GSZPh2"
},
"bbox": [
131.0,
1480.0,
66.0,
66.0
],
"area": 3460,
"iscrowd": 0
},
{
"id": 13384449,
"image_id": 74402,
"category_id": 0,
"segmentation": {
"size": [
1800,
1800
],
"counts": "Rl]?:kg16K4M3L3M3M4L3M3M2N3M3M3N1O2N2N1O2N100O2O0O2O0O10001O0O100000000000000000000001O000O101O0O101N100O2O0O2N2N1O2M3N1N3L4M3M3M3M4L3M4L4L6H\\ef_2"
},
"bbox": [
280.0,
1696.0,
68.0,
66.0
],
"area": 3403,
"iscrowd": 0
}
],
```
I have written a script to visualize my dataset to confirm that my masks and bounding boxes align with their respective instances as expected, so the annotations are definitely accurate.
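The check is essentially the following (sketch of that verification script; the annotation path is a placeholder):
```python
# Decode every RLE mask and confirm it is non-empty and its bbox is sane.
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train.json")
for ann in coco.loadAnns(coco.getAnnIds()):
    m = coco.annToMask(ann)  # decodes the RLE "counts" shown above
    assert m.sum() > 0, f"empty mask for annotation {ann['id']}"
    x, y, w, h = ann["bbox"]
    assert w > 0 and h > 0, f"degenerate bbox for annotation {ann['id']}"
```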
**Environment**
```
sys.platform: linux
Python: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1: NVIDIA RTX A5500
CUDA_HOME: /usr/local/cuda-11.7
NVCC: Cuda compilation tools, release 11.7, V11.7.64
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.0.1
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.7
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.5
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.15.2
OpenCV: 4.7.0
MMEngine: 0.7.3
MMDetection: 3.2.0+fe3f809
```
Additional Environment Info:
- Environment is running inside of WSL2 with CUDA access enabled.
- Installation instructions were followed as per the mmdetection website guide exactly. Pytorch was installed using the official torch installation instructions for conda and WSL.
| 2hard
|
Title: Example "volume_render" does not work
Body: I'm trying to run the example [volume_renderer](https://github.com/K3D-tools/K3D-jupyter/blob/master/examples/volume_renderer.ipynb) and I get an error. On Binder, the module 'nibabel' does not exist. If I try locally with 'nibabel' installed, there is no 3D object; only an empty grid of coordinates is visible.
| 1medium
|
Title: Swagger UI assumes body payload when using api_ns.expect(model) for HTTP GET handler
Body: Looking through the [docs](http://flask-restplus.readthedocs.io/en/stable/swagger.html), there's an example of setting a model as the expected input to a GET request handler. To me it would be reasonable to assume that restplus uses this model to validate query-string parameters, since that is the only place they could appear in a GET request. When using the same model for a POST, Swagger UI renders it as a body request parameter, which makes sense. I'm just wondering whether I'm wrong in my assumptions and this is by design, or whether it's a bug?
Current version of flask-restplus: 0.11.0
```python
class SomeResource(Resource):
    @my_api_ns.expect(my_super_cool_model)
    def get(self):
        # This will render as a body request param, not expected
        return {}

    @my_api_ns.expect(my_super_cool_model)
    def post(self):
        # This will render as a body request param, as expected
        return {}
``` | 1medium
|
Title: Add ability to specify extra during checkout
Body: **Is your feature request related to a problem? Please describe.**
Saving extra data on order (eg. "phone number") during the checkout requires an additional POST request to `/api/basket/extra/`.
**Describe the solution you'd like**
When sending a POST request to `/api/checkout/` add an ability to send `extra` data in a payload directly.
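For example (field names other than `extra` are illustrative):
```python
# Desired request: the extra data travels with the checkout payload itself.
import requests

payload = {
    "email": "[email protected]",
    "payment_method": "pay-in-advance",
    "extra": {"phone_number": "+385 91 000 0000"},
}
requests.post("https://example.com/api/checkout/", json=payload)
```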
**Describe alternatives you've considered**
`-`
**Additional context**
Validation for extra data should be enforced here as well (#1).
| 1medium
|
Title: It prompts ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
Body:
```
from d2l import torch as d2l
```
```
Traceback (most recent call last):
File "/xxx/pytorch/linear_regression/linear_regression.py", line 6, in <module>
from d2l import torch as d2l
File "/xxx/miniconda3/envs/d2l/lib/python3.9/site-packages/d2l/torch.py", line 32, in <module>
import pandas as pd
File "/xxx/miniconda3/envs/d2l/lib/python3.9/site-packages/pandas/__init__.py", line 29, in <module>
from pandas._libs import hashtable as _hashtable, lib as _lib, tslib as _tslib
File "/xxx/miniconda3/envs/d2l/lib/python3.9/site-packages/pandas/_libs/__init__.py", line 13, in <module>
from pandas._libs.interval import Interval
File "pandas/_libs/interval.pyx", line 1, in init pandas._libs.interval
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
(d2l)
```
Neither the master branch nor 2.0.0 release can fix this issue
and
d2l==1.0.0b0 prompts
```
ERROR: Could not find a version that satisfies the requirement gym==0.21.0 (from d2l) (from versions: none)
ERROR: No matching distribution found for gym==0.21.0
```
Versions:
python: 3.9.16
d2l: 0.17.6 | 1medium
|
Title: Notification Rules for unread reports
Body: ### Proposal
At the moment, notifications for unread reports are sent to all recipients of a context, even if one of them has already read the report.
Ex. Context with 3 recipients: A, B, C.
If A has read the report, notifications should not be sent to B and C; otherwise these two recipients get spammed even though the report has already been read by A.
It would also be better if this feature could be configured on the sub-sites and not only on the main one, because sub-sites may want to use a different setup.

### Motivation and context
- Reduce spam email notifications
- Grant possibility to configure this feature in different ways for single sub-sites | 1medium
|
Title: Please provide updated example code of pruning
Body: <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
provide updated example code of pruning
**Why is this needed**:
I get the warning `WARNING: The old API trainer,traced_optimizer,criterion,training_batches,mode,dummy_input will be deprecated after NNI v3.0, please using the new one evaluator,training_steps,mode,dummy_input` when running the example pruning code from [here](https://github.com/microsoft/nni/blob/master/examples/model_compress/pruning/activation_pruning_torch.py). As far as I know, other pruning APIs such as TaylorFOWeightPruner, ADMMPruner, etc. produce the same warning.
**Without this feature, how does current nni work**:
It works in NNI 2.9, but after NNI 3.0 the old API will be deprecated.
**Components that may involve changes**:
**Brief description of your proposal if any**:
| 1medium
|
Title: Add support for async functions to @capture_logging
Body: `@capture_logging` won't work for async functions, but it should be possible to make it do so. `pytest-asyncio` for example allows async methods to be run as tests. | 1medium
|
Title: [Bug]: Possible performance issue with _LazyTickList
Body: ### Bug summary
The descriptor seems to get called twice and thus the expensive `instance._get_tick(major=True)` is executed twice. Ping @anntzer, who helped craft the implementation.
### Code for reproduction
When adding the following into `_LazyTickList.__get__`
```
if self._major:
# <snip>
import traceback, sys
print(f"\n*** Initializing major ticks on {type(instance)} **\n")
traceback.print_stack(file=sys.stdout, limit=6)
# </snip>
instance.majorTicks = []
tick = instance._get_tick(major=True)
instance.majorTicks.append(tick)
return instance.majorTicks
```
we see that it is called twice per Axis:
```
*** Initializing major ticks on <class 'matplotlib.axis.XAxis'> **
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 1399, in clear
self.__clear()
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 1315, in __clear
self.grid(False) # Disable grid on init to use rcParameter
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 3295, in grid
self.xaxis.grid(visible, which=which, **kwargs)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 1726, in grid
self.set_tick_params(which='major', **gridkw)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 984, in set_tick_params
for tick in self.majorTicks:
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 549, in __get__
traceback.print_stack(file=sys.stdout, limit=6)
*** Initializing major ticks on <class 'matplotlib.axis.XAxis'> **
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 1315, in __clear
self.grid(False) # Disable grid on init to use rcParameter
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 3295, in grid
self.xaxis.grid(visible, which=which, **kwargs)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 1726, in grid
self.set_tick_params(which='major', **gridkw)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 984, in set_tick_params
for tick in self.majorTicks:
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 553, in __get__
instance.majorTicks.append(tick)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 549, in __get__
traceback.print_stack(file=sys.stdout, limit=6)
[... same repeated for YAxis]
```
Looking at the second traceback it seems that the line `instance.majorTicks.append(tick)` re-triggers the descriptor, even though we have previously set `instance.majorTicks = []`. I would have expected that at the time, the name `instance.majorTicks` is already re-bound to the list (which is sort of the purpose of the init-empty-and-append acrobatics - see the code comment above). But then again, this is higher magic and we might be hitting some implementation details of descriptors.
This observation may have two implications:
- We're apparently running the expensive `instance._get_tick(major=True)` twice. This should be fixed.
- It may be that init-empty-and-append acrobatics does not fulfill its intended purpose of providing `instance.majorTicks` to the implementation of `_get_tick`.
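For reference, here is a minimal standalone sketch of the behaviour I had expected from a plain non-data descriptor (only `__get__` defined; this is a simplified analogue, not matplotlib's actual class): once the attribute is re-bound on the instance, the follow-up read should come from the instance `__dict__` rather than re-entering `__get__`:
```python
class LazyList:
    # Non-data descriptor: defines __get__ only, no __set__/__delete__.
    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        print("__get__ called")
        instance.ticks = []             # re-bind the name on the instance
        instance.ticks.append("tick")   # expected to hit the instance __dict__
        return instance.ticks

class Axisish:
    ticks = LazyList()

a = Axisish()
a.ticks  # prints "__get__ called" once
a.ticks  # served from the instance __dict__, no second call
```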
### Possible alternative
Have a dummy empty list for `_get_tick` - I assume it only tries to read it anyway and does not modify it or hold references.
Then create a new list containing the tick. This prevents read access to `instance.majorTicks` and re-calling the descriptor, i.e. replace
```
instance.majorTicks = []
tick = instance._get_tick(major=True)
instance.majorTicks.append(tick)
return instance.majorTicks
```
by
```
instance.majorTicks = []
tick = instance._get_tick(major=True)
instance.majorTicks = [tick]
return instance.majorTicks
```
Not sure how that works when `_get_tick` accesses `majorTicks` but it should not make it worse there, and we improve inside `__get__`.
#### Performance measurement:
While performance measurement is a bit tricky, I think fair timings are
| | before (ms) | after (ms) | change |
|----------------------|-------------|------------|-------------|
| plt.subplots() | 38±2 | 31±1 | -18% |
| plt.subplots(10, 10) | 1420±13 | 1063±7 | -25% | | 1medium
|
Title: Bug: BufferError in confluent kafka broker
Body: **Describe the bug**
Hello, everyone! I have a question about processing messages using Confluent Kafka. If I have multiple subscribers running that all process messages and publish them to another topic, I quickly get a `BufferError`. Updating `max_batch_size` on my broker didn't seem to help. I resorted to catching these exceptions and digging pretty deep into the broker internals to call `poll()`. Here's a snippet of that code:
```python
try:
    await publish_node_institution.publish(*nodes)
except BufferError:
    broker._producer._producer.producer.poll(0)  # type: ignore
    await publish_node_institution.publish(*nodes)
```
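For comparison, the same poll-and-retry idea expressed against confluent_kafka directly looks roughly like this (broker address and topic are placeholders):
```python
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def produce_with_retry(topic: str, value: bytes) -> None:
    try:
        producer.produce(topic, value)
    except BufferError:
        # Serve delivery callbacks so librdkafka can drain its queue, then retry.
        producer.poll(0)
        producer.produce(topic, value)
```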
My question is has anyone run into this issue using the Confluent Kafka broker? Does anyone have any suggestions for a better way of handling this?
Thanks!
[Discord message](https://discord.com/channels/1085457301214855171/1085457302280228898/1212835445822459934)
| 1medium
|
Title: How can I use this model for feature extraction? Every time I reload the model, I get a different set of feature values (output from the last hidden state) for the same image
Body: | 1medium
|
Title: problem, random cells write as zero
Body: Hi hi :3
I found this while writing data with pandas + XlsxWriter: some formulas are simply not written correctly, and not exactly at random. Let's test with this code:
```
import pandas

a = []
for i in range(50):
    a.append("=1+1.0")

dd = pandas.DataFrame(a, columns=["test"])
dd.to_excel("test.xlsx", engine='xlsxwriter')
```
In the resulting xlsx we should have 50 identical formulas, but in cell B4 I get 0. It is not that the value has not been evaluated; literally the formula of the cell is zero. I don't know why that formula is not written...
It is "random" in the sense that, out of all these equal formulas, some just don't work, but it is not random in that the cell with the problem stays the same across some iterations and only changes after a few of them.
(Every iteration means running the same code again.)
A clue: if we use "=1+10" without decimals it works, at least with 50 results.
If we want a similar result with zero everywhere, use "=1+1,0".
The first time I run the script, the formula we read in the file is "=1+1.0", as we wrote it; testing again and again I get "=1+1" for some reason... now we don't have the ".0"...
I'm using WPS, but everything I describe here I checked with the formula bar, not the results.
Thx. | 1medium
|
Title: ERROR: Cannot install jina because these package versions have conflicting dependencies.
Body: When trying to install clip_server, I am getting this error with the jina package. The error showed up a few hours ago (was fine yesterday on September 4).
pip install jina
ERROR: Cannot install jina because these package versions have conflicting dependencies.
The conflict is caused by:
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.40b0 depends on opentelemetry-instrumentation==0.40b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.39b0 depends on opentelemetry-instrumentation==0.39b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.38b0 depends on opentelemetry-instrumentation==0.38b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.37b0 depends on opentelemetry-instrumentation==0.37b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.36b0 depends on opentelemetry-instrumentation==0.36b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.35b0 depends on opentelemetry-instrumentation==0.35b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.34b0 depends on opentelemetry-instrumentation==0.34b0 | 1medium
|
Title: Better UX
Body: #### Problem Description
The software lacks basic features that would make the user experience much better.
#### Proposal
- Delete button should have a dropdown menu entry to delete all intercepted requests
- Keyboard key `delete` should delete the selected intercepted request
- There should be a pause button to temporarily stop intercepting traffic without killing the server or filtering
- Select multiple requests by holding keyboard keys `shift` / `ctrl` to do a certain action, e.g. removal or export
|
Title: Add text input
Body: Add text input. Please remember to sanitize the input. | 1medium
|
Title: Bug: Boolean Issue
Body: ### Current behavior
The issue occurs in locally installed Microsoft Office 365, when a spreadsheet generated by xlsxWriter contains a column of type BOOLEAN.
When I open the worksheet I get a message saying that a problem was found, and Excel asks me if I want it to recover the worksheet as much as it can.
### Expected behavior
That the worksheet opens without warning of recovering lost data.
### Sample code to reproduce
```markdown
import xlsxwriter as xlsx
workbook = xlsx.Workbook('minimal.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write('A1', 'Hello Destaxa')
worksheet. set_column('A:A', 20, workbook.add_format({'num_format': 'BOOLEAN'}))
workbook.close()
```
### Environment
```markdown
- XlsxWriter version: 3.0.3
- Python version: 3.10.7
- Excel version: Microsoft Office 365
- OS: Windows 10
- The issue does not occur in Microsoft Office 2016
- The issue does not occur in Microsoft Office 365 Web
```
### Any other information

### OpenOffice and LibreOffice users
- [X] I have tested the output file with Excel. | 1medium
|
Title: some notes on MAPE and infinite values
Body: Lecture 9 describes MAPE and other metrics. As noted by @amber4eg, it's good to mention that these metrics can explode around zero: MAPE divides each error by the actual value, so a term like |y - y_hat| / |y| blows up when the actual value y is close to zero.
|
Title: add an example for processing independent chunks
Body: This is a recurrent pattern and a few users have asked us about it so we should have a working example.
Assume you're getting data in batches: users want to pass each batch through the full pipeline when the next batch arrives, and then repeat the process. An extension of this problem is when they already have all the batches and want to process them in parallel. Note that this is different from the `grid` feature because this will process all batches at once. What we want here is essentially a giant for loop (one iteration per batch) where the loop runs the full pipeline for each batch.
|
Title: Example code for speech to text using tensor2tensor
Body: ### Description
Hi,
Can you please share example code for converting speech to text using a Tensor2Tensor model (maybe with the Transformer)?
This will help a lot.
Thanks
Nagaraju
...
### Environment information
Python 3.7.7
tensor2tensor 1.15.7
```
OS: <Windows 10 (64 bit)>
$ pip freeze | grep tensor
# your output here
$ python -V
# your output here
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
...
```
| 1medium
|
Title: How to smooth 3D surface plot
Body: I have a 3D surface plot like this:

I am not sure how to smooth this plot, I searched but could not find any information. | 1medium
|
Title: Login in sub sites not work
Body: ### What version of GlobaLeaks are you using?
4.14.3
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
Hi,
I have created subsites in GlobaLeaks, but when I create users within these subsites, their login does not work. It only works if I create them on the base site, and this gives them access to the rest of the subsites.
### Proposed solution
_No response_ | 1medium
|
Title: Implement some form of web interface
Body: It's annoying to write scripts to save / delete runs. We should just run a bootstrap frontend or something.
Probably to do when / after we refactor the server. | 1medium
|
Title: Clean up README.md
Body: Some possible corrections:
1. Say somewhere one needs to install pyyaml and import yaml
2. In the first code snippet in 'Creating a login widget', replace `Loader=SafeLoader` by `Loader=yaml.SafeLoader`
Or something like that, as I struggled to work the above out from the README.md. It may be obvious to a lot of folks, but it was not to me.
I could do it myself but I am hesitant as I don't want to interfere in your README. Package is great by the way! Thanks! | 0easy
|
Title: AssertionError: View function mapping is overwriting an existing endpoint function: api.specs
Body: Hello,
I'm working on an API using Flask (full library versions available below) and Flask-Restx and [importing via Blueprints](https://flask-restx.readthedocs.io/en/latest/scaling.html#use-with-blueprints) with a project structure like so:
```bash
.
├── application
│ ├── apis
│ │ ├── __init__.py
│ │ └── organisations.py
│ ├── app.py
│ └── config.py
└── wsgi.py
```
The code in `app.py` calls a blueprint created in `application/apis/__init__.py` as follows:
**application/apis/__init__.py**
```python
from flask import Blueprint
from flask_restx import Api
from .organisations import api as orgapi
api_v1 = Blueprint('api', __name__)
api = Api(
api_v1,
title='My API',
version='1.0',
description='Access to my api',
)
api.add_namespace(orgapi)
```
**application/app.py**
```python
# ...
from application.apis import api
# ...
def create_app(config_name):
app = Flask(__name__)
# ...
api.init_app(app)
# Blueprints
from application.apis import api_v1
app.register_blueprint(api_v1, url_prefix="/api/v1")
```
The `organisations.py` code does not include any views at all; however, when I try to access the application, I get the error from this issue's title:
**application/apis/organisations.py**
```python
from flask_restx import Namespace, Resource, fields
api = Namespace('organisation', description='Organisation related operations')
```
The only reference I can find to this is a [stackoverflow](https://stackoverflow.com/questions/17256602/assertionerror-view-function-mapping-is-overwriting-an-existing-endpoint-functi) question and [a github issue dating back to Flask 0.9 vs Flask 0.10](https://github.com/pallets/flask/issues/796); however, given how old those questions are, I'm pretty confident I'm just holding it wrong!
#### Additional context
**Library Versions (`pip freeze | grep -i flask`)**:
```
Flask==1.1.2
Flask-Admin==1.5.6
Flask-Migrate==2.5.3
flask-restx==0.2.0
Flask-SQLAlchemy==2.4.4
pytest-flask==1.0.0
```
#### Full Stack Trace
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/code/wsgi.py", line 5, in <module>
app = create_app(os.environ["FLASK_CONFIG"])
File "/opt/code/application/app.py", line 54, in create_app
app.register_blueprint(api_v1, url_prefix="/api/v1")
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 98, in wrapper_func
return f(self, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1168, in register_blueprint
blueprint.register(self, options, first_registration)
File "/usr/local/lib/python3.9/site-packages/flask/blueprints.py", line 256, in register
deferred(state)
File "/usr/local/lib/python3.9/site-packages/flask/blueprints.py", line 294, in <lambda>
self.record(lambda s: s.add_url_rule(rule, endpoint, view_func, **options))
File "/usr/local/lib/python3.9/site-packages/flask_restx/api.py", line 809, in _blueprint_setup_add_url_rule_patch
blueprint_setup.app.add_url_rule(
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 98, in wrapper_func
return f(self, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1282, in add_url_rule
raise AssertionError(
AssertionError: View function mapping is overwriting an existing endpoint function: api.specs
``` | 1medium
|
Title: Voila and Jupyterlab notebook command interactions
Body: <!--
Welcome! Before creating a new issue please search for relevant issues and recreate the issue in a fresh environment.
-->
Thank you for the work behind Voila. It is the perfect tool for the demos I had the opportunity to show in the past years. I am however facing an issue in a trick I use to control the flow of a demo.
## Description
<!--Describe the bug clearly and concisely. Include screenshots/gifs if possible-->
I use an ipywidget button callback to programmatically drive the execution of a notebook. For example, pressing a button executes a given number of cells below the one that created the button. More information about notebook commands can be found [here](https://jupyterlab.readthedocs.io/en/latest/user/commands.html).
The approach works in Jupyterlab, but not when rendered with Voila.
## Reproduce
<!--Describe step-by-step instructions to reproduce the behavior-->
Cell 1
```python
import ipywidgets as widgets
from IPython.display import display
from ipylab import JupyterFrontEnd

button = widgets.Button(description="Click Me!")
output = widgets.Output()
app = JupyterFrontEnd()
state = False
clicked = False
display(button, output)


def on_button_clicked(b):
    global state, clicked
    clicked = True
    state = not state
    with output:
        print("Button clicked.", state)
    app.commands.execute('notebook:move-cursor-down')
    app.commands.execute('notebook:run-cell-and-select-next')


button.on_click(on_button_clicked)
```
Cell 2
```python
if clicked:
    with output:
        print("exec'ed", state)
```
Execute the first cell and press the button a few times; you should get the following output:
```
Button clicked. True
exec'ed True
Button clicked. False
exec'ed False
Button clicked. True
exec'ed True
Button clicked. False
exec'ed False
```
If rendered in Voila, I obtain the following:
```
Button clicked. True
Button clicked. False
Button clicked. True
Button clicked. False
```
The callback is executed, but the notebook command has no effect.
Instead, the `clicked` global simply causes the body of the cells I want to control with the button to be skipped.
<!--Describe how you diagnosed the issue -->
## Expected behavior
<!--Describe what you expected to happen-->
I would like the same behavior in Voila as the one observed in the notebook, so that pressing the button triggers the execution of the cell below.
I can't tell whether this is a bug or a feature request, nor whether this is technically achievable when using Voila.
## Context
<!--Complete the following for context, and add any other relevant context-->
I couldn't spot anything useful in the context.
- voila version 0.5.8
- Operating System and version: Debian trixie/sid
- Browser and version: Chrome 131.0.6778.69
<details><summary>Troubleshoot Output</summary>
<pre>
$PATH:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/bin
/usr/local/bin
/usr/bin
/bin
/usr/local/games
/usr/games
sys.path:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/bin
/usr/lib/python311.zip
/usr/lib/python3.11
/usr/lib/python3.11/lib-dynload
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/lib/python3.11/site-packages
sys.executable:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/bin/python
sys.version:
3.11.9 (main, Apr 10 2024, 13:16:36) [GCC 13.2.0]
platform.platform():
Linux-6.11.5-amd64-x86_64-with-glibc2.40
which -a jupyter:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/bin/jupyter
pip list:
Package Version
--------------------------------- --------------
anyio 4.6.2.post1
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 2.4.1
async-lru 2.0.4
attrs 24.2.0
babel 2.16.0
bap 1.3.1
beautifulsoup4 4.12.3
bleach 6.2.0
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.4.0
comm 0.2.2
contourpy 1.3.1
cycler 0.12.1
debugpy 1.8.8
decorator 5.1.1
defusedxml 0.7.1
executing 2.1.0
fastjsonschema 2.20.0
fonttools 4.55.0
fqdn 1.5.1
freetype-py 2.5.1
h11 0.14.0
hsluv 5.0.4
httpcore 1.0.7
httpx 0.27.2
idna 3.10
ipykernel 6.29.5
ipylab 1.0.0
ipympl 0.9.4
ipython 8.29.0
ipython-genutils 0.2.0
ipywidgets 8.1.5
isoduration 20.11.0
jedi 0.19.2
Jinja2 3.1.4
json5 0.9.28
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
jupyter_client 8.6.3
jupyter_contrib_core 0.4.2
jupyter_contrib_nbextensions 0.7.0
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-highlight-selected-word 0.2.0
jupyter-lsp 2.2.5
jupyter_nbextensions_configurator 0.6.4
jupyter_server 2.14.2
jupyter_server_terminals 0.5.3
jupyterlab 4.2.6
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
jupyterlab_widgets 3.0.13
kiwisolver 1.4.7
lief 0.15.1
lxml 5.3.0
MarkupSafe 3.0.2
matplotlib 3.9.2
matplotlib-inline 0.1.7
mistune 3.0.2
nbclient 0.10.0
nbconvert 7.16.4
nbformat 5.10.4
nest-asyncio 1.6.0
networkx 3.4.2
notebook 7.2.2
notebook_shim 0.2.4
numpy 2.1.3
overrides 7.7.0
packaging 24.2
pandocfilters 1.5.1
parso 0.8.4
pexpect 4.9.0
pillow 11.0.0
pip 24.2
platformdirs 4.3.6
prometheus_client 0.21.0
prompt_toolkit 3.0.48
psutil 6.1.0
ptyprocess 0.7.0
pure_eval 0.2.3
pycparser 2.22
Pygments 2.18.0
pyparsing 3.2.0
pypower 2.3.1
PyQt5 5.15.11
PyQt5-Qt5 5.15.15
PyQt5_sip 12.15.0
PyQtWebEngine 5.15.7
PyQtWebEngine-Qt5 5.15.15
python-dateutil 2.9.0.post0
python-json-logger 2.0.7
PyYAML 6.0.2
pyzmq 26.2.0
QDarkStyle 3.0.3
QtPy 2.4.2
referencing 0.35.1
requests 2.32.3
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.21.0
Send2Trash 1.8.3
setuptools 75.5.0
six 1.16.0
sniffio 1.3.1
soupsieve 2.6
stack-data 0.6.3
tabulate 0.9.0
tenacity 9.0.0
terminado 0.18.1
tinycss2 1.4.0
tornado 6.4.1
traitlets 5.14.3
types-python-dateutil 2.9.0.20241003
typing_extensions 4.12.2
uri-template 1.3.0
urllib3 2.2.3
vispy 0.11.0
voila 0.5.8
wcwidth 0.2.13
webcolors 24.11.1
webencodings 0.5.1
websocket-client 1.8.0
websockets 14.1
wheel 0.44.0
widgetsnbextension 4.0.13
</pre>
</details>
<details><summary>Command Line Output</summary>
<pre>
[Voila] Looking for voila in /etc/jupyter
[Voila] Looking for voila in /usr/local/etc/jupyter
[Voila] Looking for voila in ${HOME}/.jupyter
[Voila] Looking for voila in ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/etc/jupyter
[Voila] Looking for voila in /shared/Work/Projects/2024.09.25.Demo_Olivier_Flous/demo_wbc
[Voila] Loaded config file: /shared/Work/Projects/2024.09.25.Demo_Olivier_Flous/demo_wbc/voila.json
[Voila] using template: lab
[Voila] template paths:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates/lab
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates/lab
/usr/share/jupyter/nbconvert/templates/lab
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates/base
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates/base
/usr/share/jupyter/nbconvert/templates/base
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates
${HOME}/.local/share/jupyter
${HOME}/.local/share/jupyter/voila/templates
${HOME}/.local/share/jupyter/nbconvert/templates
/usr/local/share/jupyter
/usr/local/share/jupyter/voila/templates
/usr/local/share/jupyter/nbconvert/templates
/usr/share/jupyter
/usr/share/jupyter/voila/templates
/usr/share/jupyter/nbconvert/templates
[Voila] static paths:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates/lab/static
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates/lab/static
${HOME}/.local/share/jupyter/voila/templates/lab/static
${HOME}/.local/share/jupyter/nbconvert/templates/lab/static
/usr/local/share/jupyter/voila/templates/lab/static
/usr/local/share/jupyter/nbconvert/templates/lab/static
/usr/share/jupyter/voila/templates/lab/static
/usr/share/jupyter/nbconvert/templates/lab/static
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates/base/static
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates/base/static
${HOME}/.local/share/jupyter/voila/templates/base/static
${HOME}/.local/share/jupyter/nbconvert/templates/base/static
/usr/local/share/jupyter/voila/templates/base/static
/usr/local/share/jupyter/nbconvert/templates/base/static
/usr/share/jupyter/voila/templates/base/static
/usr/share/jupyter/nbconvert/templates/base/static
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/lib/python3.11/site-packages/jupyter_server/static
[Voila] Using /tmp to store connection files
[Voila] Storing connection files in /tmp/voila_g1k2qe3t.
[Voila] Serving static files from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/lib/python3.11/site-packages/voila/static.
[Voila] serving directory: '/shared/Work/Projects/2024.09.25.Demo_Olivier_Flous/demo_wbc'
[Voila] Voilà is running at:
http://localhost:8866/
[Voila] WARNING | Clearing invalid/expired login cookie username-localhost-8866
[Voila] Generating new user for token-authenticated request: 902b9ae8875a4958a33ee85425f4d1d5
[Voila] Paths used for configuration of page_config:
/etc/jupyter/labconfig/page_config.json
[Voila] Paths used for configuration of page_config:
/usr/local/etc/jupyter/labconfig/page_config.json
[Voila] Paths used for configuration of page_config:
${HOME}/.jupyter/labconfig/page_config.json
[Voila] Paths used for configuration of page_config:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/etc/jupyter/labconfig/page_config.json
[Voila] Using contents: services/contents
[Voila] Path jupyterlab_pygments/static/remoteEntry.5cbb9d2323598fbda535.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyterlab_pygments/static/remoteEntry.5cbb9d2323598fbda535.js
[Voila] Path ipylab/static/remoteEntry.1c9b77c557d03a2498f4.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/ipylab/static/remoteEntry.1c9b77c557d03a2498f4.js
[Voila] Path @jupyter-notebook/lab-extension/static/remoteEntry.04dfa589925e7e7c6a3d.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/remoteEntry.04dfa589925e7e7c6a3d.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/remoteEntry.e4ff09401a2f575928c0.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/remoteEntry.e4ff09401a2f575928c0.js
[Voila] Path @voila-dashboards/widgets-manager8/static/remoteEntry.958dac8c7410b5fcc9ee.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/remoteEntry.958dac8c7410b5fcc9ee.js
[Voila] Path jupyter-matplotlib/static/remoteEntry.a0518cb14ef99e994963.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyter-matplotlib/static/remoteEntry.a0518cb14ef99e994963.js
404 GET /favicon.ico (::1) 0.52ms
[Voila] Path jupyterlab_pygments/static/747.67662283a5707eeb4d4c.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyterlab_pygments/static/747.67662283a5707eeb4d4c.js
[Voila] Path jupyterlab_pygments/static/568.1e2faa2ba0bbe59c4780.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyterlab_pygments/static/568.1e2faa2ba0bbe59c4780.js
[Voila] Path @voila-dashboards/widgets-manager8/static/651.d9c6fa52270ea21fdf9e.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/651.d9c6fa52270ea21fdf9e.js
[Voila] Path @voila-dashboards/widgets-manager8/static/264.95d855dc9ed80b79c78e.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/264.95d855dc9ed80b79c78e.js
[Voila] Path jupyter-matplotlib/static/480.18f23d468bae372d1c77.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyter-matplotlib/static/480.18f23d468bae372d1c77.js
[Voila] Path ipylab/static/480.16044a8abb039e4c2a69.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/ipylab/static/480.16044a8abb039e4c2a69.js
[Voila] Path ipylab/static/78.bae6a35721d5e7309228.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/ipylab/static/78.bae6a35721d5e7309228.js
[Voila] Path @jupyter-notebook/lab-extension/static/928.bf5955f09ff1e05edfbb.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/928.bf5955f09ff1e05edfbb.js
[Voila] Path @jupyter-notebook/lab-extension/static/42.33f638f0a4239bed9676.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/42.33f638f0a4239bed9676.js
[Voila] Path @jupyter-notebook/lab-extension/static/568.3dd58d88e32a98358776.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/568.3dd58d88e32a98358776.js
[Voila] Path @jupyter-notebook/lab-extension/static/93.eae3497dd223d842d198.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/93.eae3497dd223d842d198.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/651.fe40a967a60b543cf15c.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/651.fe40a967a60b543cf15c.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/420.063e2ee9f71033206b1f.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/420.063e2ee9f71033206b1f.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/439.33696bc45fbd403becbb.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/439.33696bc45fbd403becbb.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/327.8166aeb81cf1531ca240.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/327.8166aeb81cf1531ca240.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/722.3fefeac9cae358348cbc.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/722.3fefeac9cae358348cbc.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/446.bf169bd3821a9ba1aa62.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/446.bf169bd3821a9ba1aa62.js
[Voila] Path @voila-dashboards/widgets-manager8/static/883.bbe30bf61f3074749dda.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/883.bbe30bf61f3074749dda.js
[Voila] Path @voila-dashboards/widgets-manager8/static/324.aa49bd5aec16839cc9e0.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/324.aa49bd5aec16839cc9e0.js
[Voila] Path @voila-dashboards/widgets-manager8/static/603.9866b69497a4a124e57f.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/603.9866b69497a4a124e57f.js
[Voila] Path @voila-dashboards/widgets-manager8/static/496.45f50ff8111515264be7.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/496.45f50ff8111515264be7.js
404 GET /api/kernels?1732034622770 (::1) 0.34ms
</pre>
</details>
<details><summary>Browser Output</summary>
<pre>
Connection lost, reconnecting in 0 seconds.
_reconnect @ :8888/static/notebook/3676.bundle.js:1
reconnect @ :8888/static/notebook/3676.bundle.js:1
restart @ :8888/static/notebook/3676.bundle.js:1
await in restart
restartKernel @ :8888/static/notebook/9605.bundle.js:2
restart @ :8888/static/notebook/9605.bundle.js:2
await in restart
execute @ :8888/static/notebook/1962.bundle.js:1
execute @ :8888/static/notebook/3301.bundle.js:1
onClick @ :8888/static/notebook/7506.bundle.js:703
Yo.r @ :8888/static/notebook/7506.bundle.js:703
Oe @ :8888/static/notebook/1542.bundle.js:2
Be @ :8888/static/notebook/1542.bundle.js:2
(anonymous) @ :8888/static/notebook/1542.bundle.js:2
Ir @ :8888/static/notebook/1542.bundle.js:2
Ur @ :8888/static/notebook/1542.bundle.js:2
(anonymous) @ :8888/static/notebook/1542.bundle.js:2
cs @ :8888/static/notebook/1542.bundle.js:2
Le @ :8888/static/notebook/1542.bundle.js:2
Qr @ :8888/static/notebook/1542.bundle.js:2
qn @ :8888/static/notebook/1542.bundle.js:2
$n @ :8888/static/notebook/1542.bundle.js:2Understand this warningAI
Scrolling to a new item is requested.</pre>
</details>
### If using JupyterLab
- JupyterLab version: v4.2.6
<details><summary>Installed Labextensions</summary>
<pre>
JupyterLab v4.2.6
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions
jupyterlab_pygments v0.3.0 enabled OK (python, jupyterlab_pygments)
jupyter-matplotlib v0.11.4 enabled OK
ipylab v1.0.0 enabled OK (python, ipylab)
@voila-dashboards/jupyterlab-preview v2.3.8 enabled OK (python, voila)
@jupyter-notebook/lab-extension v7.2.2 enabled OK
@jupyter-widgets/jupyterlab-manager v5.0.13 enabled OK (python, jupyterlab_widgets)</pre>
</details>
| 1medium
|
Title: UserWarning "this overload of nonzero is deprecated" when using with PyTorch 1.6
Body: Hi,
Not really a big deal - I just started getting a deprecation warning after updating to PyTorch 1.6:
```
/opt/conda/lib/python3.7/site-packages/pytorch_metric_learning/utils/loss_and_miner_utils.py:79: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/python_arg_parser.cpp:766.)
a1_idx = matches.nonzero()[:, 0].flatten()
```
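For reference, the explicit signature the warning points at would look roughly like this for the quoted line (a sketch, not the library's actual fix; `matches` is the boolean tensor from the library code above):
```python
import torch

# Same computation with the non-deprecated, explicit as_tuple signature.
a1_idx = torch.nonzero(matches, as_tuple=False)[:, 0].flatten()
```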
I use pytorch-metric-learning==0.9.89 | 1medium
|
Title: Enhancing the validate_argument_spec documentation
Body: ### Summary
The documentation lacks clarity and information about the options.
For example, `default` and `required` are mutually exclusive, but this is only listed in the module spec documentation (https://docs.ansible.com/ansible/latest/dev_guide/developing_program_flow_modules.html#argument-spec) and not in the main documentation (https://docs.ansible.com/ansible/latest/collections/ansible/builtin/validate_argument_spec_module.html).
I also think we currently have two sets of documentation for two different usages linked together: the spec seems more oriented towards module developers, while the module is targeted at validating role inputs.
IMO everything we can use in a meta/argument_specs.yaml should be described in the module documentation.
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/validate_argument_spec.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.8]
config file = /home/myuser/myproject/ansible.cfg
configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/myuser/.local/pipx/venvs/ansible-core/lib/python3.12/site-packages/ansible
ansible collection location = /home/myuser/myproject/collections
executable location = /home/gaupee/.local/bin/ansible
python version = 3.12.7 (main, Oct 3 2024, 15:15:22) [GCC 14.2.0] (/home/gaupee/.local/pipx/venvs/ansible-core/bin/python)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/myuser/myproject/ansible.cfg) = True
COLLECTIONS_PATHS(/home/myuser/myproject/ansible.cfg) = ['/home/myuser/myproject/collections']
CONFIG_FILE() = /home/myuser/myproject/ansible.cfg
DEFAULT_FORKS(/home/myuser/myproject/ansible.cfg) = 20
DEFAULT_ROLES_PATH(/home/myuser/myproject/ansible.cfg) = ['/home/myuser/myproject/roles']
DEFAULT_STDOUT_CALLBACK(/home/myuser/myproject/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/myuser/myproject/ansible.cfg) = /home/myuser/myproject/.vault_pass
EDITOR(env: EDITOR) = nvim
PAGER(env: PAGER) = less
```
### OS / Environment
Debian 12
### Additional Information
I'm available to help with this issue; please understand that since I'm asking about documentation, I don't have a lot of experience with this module and mistakes could happen.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | 1medium
|
Title: When fine-tuning with MUGE, the Image2Text Acc in the training log does not match the R@1 recall metric at evaluation time?
Body: The top-1 recall at evaluation time does not match the acc in the training log; is there a difference in how the two are computed? | 1medium
|
Title: Fix Ivy Failing Test: jax - searching.nonzero
Body: | 1medium
|
Title: add_graph raises RuntimeError when parsing constant node
Body: Hello
I got "RuntimeError: VariableType::ID() not implemented" when parsing constant nodes in the computation graph.
code to reproduce the RuntimeError:
```python
import torch
from torch import nn
from tensorboardX import SummaryWriter


class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x * 2


input = (torch.zeros(1, 2, 3),)
model = SimpleModel()
with SummaryWriter(comment='test') as w:
    w.add_graph(model, input)
```
Stack:
File "...tensorboardX\writer.py", line 419, in add_graph
self.file_writer.add_graph(graph(model, input_to_model, verbose))
File "...tensorboardX\graph.py", line 85, in graph
list_of_nodes = parse(graph)
File "...tensorboardX\graph.py", line 28, in parse
attrs = {k: n[k] for k in n.attributeNames()}
File "...tensorboardX\graph.py", line 28, in <dictcomp>
attrs = {k: n[k] for k in n.attributeNames()}
File "...torch\onnx\utils.py", line 444, in _node_getitem
return getattr(self, sel)(k)
RuntimeError: VariableType::ID() not implemented
The stack shows that calling `Constant["value"]` will give `RuntimeError`
str(n) = "%1 : Dynamic = onnx::Constant\[value={2}\](), scope: SimpleModel"
n["value"] ==> RuntimeError
So is this a bug, or an unimplemented feature of ONNX?
My temporary workaround for this is to set `attrs = str(n)` if `{k: n[k] for k in n.attributeNames()}` raises `RuntimeError`.
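Concretely, the workaround looks roughly like this inside `parse()` in tensorboardX/graph.py (sketch; `n` is the graph node being iterated):
```python
try:
    attrs = {k: n[k] for k in n.attributeNames()}
except RuntimeError:
    # Constant nodes raise "VariableType::ID() not implemented" when indexed,
    # so fall back to their string representation.
    attrs = str(n)
```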
pytorch = 0.4.0
tensorflow = 1.8.0
tensorboardX = 1.2
| 1medium
|
Title: alpn custom config
Body: Greetings. I set the ALPN in the panel, but it is not carried over with a custom config.
I tested the reality and vless ws tls configs. | 1medium
|
Title: Cannot install package `d2l` due to failure of collecting `matplotlib` version 3.4
Body: Dear all,
I am on Macbook Pro Early 2011 and macOS 10.13.6. I was trying to install the `d2l` package and it outputs the following. I have `matplotlib` version 3.6.2 and `matplotlib-inline` version 0.1.6 installed on my machine. The Python version I use is 3.11.2, which I think is the latest(?).
```
➜ ~ python3 -m pip install -U d2l
Defaulting to user installation because normal site-packages is not writeable
Collecting d2l
Using cached d2l-0.17.6-py3-none-any.whl (112 kB)
Collecting jupyter==1.0.0
Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Collecting d2l
Using cached d2l-0.17.5-py3-none-any.whl (82 kB)
Using cached d2l-0.17.4-py3-none-any.whl (82 kB)
Requirement already satisfied: numpy==1.22.2 in ./Library/Python/3.11/lib/python/site-packages (from d2l) (1.22.2)
Collecting matplotlib==3.4
Using cached matplotlib-3.4.0.tar.gz (37.1 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [84 lines of output]
/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/dist.py:286: SetuptoolsDeprecationWarning: The namespace_packages parameter is deprecated, consider using implicit namespaces instead (PEP 420).
warnings.warn(msg, SetuptoolsDeprecationWarning)
Edit setup.cfg to change the build options; suppress output with --quiet.
BUILDING MATPLOTLIB
matplotlib: yes [3.4.0]
python: yes [3.11.2 (main, Feb 10 2023, 08:25:48) [Clang 9.1.0
(clang-902.0.39.2)]]
platform: yes [darwin]
tests: no [skipping due to configuration]
macosx: yes [installing]
running egg_info
creating /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info
writing /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/PKG-INFO
writing dependency_links to /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/dependency_links.txt
writing namespace_packages to /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/namespace_packages.txt
writing requirements to /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/requires.txt
writing top-level names to /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/top_level.txt
writing manifest file '/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/SOURCES.txt'
/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py:643: SetuptoolsDeprecationWarning: Custom 'build_py' does not implement 'get_data_files_without_manifest'.
Please extend command classes from setuptools instead of distutils.
warnings.warn(
Python(31612,0x7fff8c346380) malloc: *** mach_vm_map(size=18446744072367222784) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
init_dgelsd failed init
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setup.py", line 258, in <module>
setup( # Finally, pass this all along to distutils to do the heavy lifting.
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
self.run_command(cmd)
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py", line 308, in run
self.find_sources()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py", line 316, in find_sources
mm.run()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py", line 560, in run
self.add_defaults()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py", line 597, in add_defaults
sdist.add_defaults(self)
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/sdist.py", line 106, in add_defaults
super().add_defaults()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/command/sdist.py", line 252, in add_defaults
self._add_defaults_ext()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/command/sdist.py", line 336, in _add_defaults_ext
build_ext = self.get_finalized_command('build_ext')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/cmd.py", line 306, in get_finalized_command
cmd_obj.ensure_finalized()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/cmd.py", line 109, in ensure_finalized
self.finalize_options()
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setup.py", line 90, in finalize_options
self.distribution.ext_modules[:] = [
^
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setup.py", line 90, in <listcomp>
self.distribution.ext_modules[:] = [
^
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setupext.py", line 383, in get_extensions
add_numpy_flags(ext)
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setupext.py", line 498, in add_numpy_flags
import numpy as np
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/numpy/__init__.py", line 380, in <module>
raise RuntimeError(msg)
RuntimeError: Polyfit sanity test emitted a warning, most likely due to using a buggy Accelerate backend.
If you compiled yourself, more information is available at:
https://numpy.org/doc/stable/user/building.html#accelerated-blas-lapack-libraries
Otherwise report this to the vendor that provided NumPy.
RankWarning: Polyfit may be poorly conditioned
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
``` | 1medium
|
Title: How to use AnoGAN?
Body: Hello Sir,
I am interested in anomaly detection tasks.
My task is to detect anomalies in images.
So I tried to find an AnoGAN model and found some source code on another GitHub site.
Surprisingly, PyOD already has AnoGAN.
(But I think PyOD's input type is feature-vector-based, while AnoGAN's input type is image-based.)
If you don't mind, could you share sample code for AnoGAN in PyOD?
Can PyOD's AnoGAN take image-based input directly?
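For reference, a minimal sketch of how I would expect to call it, flattening each image into a feature vector first since PyOD detectors work on 2-D arrays (the constructor arguments and the random stand-in data are assumptions):
```python
import numpy as np
from pyod.models.anogan import AnoGAN

# stand-in for a batch of flattened 28x28 images, shape (n_samples, n_features)
X = np.random.rand(200, 28 * 28).astype("float32")

clf = AnoGAN(contamination=0.05)  # contamination ratio is an assumption
clf.fit(X)

scores = clf.decision_scores_  # raw anomaly scores on the training data
labels = clf.labels_           # 0 = inlier, 1 = outlier
```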
Thanks,
Edward Cho. | 1medium
|
Title: [Bug]: 'Stock' object has no attribute 'df'
Body: ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I get the following error and cannot check whether the data is updated:
`'Stock' object has no attribute 'df'`
### Expected Behavior
I should not get any error and I should be able to check if data is updated
### Steps To Reproduce
```markdown
Python 3.7 64bit
Alpaca-trade-api 2.0.0
```
### Anything else?
The bot was running OK with alpaca-trade-api `__version__ = '0.42'`
The error is present after upgrade to Alpaca-trade-api 2.0.0 | 1medium
|
Title: No way to manually serialize objects?
Body: If I write my own route, that for example creates a new object, sometimes I would want to send that object back to the client as JSON. It's not possible to return a SQLAlchemy object - I get:
```
raise TypeError(repr(o) + " is not JSON serializable")
```
Is there an easy way to pass Potion an object and have it return the serialised form to the client?
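In the meantime, a generic fallback (plain SQLAlchemy, not Potion-specific, so relationships and Potion's own field formatting are not applied) is to build a dict from the mapped columns and let Flask serialise that — a minimal sketch:
```python
from flask import jsonify

def to_dict(obj):
    # build a plain dict from the object's mapped columns only
    return {c.key: getattr(obj, c.key) for c in obj.__table__.columns}

# inside a custom route, after creating `new_obj` (hypothetical name):
#     return jsonify(to_dict(new_obj)), 201
```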
| 1medium
|
Title: Support hierarchical hyperparameter combinations
Body: /kind feature
**Describe the solution you'd like**
I'd like to be able to do hyperparameter tuning over a hierarchical hyperparameter space. Specifically, I'd like to be able to do something like this Optuna example:
https://github.com/optuna/optuna-examples/blob/main/sklearn/sklearn_simple.py#L24-L32
Where first a particular classifier is chosen, and then relevant hyperparameters for the chosen classifier are selected. This might even go on further, with particular parameters for SGD vs Adam.
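A condensed sketch of that linked Optuna example, to make the shape of the conditional search space concrete (the hyperparameter ranges are illustrative):
```python
import optuna
from sklearn import datasets, ensemble, model_selection, svm

def objective(trial):
    X, y = datasets.load_iris(return_X_y=True)
    classifier_name = trial.suggest_categorical("classifier", ["SVC", "RandomForest"])
    if classifier_name == "SVC":
        # this hyperparameter only exists when SVC is chosen
        svc_c = trial.suggest_float("svc_c", 1e-10, 1e10, log=True)
        classifier = svm.SVC(C=svc_c, gamma="auto")
    else:
        rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32, log=True)
        classifier = ensemble.RandomForestClassifier(max_depth=rf_max_depth, n_estimators=10)
    return model_selection.cross_val_score(classifier, X, y, n_jobs=-1, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
```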
**Anything else you would like to add:**
Although Katib can use Optuna for hyperparameter suggestions, I didn't see a way get Katib to use Optuna features like the linked example.
---
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| 1medium
|
Title: Error 504 Gateway Time-out while executing a schedule
Body: ### Please confirm the following
- [x] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [x] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [x] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [x] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `[email protected]` instead.)
### Bug Summary
On the new AWX user interface, after creating a schedule with a valid rule, clicking the "Finish" button at step 4 results in a 504 **Gateway Time-out** error after 1 or 2 minutes. Moreover, when we repeat the process, we are disconnected from AWX.
### AWX version
24.6.1
### Select the relevant components
- [x] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [x] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
1 - After connection, we have an information panel "A tech preview of the new AWX user interface can be found here." Click on **here** to access to the new interface.
2 - Click on "Schedules" on left panel.
3 - On step 1, choose a name for schedule name, for instance "Test schedule" and a time zone, for instance **Europe/Paris**
4 - On step 2, define a valid rule, for instance "DTSTART;TZID=Europe/Paris:20250213T081500
RRULE:FREQ=DAILY;INTERVAL=1;WKST=SU;BYSETPOS=3;BYHOUR=16;BYMINUTE=0;BYSECOND=0"
5 - No need to define any exception on step 3
6 - On step 4, click on button "Finish" to launch the schedule
### Expected results
The schedule runs and ends after a few seconds.
### Actual results
After approximately 1 or 2 minutes, a "Gateway Time-out" error occurs.
When we repeat the process, we notice the same error and, after several attempts, we are disconnected from AWX.
Sometimes, when we repeat the process, we notice another error: "This schedule will never run. If you have defined exceptions it is likely that the exceptions cancel out all the rules defined in the rules step."
### Additional information
_No response_ | 1medium
|
Title: UserWarning: ``square=True`` ignored in clustermap
Body: Hi,
Whenever I run
```
import seaborn as sb

sb.clustermap(
    master_table_top_10_pearson.transpose(),
    square=True
)
```
for my DataFrame which looks like a perfectly normal DataFrame

I get the warning `UserWarning: square=True ignored in clustermap` (raised via `warnings.warn(msg)`). However, I need square cells in my plot, and I cannot see a reason why the parameter gets ignored.
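The best workaround I have found so far (a minimal sketch, not necessarily the intended approach) is to force the aspect ratio on the heatmap axes after the grid is created:
```python
import seaborn as sb

g = sb.clustermap(master_table_top_10_pearson.transpose())
# force square cells, since square=True is ignored by clustermap
g.ax_heatmap.set_aspect("equal")
```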
Thank you very much! | 1medium
|
Title: Where can I get trained models?
Body: Hi, everyone,
I want some trained models (VGG, Inception, AlexNet) for feature extraction, but I cannot find any. Because of my GTX 980's memory limitation, retraining a VGG model on ImageNet is impossible for me. I'll be very grateful if someone could offer some trained models.
| 1medium
|
Title: Cannot install on windows 10
Body: I am having trouble installing on windows 10, I get the error "The system cannot find the path specified: 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\PlatformSDK\\lib".
Does it have to use Visual Studio 14? How would I be able to change it to use the version of Visual Studio I have (which is 15)?
Here is the console output:
 | 1medium
|
Title: "Choose your search engine" google chrome popup
Body: Since this week this popup appears, which seems to be very similar to the popup about privacy which could be solved with this argument: --add_argument('--disable-features=PrivacySandboxSettings4').
Maybe someone has a clue how to get past this one. The html tag is "search-engine-choice-app".
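A minimal sketch of the first thing I would try — the `--disable-search-engine-choice-screen` switch has been suggested elsewhere for this screen, but I have not verified it here:
```python
from selenium import webdriver

options = webdriver.ChromeOptions()
# reported to suppress the "Choose your search engine" screen (unverified assumption)
options.add_argument('--disable-search-engine-choice-screen')
driver = webdriver.Chrome(options=options)
```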
Thanks in advance


| 1medium
|
Title: cannot load localhost:8000
Body: whenever I run `docker-compose up -d` everything works, but when I go to `localhost:8000` nginx returns `499` then `504`, and it does this every time.
| 1medium
|
Title: Handle Exception on mangum
Body: Thank you for creating great library!!
I found the not good behavior when calling `exception_handler` with `Exception` on `FastAPI`
I defined `exception_hanlder` for `Exception` which returns `JSONResponse`.
```
@app.exception_handler(Exception)
def all_exception_handler(_: Any, error: Exception):
return JSONResponse(status_code=500, content={"message": error.args, "code": 500})
```
I want to get a response with status code 500 and the content above.
FastAPI (Starlette) raises the `Exception`.
Mangum doesn't handle the exception and the Lambda dies, so I can't get the expected response from API Gateway.
However, `Uvicorn` handles the exception and returns the expected response.
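A possible stopgap (a minimal sketch, not Mangum-specific) is a catch-all HTTP middleware so the exception never reaches Mangum unhandled:
```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

@app.middleware("http")
async def catch_all_exceptions(request: Request, call_next):
    try:
        return await call_next(request)
    except Exception as error:
        # mirror the exception handler above so API Gateway still gets a 500 JSON body
        return JSONResponse(status_code=500, content={"message": error.args, "code": 500})
```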
Could you change Mangum to return the expected response?
If you need a PR, I can do it.
Thank you.
| 1medium
|
Title: [BUG] Cannot install from source
Body: **Describe the bug**
```
$ pip install -vvv --no-build-isolation -e .
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/vic/workspace/AutoGPTQ/setup.py", line 111, in <module>
    local_arch_list = detect_local_sm_architectures()
  File "/home/vic/workspace/AutoGPTQ/setup.py", line 68, in detect_local_sm_architectures
    arch_list[-1] += '+PTX'
IndexError: list index out of range
```
**Hardware details**
24GB RAM, Intel CPU
**Software version**
python 3.8.5
| 1medium
|
Title: Run python -m unittest seq2seq.test.pipeline_test on win7
Body: when i run python -m unittest seq2seq.test.pipeline_test on win7 after 1 step,there is an "Permission denied" error, is the "ResourceWarning: unclosed file <_io.BufferedRandom name=6>" warning case this error?
2017-08-31 22:07:39.700066: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow li
brary wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
INFO:tensorflow:Saving checkpoints for 1 into C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\model.ckpt.
INFO:tensorflow:Prediction followed by Target @ Step 1
====================================================================================================
SEQUENCE_END a a a 泣
c c c c SEQUENCE_END
c c c c c
泣 泣 泣 泣 SEQUENCE_END
====================================================================================================
INFO:tensorflow:loss = 1.94618, step = 1
INFO:tensorflow:Performing full trace on next step.
INFO:tensorflow:Captured full trace at step 11
INFO:tensorflow:Saved run_metadata to C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\run_meta
INFO:tensorflow:Saved timeline to C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\timeline.json
WARNING:tensorflow:From E:\git\seq2seq\seq2seq\training\hooks.py:133: write_op_log (from tensorflow.contrib.tfprof.tfprof_logger) is deprecated and wi
ll be removed after 2018-01-01.
Instructions for updating:
Use `tf.profiler.write_op_log. go/tfprof`
INFO:tensorflow:Saved op log to C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr
INFO:tensorflow:Saving checkpoints for 50 into C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\model.ckpt.
INFO:tensorflow:Loss for final step: 1.93593.
INFO:tensorflow:Evaluating model now.
INFO:tensorflow:Creating AttentionSeq2Seq in mode=eval
INFO:tensorflow:
AttentionSeq2Seq:
attention.class: AttentionLayerBahdanau
attention.params: {num_units: 10}
bridge.class: seq2seq.models.bridges.ZeroBridge
bridge.params: {}
decoder.class: seq2seq.decoders.AttentionDecoder
decoder.params:
rnn_cell:
cell_class: GRUCell
cell_params: {num_units: 8}
embedding.dim: 10
embedding.init_scale: 0.04
embedding.share: false
encoder.class: seq2seq.encoders.BidirectionalRNNEncoder
encoder.params:
rnn_cell:
cell_class: GRUCell
cell_params: {num_units: 8}
inference.beam_search.beam_width: 0
inference.beam_search.choose_successors_fn: choose_top_k
inference.beam_search.length_penalty_weight: 0.0
optimizer.clip_embed_gradients: 0.1
optimizer.clip_gradients: 5.0
optimizer.learning_rate: 0.0001
optimizer.lr_decay_rate: 0.99
optimizer.lr_decay_steps: 100
optimizer.lr_decay_type: ''
optimizer.lr_min_learning_rate: 1.0e-12
optimizer.lr_staircase: false
optimizer.lr_start_decay_at: 0
optimizer.lr_stop_decay_at: 2147483647
optimizer.name: Adam
optimizer.params: {}
optimizer.sync_replicas: 0
optimizer.sync_replicas_to_aggregate: 0
source.max_seq_len: 50
source.reverse: true
target.max_seq_len: 50
vocab_source: C:\Users\ADMINI~1\AppData\Local\Temp\tmpx283xxm9
vocab_target: C:\Users\ADMINI~1\AppData\Local\Temp\tmpyhe62_cm
INFO:tensorflow:Creating vocabulary lookup table of size 7
INFO:tensorflow:Creating vocabulary lookup table of size 7
INFO:tensorflow:Creating BidirectionalRNNEncoder in mode=eval
INFO:tensorflow:
BidirectionalRNNEncoder:
init_scale: 0.04
rnn_cell:
cell_class: GRUCell
cell_params: {num_units: 8}
dropout_input_keep_prob: 1.0
dropout_output_keep_prob: 1.0
num_layers: 1
residual_combiner: add
residual_connections: false
residual_dense: false
INFO:tensorflow:Creating AttentionLayerBahdanau in mode=eval
INFO:tensorflow:
AttentionLayerBahdanau: {num_units: 10}
INFO:tensorflow:Creating AttentionDecoder in mode=eval
INFO:tensorflow:
AttentionDecoder:
init_scale: 0.04
max_decode_length: 100
rnn_cell:
cell_class: GRUCell
cell_params: {num_units: 8}
dropout_input_keep_prob: 1.0
dropout_output_keep_prob: 1.0
num_layers: 1
residual_combiner: add
residual_connections: false
residual_dense: false
INFO:tensorflow:Creating ZeroBridge in mode=eval
INFO:tensorflow:
ZeroBridge: {}
INFO:tensorflow:Starting evaluation at 2017-08-31-14:09:07
INFO:tensorflow:Restoring parameters from C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\model.ckpt-50
2017-08-31 22:09:09.336193: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\kernels\queue_base.cc:303] _25_dev_input_fn/paralle
l_read_1/common_queue: Skipping cancelled dequeue attempt with queue not closed
sys:1: ResourceWarning: unclosed file <_io.BufferedRandom name=9>
sys:1: ResourceWarning: unclosed file <_io.BufferedRandom name=10>
2017-08-31 22:09:11.446314: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\framework\op_kernel.cc:1192] Unknown: PermissionErr
or: [Errno 13] Permission denied: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmpaq6c50c1'
EC:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=3>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=4>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=5>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=6>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=7>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=8>
outcome.errors.clear()
======================================================================
ERROR: test_train_infer (seq2seq.test.pipeline_test.PipelineTest)
Tests training and inference scripts.
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
return fn(*args)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1306, in _run_fn
status, run_metadata)
File "C:\Program Files\Anaconda3\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.UnknownError: PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmpaq
6c50c1'
[[Node: bleu/value = PyFunc[Tin=[DT_STRING, DT_STRING], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](b
leu/Identity, bleu/Identity_1)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\git\seq2seq\seq2seq\test\pipeline_test.py", line 148, in test_train_infer
train_script.main([])
File "E:\git\seq2seq\bin\train.py", line 272, in main
schedule=FLAGS.schedule)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_runner.py", line 209, in run
return _execute_schedule(experiment, schedule)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_runner.py", line 46, in _execute_schedule
return task()
File "E:\git\seq2seq\seq2seq\contrib\experiment.py", line 112, in continuous_train_and_eval
hooks=self._eval_hooks)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 296, in new_func
return func(*args, **kwargs)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 546, in evaluate
log_progress=log_progress)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 858, in _evaluate_model
config=self._session_config)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\evaluation.py", line 182, in _evaluate_once
session.run(eval_ops, feed_dict)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 518, in run
run_metadata=run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 862, in run
run_metadata=run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 818, in run
return self._sess.run(*args, **kwargs)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 972, in run
run_metadata=run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 818, in run
return self._sess.run(*args, **kwargs)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 895, in run
run_metadata_ptr)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1124, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
options, run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmpaq
6c50c1'
[[Node: bleu/value = PyFunc[Tin=[DT_STRING, DT_STRING], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](b
leu/Identity, bleu/Identity_1)]]
Caused by op 'bleu/value', defined at:
File "C:\Program Files\Anaconda3\lib\runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "C:\Program Files\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Program Files\Anaconda3\lib\unittest\__main__.py", line 18, in <module>
main(module=None)
File "C:\Program Files\Anaconda3\lib\unittest\main.py", line 94, in __init__
self.runTests()
File "C:\Program Files\Anaconda3\lib\unittest\main.py", line 255, in runTests
self.result = testRunner.run(self.test)
File "C:\Program Files\Anaconda3\lib\unittest\runner.py", line 176, in run
test(result)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 122, in run
test(result)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 122, in run
test(result)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 122, in run
test(result)
File "C:\Program Files\Anaconda3\lib\unittest\case.py", line 648, in __call__
return self.run(*args, **kwds)
File "C:\Program Files\Anaconda3\lib\unittest\case.py", line 600, in run
testMethod()
File "E:\git\seq2seq\seq2seq\test\pipeline_test.py", line 148, in test_train_infer
train_script.main([])
File "E:\git\seq2seq\bin\train.py", line 272, in main
schedule=FLAGS.schedule)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_runner.py", line 209, in run
return _execute_schedule(experiment, schedule)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_runner.py", line 46, in _execute_schedule
return task()
File "E:\git\seq2seq\seq2seq\contrib\experiment.py", line 112, in continuous_train_and_eval
hooks=self._eval_hooks)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 296, in new_func
return func(*args, **kwargs)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 546, in evaluate
log_progress=log_progress)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 832, in _evaluate_model
model_fn_results = self._get_eval_ops(features, labels, metrics)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1199, in _get_eval_ops
metrics, features, labels, model_fn_ops.predictions))
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 271, in _make_metrics_ops
result[name] = metric.create_metric_ops(features, labels, predictions)
File "E:\git\seq2seq\seq2seq\metrics\metric_specs.py", line 124, in create_metric_ops
name="value")
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\ops\script_ops.py", line 203, in py_func
input=inp, token=token, Tout=Tout, name=name)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_script_ops.py", line 36, in _py_func
name=name)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
op_def=op_def)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
UnknownError (see above for traceback): PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmpaq6c50c1'
[[Node: bleu/value = PyFunc[Tin=[DT_STRING, DT_STRING], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](b
leu/Identity, bleu/Identity_1)]]
----------------------------------------------------------------------
Ran 2 tests in 114.668s
FAILED (errors=1) | 2hard
|
Title: Bug Report: Alpha setting with Panel not working
Body: **Describe the bug**
When adding a plot with `mpf.make_addplot(df['SPY'], color='black', alpha=0.3, panel=0)`, the alpha setting is not reflected once a panel number is assigned. The line remains solid black.
**To Reproduce**
Steps to reproduce the behavior:
1. Add `panel_ratios` and `num_panels` to `mpf.plot`
2. Add `apds = [mpf.make_addplot(df['SPY'], color='black', alpha=0.3, panel=0)]`
3. Pass `addplot=apds` to `mpf.plot` via `**kwargs` (see the sketch below)
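A condensed sketch of those steps in one place (`df` is assumed to be an OHLCV DataFrame that also contains an `'SPY'` column):
```python
import mplfinance as mpf

apds = [mpf.make_addplot(df['SPY'], color='black', alpha=0.3, panel=0)]
mpf.plot(df, type='candle', num_panels=2, panel_ratios=(2, 1), addplot=apds)
```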
**Expected behavior**
Alpha setting to work
**Screenshots**
None | 1medium
|
Title: DML RETURNING omits other mapped cols due to bulk insert assumptions
Body:
### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/12327
```py
from __future__ import annotations
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import update
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
class Base(DeclarativeBase):
pass
class A(Base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(primary_key=True)
data: Mapped[str]
bs: Mapped[list[B]] = relationship("B")
class B(Base):
__tablename__ = "b"
id: Mapped[int] = mapped_column(primary_key=True)
a_id: Mapped[int] = mapped_column(ForeignKey("a.id"))
data: Mapped[str]
e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.create_all(e)
s = Session(e)
s.add(
A(data='a1', bs=[B(data='b2')])
)
s.flush()
result = s.execute(
update(A).values(data='foo').where(A.id == B.a_id).returning(A.data, B.a_id, B.data)
)
print(result.all())
```
renders:
```
UPDATE a SET data=%(data)s FROM b WHERE a.id = b.a_id RETURNING a.id, a.data
```
and fails
```
sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for column 'b.a_id'
``` | 2hard
|
Title: I am getting this type of error
Body: 2023-12-13 11:53:05,058 - crypto_trading_logger - INFO - Starting
bridge
bridge
hourtokeepscouthistory
hourtokeepscouthistory
scout_multiplier
scout_multiplier
scout_sleep_time
scout_sleep_time
api_key
api_key
api_secret_key
api_secret_key
tld
tld
current_coin
current_coin
strategy
strategy
sell_timeout
sell_timeout
buy_timeout
buy_timeout
Traceback (most recent call last):
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 203, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\socket.py", line 962, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
socket.gaierror: [Errno 11001] getaddrinfo failed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 790, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 491, in _make_request
raise new_e
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 1092, in _validate_conn
conn.connect()
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 611, in connect
self.sock = sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 210, in _new_conn
raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x000001E34914CCD0>: Failed to resolve 'api.binance.'com'' ([Errno 11001] getaddrinfo failed)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 844, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host="api.binance.'com'", port=443): Max retries exceeded with url: /api/v3/ping (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x000001E34914CCD0>: Failed to resolve 'api.binance.'com'' ([Errno 11001] getaddrinfo failed)"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "D:\binance-trade-bot\binance_trade_bot\__main__.py", line 5, in <module>
main()
File "D:\binance-trade-bot\binance_trade_bot\crypto_trading.py", line 18, in main
manager = BinanceAPIManager(config, db, logger)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\binance-trade-bot\binance_trade_bot\binance_api_manager.py", line 27, in __init__
self.binance_client = Client(
^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 132, in __init__
self.ping()
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 447, in ping
return self._get('ping', version=self.PRIVATE_API_VERSION)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 292, in _get
return self._request_api('get', path, signed, version, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 242, in _request_api
return self._request(method, uri, signed, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 236, in _request
self.response = getattr(self.session, method)(uri, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host="api.binance.'com'", port=443): Max retries exceeded with url: /api/v3/ping (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x000001E34914CCD0>: Failed to resolve 'api.binance.'com'' ([Errno 11001] getaddrinfo failed)")) | 1medium
|
Title: ValueError: Cannot convert a partially known TensorShape to a Tensor: (1, 0, ?)
Body: F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING: Logging before flag parsing goes to stderr.
W0808 21:46:41.206200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\model_utils.py:295: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
W0808 21:46:41.221700 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:858: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.
W0808 21:46:41.225200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:639: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
W0808 21:46:41.225200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:639: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.
W0808 21:46:41.225700 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:647: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.
W0808 21:46:41.255199 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\model_utils.py:27: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
W0808 21:46:41.957199 7844 lazy_loader.py:50]
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
2019-08-08 21:46:41.962200: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
I0808 21:46:41.963700 7844 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:CPU:0
W0808 21:46:41.964200 7844 cross_device_ops.py:1177] Not all devices in `tf.distribute.Strategy` are visible to TensorFlow.
W0808 21:46:41.964200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\model_utils.py:40: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.
I0808 21:46:41.964699 7844 model_utils.py:41] Use MirroredStrategy with 8 devices.
I0808 21:46:41.965199 7844 run_config.py:558] Initializing RunConfig with distribution strategies.
I0808 21:46:41.965199 7844 estimator_training.py:167] Not using Distribute Coordinator.
I0808 21:46:41.965199 7844 estimator.py:209] Using config: {'_model_dir': 'F:/kaggleData/GS_ROOT/exp/imdb/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 500, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
, '_keep_checkpoint_max': 0, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x0000000012010320>, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x000000000D96DB00>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None, '_tpu_config': TPUConfig(iterations_per_loop=500, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None, eval_training_input_configuration=2), '_cluster': None}
W0808 21:46:41.965700 7844 model_fn.py:630] Estimator's model_fn (<function get_model_fn.<locals>.model_fn at 0x0000000011EBC950>) includes params argument, but params are not passed to Estimator.
W0808 21:46:41.966200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:314: The name tf.gfile.ListDirectory is deprecated. Please use tf.io.gfile.listdir instead.
W0808 21:46:41.966200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:318: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.
I0808 21:46:41.967200 7844 run_classifier.py:730] Num of eval samples: 4
I0808 21:46:41.967700 7844 run_classifier.py:404] Do not overwrite tfrecord F:/kaggleData/GS_ROOT/proc_data/imdb/model.model.len-512.dev.predict.tf_record exists.
W0808 21:46:41.967700 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:452: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.
I0808 21:46:41.967700 7844 run_classifier.py:461] Input tfrecord file F:/kaggleData/GS_ROOT/proc_data/imdb/model.model.len-512.dev.predict.tf_record
F:/kaggleData/prediction\imdb.tsv
<class 'function'>
W0808 21:46:41.994199 7844 deprecation.py:323] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:506: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.map_and_batch(...)`.
W0808 21:46:41.994199 7844 deprecation.py:323] From F:\tensorflow3\lib\site-packages\tensorflow\contrib\data\python\ops\batching.py:273: map_and_batch (from tensorflow.python.data.experimental.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map(map_func, num_parallel_calls)` followed by `tf.data.Dataset.batch(batch_size, drop_remainder)`. Static tf.data optimizations will take care of using the fused implementation.
W0808 21:46:41.995699 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:465: The name tf.parse_single_example is deprecated. Please use tf.io.parse_single_example instead.
I0808 21:46:42.019700 7844 estimator.py:1145] Calling model_fn.
W0808 21:46:42.029199 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\xlnet.py:220: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
W0808 21:46:42.029700 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\xlnet.py:220: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.
I0808 21:46:42.029700 7844 modeling.py:453] memory input None
I0808 21:46:42.030200 7844 modeling.py:455] Use float type <dtype: 'float32'>
Traceback (most recent call last):
File "F:\tensorflow3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1877, in zeros
tensor_shape.TensorShape(shape))
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 326, in _tensor_shape_tensor_conversion_function
"Cannot convert a partially known TensorShape to a Tensor: %s" % s)
ValueError: Cannot convert a partially known TensorShape to a Tensor: (1, 0, ?)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\hansaizhou\workspace\XLnet\run_classifier.py", line 858, in <module>
tf.app.run()
File "F:\tensorflow3\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "F:\tensorflow3\lib\site-packages\absl\app.py", line 300, in run
_run_main(main, args)
File "F:\tensorflow3\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "C:\Users\hansaizhou\workspace\XLnet\run_classifier.py", line 827, in main
checkpoint_path=FLAGS.predict_ckpt)):
File "F:\tensorflow3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 619, in predict
features, None, ModeKeys.PREDICT, self.config)
File "F:\tensorflow3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1146, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "C:\Users\hansaizhou\workspace\XLnet\run_classifier.py", line 525, in model_fn
FLAGS, features, n_class, is_training)
File "C:\Users\hansaizhou\workspace\XLnet\function_builder.py", line 152, in get_classification_loss
input_mask=inp_mask)
File "C:\Users\hansaizhou\workspace\XLnet\xlnet.py", line 222, in __init__
) = modeling.transformer_xl(**tfm_args)
File "C:\Users\hansaizhou\workspace\XLnet\modeling.py", line 499, in transformer_xl
dtype=tf_float)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1880, in zeros
shape = ops.convert_to_tensor(shape, dtype=dtypes.int32)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\ops.py", line 1087, in convert_to_tensor
return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\ops.py", line 1145, in convert_to_tensor_v2
as_ref=False)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\ops.py", line 1224, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 305, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 246, in constant
allow_broadcast=True)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 284, in _constant_impl
allow_broadcast=allow_broadcast))
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 467, in make_tensor_proto
nparray = np.array(values, dtype=np_dt)
TypeError: __int__ returned non-int (type NoneType)
| 2hard
|
Title: url "schema" should be "scheme"
Body: https://github.com/psf/requests/blob/590350f8d094c216051510ed1dd18fe871b53b72/requests/models.py#L388-L392
I don't believe the first part of a URL is ever called a "schema." Exceptions and error messages referring to an incorrect schema are confusing, especially in contexts where actual schema errors are possible. If it isn't possible to change the `MissingSchema` exception (it looks like it is slated to be fixed in 3.x) please consider at least changing the error message.
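A minimal illustration of where the wording surfaces for users — a URL with the scheme accidentally omitted:
```python
import requests

try:
    requests.get("example.com/api")  # "https://" scheme omitted
except requests.exceptions.MissingSchema as exc:
    # both the exception name and its message say "schema" rather than "scheme"
    print(exc)
```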
References:
https://www.w3.org/Addressing/URL/url-spec.txt
https://github.com/psf/requests/issues/4495 | 0easy
|
Title: Improve the Structure of the Metrics Table
Body: ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Improve the structure of the metrics table by reorganizing the columns/groups.
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
In the metrics explorer, the comparison of different metrics and runs is very difficult due to the grouping by metric. In the following, I illustrate an example with three runs per group and three metrics, grouped by parameter a. `X` stands for some arbitrary value. The evaluation and comparison of the runs is very challenging, and this is actually a very simple example with just a few runs/metrics.
| Group | Run | Group Config | Metric | Value | | | | Run Params | | | Actions |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Name| hparams.a | Name | Group Min | Mean | Group Max | ... | hparams.b | hparams.c | ... |
| Group 1 | Mixed: 3 Values | 0| loss | X | X | X
| | Run A | 0 | loss | | X | | | | | | S |
| | Run B | 0 | loss | | X | | | | | | S |
| | Run C | 0 | loss | | X | | | | | | S |
| Group 2 | Mixed: 3 Values | 0| acc| X | X | X
| | Run A | 0 | acc | | X | | | | | | S |
| | Run B | 0 | acc | | X | | | | | | S |
| | Run C | 0 | acc | | X | | | | | | S |
| Group 3 | Mixed: 3 Values | 0| val_loss | X | X | X
| | Run A | 0 | val_loss | | X | | | | | | S |
| | Run B | 0 | val_loss | | X | | | | | | S |
| | Run C | 0 | val_loss | | X | | | | | | S |
| Group 4 | Mixed: 3 Values | 1| loss | X | X | X
| | Run D | 1 | loss | | X | | | | | | S |
| | Run E | 1 | loss | | X | | | | | | S |
| | Run F | 1 | loss | | X | | | | | | S |
| Group 5 | Mixed: 3 Values | 1| acc| X | X | X
| | Run D | 1 | acc| | X | | | | | | S |
| | Run E | 1 | acc| | X | | | | | | S |
| | Run F | 1 | acc| | X | | | | | | S |
| Group 6 | Mixed: 3 Values | 1| val_loss | X | X | X
| | Run D | 1 | val_loss | | X | | | | | | S |
| | Run E | 1 | val_loss | | X | | | | | | S |
| | Run F | 1 | val_loss | | X | | | | | | S |
### Pitch
<!-- A clear and concise description of what you want to happen. -->
The structure can be improved by showing the metrics as columns instead of as separate groups (similar to the runs explorer).
Here is an example with the same data as above:
| Group | Run | Group Config | Metrics | | | | Run Params | | | Actions |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Name| hparams.a | loss |acc | val_loss | ... | hparams.b | hparams.c | ...
| Group 1 | Mixed: 3 Values | 0| X±X | X±X | X±X |
| | Run A | 0 | X | X | X | | | | | S |
| | Run B | 0 | X| X | X | | | | | S |
| | Run C | 0 | X| X | X | | | | | S |
| Group 2 | Mixed: 3 Values | 1 | X±X | X±X | X±X |
| | Run D | 1 | X | X | X | | | | | S |
| | Run E | 1 | X| X | X | | | | | S |
| | Run F | 1 | X| X | X | | | | | S |
In this way:
- The table has become much smaller and clearer
- Comparing different runs as well as different metrics is much easier
- Scales better when the number of runs and metrics increases
- The existing managing of the columns can be used
- In the group column, an aggregated value can be shown (e.g. mean±std, mean (min - max), or just the mean. Possibly selectable)
- `S` stands for show/hide. In this way, one could easily show/hide a run from all plots instead of changing all individually
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. --> | 1medium
|
Title: [Bug]: KeyError: 'eval_accuracy' when running SFT fine-tuning of llama3-8b
Body: ### Software environment
```Markdown
- paddlepaddle-gpu: 0.0.0.post120
- paddlenlp: 3.0.0b2
```
### Duplicate issues
- [X] I have searched the existing issues
### Error description
```Markdown
When running SFT fine-tuning of llama3-8b, the following error occurs:
Traceback (most recent call last):
File "/home/LAB/huangjx/new/PaddleNLP/llm/run_finetune.py", line 730, in <module>
main()
File "/home/LAB/huangjx/new/PaddleNLP/llm/run_finetune.py", line 570, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/LAB/huangjx/.local/lib/python3.10/site-packages/paddlenlp/trainer/trainer.py", line 829, in train
return self._inner_training_loop(
File "/home/LAB/huangjx/.local/lib/python3.10/site-packages/paddlenlp/trainer/trainer.py", line 1203, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, epoch, ignore_keys_for_eval, inputs=inputs)
File "/home/LAB/huangjx/.local/lib/python3.10/site-packages/paddlenlp/trainer/trainer.py", line 1478, in _maybe_log_save_evaluate
self._save_checkpoint(model, metrics=metrics)
File "/home/LAB/huangjx/.local/lib/python3.10/site-packages/paddlenlp/trainer/trainer.py", line 2460, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_accuracy'
However, if I remove "metric_for_best_model": "accuracy" from the config, the error does not occur, so "metric_for_best_model": "accuracy" is apparently not supported. Pipeline parallelism (pp) and tensor parallelism (tp) were enabled during this run.
```
### Steps to reproduce & code
1. cd PaddleNLP/llm/config/llama
2. cat sft_argument.json
{
"model_name_or_path": "meta-llama/Meta-Llama-3-8B",
"dataset_name_or_path": "./data",
"output_dir": "./checkpoints/llama_sft_ckpts",
"per_device_train_batch_size": 1,
"gradient_accumulation_steps": 1,
"per_device_eval_batch_size": 1,
"eval_accumulation_steps": 1,
"num_train_epochs": 3,
"learning_rate": 3e-05,
"warmup_steps": 30,
"max_steps": 20,
"max_evaluate_steps": 3,
"logging_steps": 1,
"evaluation_strategy": "epoch",
"save_strategy": "epoch",
"src_length": 1024,
"max_length": 200,
"do_train": true,
"do_eval": true,
"disable_tqdm": true,
"load_best_model_at_end": true,
"eval_with_do_generation": false,
"metric_for_best_model": "accuracy",
"recompute": true,
"save_total_limit": 1,
"tensor_parallel_degree": 2,
"pipeline_parallel_degree": 2,
"pipeline_parallel_config": "disable_p2p_cache_shape",
"sharding": "stage2",
"zero_padding": false,
"unified_checkpoint": false,
"use_flash_attention": false
}
3. python3 -u -m paddle.distributed.launch --gpus "0,1,2,3" run_finetune.py ./config/llama/sft_argument.json | 1medium
|
Title: [FR] Import CSV to Add Materials in One Click
Body: ### Please verify that this feature request has NOT been suggested before.
- [x] I checked and didn't find a similar feature request
### Problem statement
Hello,
First, thank you for your great work on Inventree!
I would like to know if it is possible to import a CSV file to add materials (or other items) in one click. If this feature is not available, would it be possible to implement it? It would be very useful for bulk additions instead of manually entering each item.
### Suggested solution
In my use case, I would like a feature that allows users to upload a CSV file containing item details (e.g., name, description, quantity, supplier, price, etc.), and have Inventree automatically create the entries.
A possible implementation could include:
A simple UI option under the "Add Material" section to upload a CSV file.
A standardized CSV format with predefined columns.
An option to map CSV columns to Inventree fields if needed.
A preview step before confirming the import.
### Describe alternatives you've considered
Using an API endpoint to bulk-add materials via an external script.
A spreadsheet import feature within the database interface.
However, a built-in CSV import would be the most user-friendly solution.
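For context, a rough sketch of the external-script alternative (using the inventree-python bindings; the server URL, token, CSV column names, and category are placeholders/assumptions):
```python
import csv
from inventree.api import InvenTreeAPI
from inventree.part import Part

api = InvenTreeAPI("https://inventree.example.com", token="my-api-token")  # placeholder credentials

with open("materials.csv", newline="") as f:
    for row in csv.DictReader(f):
        Part.create(api, {
            "name": row["name"],                        # assumed CSV column
            "description": row.get("description", ""),  # assumed CSV column
            "category": 1,                              # assumed default category pk
        })
```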
### Examples of other systems
In other systems, this feature is implemented by allowing users to upload a structured CSV file, which is then automatically parsed and added to the inventory. The system also provides error handling and validation before finalizing the import.
### Do you want to develop this?
- [ ] I want to develop this. | 1medium
|
Title: Intermittent long delay during connection upgrade
Body: **Summary**
We are using `python-socketio` to push out regular service status updates to browser clients, which are using the `socket.io` JS client library. Most of the time it works fine but we are intermittently seeing a situation where there is a ~30 second delay part-way through the connection upgrade process, and no messages are received during this time. Eventually the connection is closed (although the process looks a bit messy) and a reconnect occurs, after which everything works fine again.
We can reproduce this fairly easily with a unit test that repeatedly navigates to the relevant page and away again, forcing the socket to be recreated and a connection re-established each time. We reliably run into the issue after a few iterations of this.
**Versions**
`python-socketio`: 4.3.1 (have also tried 4.4.0)
`python-engineio`: 3.10.0 (have also tried 3.11.0)
`socket.io JS client`: 2.3.0 (have also tried 2.2.0)
**Code**
This is a simplified version of our server code, which is running inside a Docker container within a Kubernetes pod:
```python
from gevent import monkey, pywsgi
monkey.patch_all()
from geventwebsocket.handler import WebSocketHandler
import logging
import socketio
import time
logging.getLogger('socketio.server').setLevel(logging.INFO)
logging.getLogger('engineio.server').setLevel(logging.INFO)
logger = logging.getLogger(__name__)  # module logger used in the handlers below
sio = socketio.Server(
logger=True,
engineio_logger=True,
cors_allowed_origins="*",
async_mode='gevent'
)
app = socketio.WSGIApp(sio)
@sio.on('connect')
def connect(sid, environ):
logger.info(f'Client connected with session id: {sid}')
logger.info(f'Environment is: {environ}')
@sio.on('disconnect')
def disconnect(sid):
logger.info(f'Client disconnected from session id: {sid}')
@sio.on('join')
def join(sid, room):
sio.enter_room(sid, room)
logger.info(f'Client joining room {room} in session {sid}')
def generate_update_message():
# Do some work here to generate the right status message
# ...
def update_loop():
while True:
# Generate and emit an update every second
update = generate_update_message()
sio.emit('update', update, room='admin')
sio.sleep(0.1)
time.sleep(1.0)
def main():
sio.start_background_task(update_loop)
pywsgi.WSGIServer(('', 8080), app, handler_class=WebSocketHandler).serve_forever()
```
The relevant bit of our client code is:
```javascript
function admin_vm() {
const self = this;
self.socket = io.connect({
path: window.config.project_url + 'status/system/socket.io'
});
self.socket.on('connect', function() {
console.log('Socket connected, joining admin room');
self.socket.emit('join', 'admin');
});
self.socket.on('update', function (update) {
update = JSON.parse(update);
const u = update;
console.log(
'Update for system status left server [' + u.timestamp +
'] arrived here [' + new Date().toString() + '] update count [' + update_count++ + ']'
);
// Apply the update to the UI here...
});
}
```
**Logs**
I've captured some detailed logs of both client and server both when the connection works and when it doesn't (see attached).
[bad-client.log](https://github.com/miguelgrinberg/python-socketio/files/3940079/bad-client.log)
[bad-server.log](https://github.com/miguelgrinberg/python-socketio/files/3940080/bad-server.log)
[good-client.log](https://github.com/miguelgrinberg/python-socketio/files/3940081/good-client.log)
[good-server.log](https://github.com/miguelgrinberg/python-socketio/files/3940082/good-server.log)
When the issue occurs, the client log clearly shows the delay while it's probing for the availability of the websocket transport:
```
12:58:46.220 socket.io.js 391:131 "engine.io-client:socket probe transport \"%s\" pong +7ms" "websocket"
12:58:46.220 socket.io.js 391:131 "engine.io-client:socket pausing current transport \"%s\" +1ms" "polling"
12:58:46.220 socket.io.js 391:131 "engine.io-client:polling we are currently polling - waiting to pause +8ms"
12:59:11.356 socket.io.js 391:131 "engine.io-client:socket writing ping packet - expecting pong within %sms +25s" 60000
12:59:17.224 socket.io.js 391:131 "engine.io-client:polling polling got data %s +31s" ArrayBuffer(0)
12:59:17.224 socket.io.js 391:131 "engine.io-client:polling pre-pause polling complete +2ms"
12:59:17.224 socket.io.js 391:131 "engine.io-client:polling paused +1ms"
12:59:17.224 socket.io.js 391:131 "engine.io-client:socket changing transport and sending upgrade packet +6s"
12:59:17.224 socket.io.js 391:131 "engine.io-client:socket setting transport %s +1ms" "websocket"
12:59:17.224 socket.io.js 391:131 "engine.io-client:socket clearing existing transport %s +0ms" "polling"
12:59:17.224 socket.io.js 391:131 "engine.io-client:polling ignoring poll - transport state \"%s\" +4ms" "paused"
12:59:17.224 socket.io.js 391:131 "engine.io-client:socket flushing %d packets in socket +3ms" 1
```
On the server side, we see part of the upgrade process and then it seems to stop (while still emitting update messages)...and eventually the server gives up and closes the socket:
```
12:58:46,184 1 MainProcess DEBUG Attempting to upgrade connection - geventwebsocket.handler
12:58:46,185 1 MainProcess DEBUG WebSocket request accepted, switching protocols - geventwebsocket.handler
12:58:46,185 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Received request to upgrade to websocket - engineio.server
12:58:46,186 1 MainProcess DEBUG Initializing WebSocket - geventwebsocket.handler
12:58:46,187 1 MainProcess DEBUG Validating WebSocket request - geventwebsocket.handler
12:58:46,187 1 MainProcess DEBUG Can only upgrade connection if using GET method. - geventwebsocket.handler
12:58:46,187 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Received packet MESSAGE data 2["join","admin"] - engineio.server
12:58:46,187 1 MainProcess INFO received event "join" from 6294b24868144273b4a9bceaf0e439f9 [/] - socketio.server
12:58:46,188 1 MainProcess INFO ::ffff:10.0.9.139 - - [12:58:46] "POST /socket.io/?EIO=3&transport=polling&t=MxglQ0a&sid=6294b24868144273b4a9bceaf0e439f9 HTTP/1.1" 200 208 0.001325 - geventwebsocket.handler
12:58:46,188 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9 is entering room admin [/] - socketio.server
12:58:46,188 1 MainProcess INFO Client joining room admin in session 6294b24868144273b4a9bceaf0e439f9 - admin.status
12:58:46,190 1 MainProcess DEBUG Initializing WebSocket - geventwebsocket.handler
12:58:46,190 1 MainProcess DEBUG Validating WebSocket request - geventwebsocket.handler
12:58:46,199 1 MainProcess INFO ::ffff:10.0.9.139 - - [12:58:46] "GET /socket.io/?EIO=3&transport=polling&t=MxglQ0c&sid=6294b24868144273b4a9bceaf0e439f9 HTTP/1.1" 200 159 0.009237 - geventwebsocket.handler
12:58:46,215 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:58:46,217 1 MainProcess DEBUG Initializing WebSocket - geventwebsocket.handler
12:58:46,217 1 MainProcess DEBUG Validating WebSocket request - geventwebsocket.handler
12:58:47,400 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:58:48,576 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
...
12:59:14,619 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:59:15,841 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:59:17,045 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:59:17,046 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Client is gone, closing socket - engineio.server
12:59:17,046 1 MainProcess INFO Client disconnected from session id: 6294b24868144273b4a9bceaf0e439f9 - admin.status
12:59:17,046 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Client is gone, closing socket - engineio.server
12:59:17,047 1 MainProcess INFO ::ffff:10.0.9.139 - - [12:59:17] "GET /socket.io/?EIO=3&transport=polling&t=MxglQ14&sid=6294b24868144273b4a9bceaf0e439f9 HTTP/1.1" 200 155 30.829922 - geventwebsocket.handler
12:59:17,064 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Upgrade to websocket successful - engineio.server
12:59:17,065 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Received packet PING data None - engineio.server
12:59:17,065 1 MainProcess INFO Receive error -- socket is closed - engineio.server
12:59:17,068 1 MainProcess DEBUG Closed WebSocket - geventwebsocket.handler
12:59:17,069 1 MainProcess DEBUG Failed to write closing frame -> closing socket - geventwebsocket.handler
12:59:17,069 1 MainProcess DEBUG Closed WebSocket - geventwebsocket.handler
```
As you can see in the full log, a re-connection follows and it works, but we really want to eliminate this 30 second delay as it leads to a bad user experience.
**Workaround**
As a test, we tried using the websocket transport directly instead of starting with long polling (as described at https://socket.io/docs/client-api/#With-websocket-transport-only):
```javascript
self.socket = io.connect({
path: window.config.project_url + 'status/system/socket.io',
transports: ['websocket'] // Default to websocket transport first, only falling back to long-polling on connection failure
});
self.socket.on('reconnect_attempt', () => {
// On reconnection, reset the transports option, as the Websocket connection may
// have failed (caused by proxy, firewall, browser, ...)
self.socket.io.opts.transports = ['polling', 'websocket'];
});
```
This seems to solve the problem - our test case can go through hundreds of iterations without any problem.
| 2hard
|
Title: Document ID doesn't update upon metadata update
Body: **Describe the bug**
If you assign the `meta` field of a `Document` after initialization, the document's id does not get updated.
This is e.g. done in the [PyPDFConverter](https://github.com/deepset-ai/haystack/blob/28ad78c73d6c11c9b77089aba42799508178a2fa/haystack/components/converters/pypdf.py#L225).
Documents sharing the same ID despite having different metadata causes problems with document stores and the duplicate policy `OVERWRITE`, because all of these documents are treated as the same document and end up overwriting each other.
**Error message**
Error that was thrown (if available)
**Expected behavior**
The ID should update itself when the metadata is changed. The same applies to the other properties.
**Additional context**
Ideally we find a solution where the ID is updated automatically but can also be overridden manually.
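A very rough sketch of the behaviour I have in mind (purely illustrative, not the actual Haystack implementation; the helper and field names below are made up):
```python
import hashlib


class Document:
    # Illustrative only: recompute the id whenever meta is reassigned,
    # unless the caller pinned an explicit id.
    def __init__(self, content: str = "", meta: dict | None = None, id: str | None = None):
        self.content = content
        self._meta = meta or {}
        self._explicit_id = id is not None
        self.id = id or self._create_id()

    def _create_id(self) -> str:
        # hash over the fields that define identity (content + meta here)
        data = f"{self.content}|{sorted(self._meta.items())}"
        return hashlib.sha256(data.encode("utf-8")).hexdigest()

    @property
    def meta(self) -> dict:
        return self._meta

    @meta.setter
    def meta(self, value: dict) -> None:
        self._meta = value
        if not self._explicit_id:
            self.id = self._create_id()  # keep the id in sync with the metadata
```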
**To Reproduce**
```python
def test_set_meta_afterwards():
doc = Document()
old_id = doc.id
doc.meta = {"test": 10}
assert doc.meta == {"test": 10}
assert doc.id != old_id
```
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number):
- DocumentStore:
- Reader:
- Retriever:
| 1medium
|
Title: Sphinx Theme
Body: Currently the Sphinx theme is Alabaster, which I have always found... difficult. Any objection to changing this to the RTD theme? | 1medium
|
Title: How to change the text shown when a subscription source is imported from a link
Body: I'd like to change the name that gets displayed as soon as a subscription source URL is imported, e.g. change "Charles Xu" to another name such as 坚果面馆, but I don't know which source/config file controls this. One more unrelated question: do closed issues no longer trigger notifications? The repository gets little attention, so I'm not sure how this works.
 | 1medium
|
Title: I met this error:
Body: Hi, @jtoy @hunkim @Kongsea @DingKe @JayParks
I met this error:
sgiO2:image_captioning sgi$ python build_vocab.py
loading annotations into memory...
Traceback (most recent call last):
File "build_vocab.py", line 77, in <module>
main(args)
File "build_vocab.py", line 59, in main
threshold=args.threshold)
File "build_vocab.py", line 31, in build_vocab
coco = COCO(json)
File "/Library/Python/2.7/site-packages/pycocotools/coco.py", line 84, in __init__
dataset = json.load(open(annotation_file, 'r'))
IOError: [Errno 2] No such file or directory: '/usr/share/mscoco/annotations/captions_train2014.json'
What am I doing wrong?
| 1medium
|
Title: Introduction to CVAT and Datumaro
Body: I have a dataset uploaded and annotated on CVAT and want to add images to it. It's not obvious how to do this. Should I create a new project/task, etc.? And how would those images be added to the existing dataset? | 1medium
|
Title: Topic 2: typos
Body: In
`mlcourse.ai-master/jupyter_english/topic02_visual_data_analysis/topic2_visual_data_analysis.ipynb`
"ellpise" instead of "ellipse" | 0easy
|
Title: Error when running the official Advanced Usage example
Body: Basic Usage runs fine, but running the Advanced Usage example produces the following error:
Traceback (most recent call last):
File "xxx/test1.py", line 10, in <module>
rand_spk = chat.sample_random_speaker()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/ChatTTS/ChatTTS/core.py", line 160, in sample_random_speaker
return self.speaker.sample_random()
^^^^^^^^^^^^
AttributeError: 'Chat' object has no attribute 'speaker'
 | 1medium
|
Title: Setting to disable new user registration (the `/register` endpoint)
Body: ### Checklist
* [x] I read [Contribution Guidelines](https://github.com/apragacz/django-rest-registration/blob/master/CONTRIBUTING.md#issues)
* [x] I searched [the documentation](https://django-rest-registration.readthedocs.io/) to ensure that the requested feature is not already implemented and described
* [x] I searched existing issues before opening this one
### Is your feature request related to a problem? Please describe.
Sometimes, we need to stop new users from registering, while still continuing to provide the rest of the functionalities from this library to existing users.
### Describe the solution you'd like
I would like to be able to disable registration of new users (the `/register` endpoint) based on a [setting](https://django-rest-registration.readthedocs.io/en/latest/detailed_configuration/all_settings.html).
### Describe alternatives you've considered
Reading through the list of existing settings (and reading the code to make sure there isn't an undocumented setting), there doesn't seem to be any :upside_down_face:
The best I can come up with (but haven't had time to test yet) is to define a custom [`REGISTER_SERIALIZER_CLASS`](https://django-rest-registration.readthedocs.io/en/latest/detailed_configuration/all_settings.html#register-serializer-class) that would be a copy of [the one already provided](https://github.com/apragacz/django-rest-registration/blob/master/rest_registration/api/serializers.py#L96) with an added exception at the beginning of `validate()`, thrown based on a Django setting.
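For illustration, an untested sketch of that workaround (the base class import below is my assumption based on the linked file, and `REGISTRATION_OPEN` is a made-up project-level setting, not something provided by django-rest-registration):
```python
# serializers.py -- untested sketch
from django.conf import settings
from rest_framework import serializers
from rest_registration.api.serializers import DefaultRegisterUserSerializer


class GatedRegisterUserSerializer(DefaultRegisterUserSerializer):
    def validate(self, attrs):
        # Reject registration attempts while the (hypothetical) flag is off.
        if not getattr(settings, "REGISTRATION_OPEN", True):
            raise serializers.ValidationError("Registration is currently disabled.")
        return super().validate(attrs)


# settings.py
REST_REGISTRATION = {
    "REGISTER_SERIALIZER_CLASS": "myapp.serializers.GatedRegisterUserSerializer",
}
```
| 1medium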
|
Title: [k8s] Make `ports: ingress` reuse API server's nginx controller
Body: Currently if users deploy the API server with our helm chart, we create an ingress controller and expose it through a NodePort svc (or optionally, LoadBalancer svc).
We can (and should) piggyback on this existing ingress controller for exposing ports via [`ports: ingress` mode](https://docs.skypilot.co/en/latest/reference/kubernetes/kubernetes-ports.html#nginx-ingress). This would eliminate the need for users to set up another nginx ingress controller and re-use the same public facing NodePort/LoadBalancer service. | 1medium
|
Title: OperatorExtra links xcom keys should be pushed to xcom db
Body: ### Body
After #45481, we need to check that operator extra link values are being pushed to the XCom table in the metadata DB and not to the custom XCom backend.
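A rough, hedged sketch of how this could be verified by instrumenting a custom XCom backend (the exact `serialize_value` signature varies between Airflow versions, hence the `**kwargs`):
```python
# Hypothetical instrumentation of a custom XCom backend (configured via
# [core] xcom_backend) that records which keys are routed through it.
from airflow.models.xcom import BaseXCom


class RecordingXComBackend(BaseXCom):
    seen_keys: list = []

    @staticmethod
    def serialize_value(value, *, key=None, **kwargs):
        RecordingXComBackend.seen_keys.append(key)
        return BaseXCom.serialize_value(value)

# Expectation after running a DAG whose operator defines extra links:
# none of the extra-link keys show up in seen_keys, i.e. they were written
# straight to the metadata DB's XCom table rather than the custom backend.
```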
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | 1medium
|
Title: Issue running task of an asset decorated dag
Body: ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
A serialization issue occurs when triggering a DAG that is asset-decorated and whose decorator has a `name` attribute.
<img width="1563" alt="Image" src="https://github.com/user-attachments/assets/7fa2ba1e-69a1-4d86-9254-149023331f38" />
```
INFO: 192.168.97.1:53086 - "GET /public/dags/abcd/dagRuns/manual__2025-03-13T10%3A58%3A38.178923%2B00%3A00_vNVGhOqs/taskInstances/__main__/-1 HTTP/1.1" 200 OK
/usr/local/lib/python3.9/site-packages/pydantic/type_adapter.py:527 UserWarning: Pydantic serializer warnings:
PydanticSerializationUnexpectedValue: Expected `StructuredLogMessage` but got `StructuredLogMessage` with value `StructuredLogMessage(time..._main__/attempt=1.log'])` - serialized value may not be as expected
PydanticSerializationUnexpectedValue: Expected `str` but got `StructuredLogMessage` with value `StructuredLogMessage(time..._main__/attempt=1.log'])` - serialized value may not be as expected
```
### What you think should happen instead?
The DAG should work fine and create an asset event.
### How to reproduce
Run the below DAG:
```python
@asset(uri="s3://bucket/asset1_producer", schedule=None)
def asset1_producer():
pass
@asset(name="abcd", uri="s3://bucket/object", schedule=None)
def asset2_producer(self, context, asset1_producer):
print(self)
print(context["inlet_events"][asset1_producer])
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 1medium
|
Title: cache_data and cache_resource not working with DuckDB on Motherduck connection
Body: ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
I have this connections script that connects to DuckDB on Motherduck:
```python
import streamlit as st
import duckdb
from duckdb import DuckDBPyConnection
import polars as pl
import toml
@st.cache_resource
def motherduck_connection() -> DuckDBPyConnection:
with open("./secrets.toml", "r") as f:
secrets = toml.load(f)
motherduck_token = secrets["tokens"]["motherduck"]
conn = duckdb.connect(f"md:nba_data?motherduck_token={motherduck_token}")
return conn
@st.cache_data(ttl=600)
def standings_table_connection(conn: DuckDBPyConnection) -> pl.DataFrame:
standings_dataframe = pl.from_arrow(
conn.sql("SELECT * FROM nba_data_staging.teams")
)
return standings_dataframe
```
when running the streamlit app:
```python
import streamlit as st
from streamlit_components.standings_section import StandingsSection
from streamlit_components.connections import (
motherduck_connection,
standings_table_connection
)
conn = motherduck_connection()
standings_table = standings_table_connection(conn)
st.set_page_config(page_title="Streamlit: Premier League", layout="wide")
def app():
standings_section = StandingsSection(standings_table)
standings_section.display()
if __name__ == "__main__":
app()
```
Python unexpectedly quits with the error:
```bash
libc++abi: terminating due to uncaught exception of type std::runtime_error: instance allocation failed: new instance has no pybind11-registered base types
Abort trap: 6
```
When I remove the caching, it works.
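For what it's worth, a possible (untested) workaround might be to exclude the connection object from hashing by prefixing the parameter with an underscore, since `st.cache_data` skips hashing arguments whose names start with `_`:
```python
@st.cache_data(ttl=600)
def standings_table_connection(_conn: DuckDBPyConnection) -> pl.DataFrame:
    # Identical to the version above except for the leading underscore,
    # which tells st.cache_data not to hash the DuckDB connection object.
    standings_dataframe = pl.from_arrow(
        _conn.sql("SELECT * FROM nba_data_staging.teams")
    )
    return standings_dataframe
```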
### Reproducible Code Example
```Python
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
Error message:
```bash
libc++abi: terminating due to uncaught exception of type std::runtime_error: instance allocation failed: new instance has no pybind11-registered base types
Abort trap: 6
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: `1.42.0`
- duckdb `1.2.0`
- polars `1.22.0`
- pyarrow `19.0.0`
- Python version: `3.12.2`
- Operating System: macOS - M3 chip
- Browser: Firefox
### Additional Information
_No response_ | 1medium
|
Title: VBox widget not scrollable
Body: I am trying to build a custom widget class that I want to use for popups in a couple of ipyleaflet Markers. The custom widget consists of a main HBox, which itself contains two VBoxes, something like this:

So far so good. The smaller VBox on the left is meant to be scrollable. This screenshot was taken from a Jupyter notebook with the ipyleaflet Map as a Solara component (see code below).
However, when I test the same code using solara, the left VBox is not scrollable and its elements are incomplete (e.g., you can't see the "C" item):

Here's the code:
```python
import solara
import markdown
import ipyleaflet
import ipywidgets as widgets
text = """## Test markdown
### Details
- **A**: a
- **B**: b
- **C**: c
"""
class Map(ipyleaflet.Map):
def __init__(self, **kwargs):
super().__init__(**kwargs)
marker = ipyleaflet.Marker(
location = (0,0),
draggable = False,
title = "My marker",
popup_max_width=1000,
popup_max_height=1000
)
self.add(marker)
main_box = widgets.HBox(
layout=widgets.Layout(
width='1000px',
height='480px',
justify_content='space-between',
)
)
panel_box = widgets.VBox(
layout = widgets.Layout(
width='25%',
height='100%',
border='solid 2px'
)
)
image_box = widgets.VBox(
layout = widgets.Layout(
width='65%',
height='100%',
border='solid 2px'
)
)
value = markdown.markdown(text)
description_box = widgets.HTML(
layout=widgets.Layout(
border='solid 1px',
flex_shrink=1
),
value=value,
)
panel_box.children = [
description_box,
description_box,
description_box,
description_box,
]
main_box.children=[
panel_box,
image_box
]
marker.popup = main_box
@solara.component
def Page():
with solara.Column() as main:
map = Map.element(
layout=widgets.Layout(height='800px')
)
return main
display(Page())
```
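One thing I have not tried yet: explicitly setting an overflow on the panel box layout, in case the scrollbar in Jupyter only comes from a default style that Solara does not apply. Something like (untested guess):
```python
# Untested tweak: force the left panel to scroll when its content overflows.
panel_box = widgets.VBox(
    layout=widgets.Layout(
        width='25%',
        height='100%',
        border='solid 2px',
        overflow='auto'  # explicit scrollbar instead of relying on defaults
    )
)
```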
| 1medium
|