Columns: text (string, lengths 20 to 57.3k) · labels (class label, 4 classes)
Title: Support of Batch Input Body: Hi, it appears that the original code did not directly support batch input. I forked the repo and made some simple modifications (you may discard the part that loads my own models): https://github.com/CielAl/pytorch-grad-cam_batch/blob/master/grad_cam.py#L114 Hope this might be useful in case others try to apply your implementation :)))
1medium
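For the issue above, a minimal sketch of what batched Grad-CAM computation looks like; the function and variable names are illustrative assumptions, not the linked fork's actual API:

```python
import torch

def batch_grad_cam(model, target_layer, inputs, class_idx):
    """Compute one Grad-CAM heatmap per sample in a batch."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(inputs)                                   # (B, num_classes)
    idx = torch.arange(inputs.size(0), device=logits.device)
    score = logits[idx, class_idx].sum()                     # per-sample target scores
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    a, g = acts[0], grads[0]                                 # each (B, C, H, W)
    weights = g.mean(dim=(2, 3), keepdim=True)               # per-sample channel weights
    return torch.relu((weights * a).sum(dim=1))              # (B, H, W) heatmaps
```

Because the samples in a batch are independent, summing the per-sample class scores before `backward()` yields each sample's own gradients, so one backward pass produces all the CAMs.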
Title: Issue with cnn_multiple_layers in a02 Body: In p7_TextCNN_model.py under a02, in the multi-layer CNN function cnn_multiple_layers, the first conv layer uses SAME padding, so the convolution preserves the feature-map size and the output shape is [batch_size, sequence_length, embedding_size, num_filters]. At the start of the second conv layer, the reshape at line 136 sets the last dimension to 1, which turns the first dimension into batch_size*embedding_size and easily exhausts memory. The reshape can in fact be removed; after this fix, GPU memory usage returns to normal. Also, in my opinion, the first layer does not need SAME padding. Unlike images, each row here is a complete word vector, so the padded values carry no real meaning. Using VALID in the first layer makes the horizontal width 1, and later layers then only need to convolve vertically across words.
1medium
Title: Could you integrate support for generating images with a local SD? Body: ### Is there an existing feature request for this? - [x] I have searched the existing feature requests ### Pain point Could you integrate this project to support SD image generation? https://github.com/Anning01/ComicTweets ### Suggested solution Could you integrate this project to support SD image generation? https://github.com/Anning01/ComicTweets ### Useful resources _No response_ ### Other information _No response_
1medium
Title: despite cuda drivers being installed, I'm seeing this issue Body: [2023-05-01 12:17:25 +0000] [8] [INFO] Booting worker with pid: 8 2023-05-01 12:17:26.167599: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. 2023-05-01 12:17:26.168772: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-05-01 12:17:26.189715: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-05-01 12:17:26.189953: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-05-01 12:17:26.572682: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT Directory /root /.deepface created Directory /root /.deepface/weights created
1medium
Title: improving the onboarding tutorial - your first pipeline Body: We want to improve the onboarding experience of the basic tutorials on binder. ## general observations * the objective of the initial tutorial should be to convince people to give ploomber a try. the current version tries to teach them ploomber, we want to change that. our value proposition should be clear: ploomber allows you to run more experiments faster * since our purpose is to convince, there is no need to show low-level details like the pipeline.yaml. We can mention it and add a link to open it, but it should be optional * there is no context on the dataset. the example analyzes covid data; we should make a few comments on it during the example; telling a story will make it more compelling * we should simplify and prettify the output HTML reports. hide code, make the plots prettier (use ggplot [style](https://matplotlib.org/stable/gallery/style_sheets/ggplot.html)), larger fonts, etc. * I think (?) the sample dataset has data per country, we could select a single country (or maybe continent if there's a continent column), then, at the end of the tutorial, show that they can go and change the initial task, re-run the pipeline and re-generate all outputs for the new country. we can implement this with a pipeline parameter so the outputs are stored in different folders, and the user can switch the parameter to generate the outputs for the new country * the pipeline should generate both ipynb and HTML outputs. at the end of the tutorial, [we can use this](https://sklearn-evaluation.readthedocs.io/en/latest/user_guide/NotebookCollection.html) to explore the outputs from the ipynb files * instead of using shell commands (e.g. `ploomber build`), we should use the Python API because it offers a better experience when executed in a notebook. e.g. shell commands appear stuck while running, while the Python API shows a progress bar * I enabled on binder the option to open py files as notebooks with a single click, we should update the tutorial since it says that you need to right-click ## libraries I came across two projects that can help us improve the onboarding experience on binder. [ipylab](https://github.com/jtpio/ipylab) allows interacting with the frontend from Python. This can effectively demonstrate what Ploomber is doing as the user progresses in a specific tutorial. For example, in the introduction tutorial (that users might run from Binder), we could open the data preview after we run the pipeline, the HTML reports, etc. [jupyterlab-tour](https://github.com/jupyterlab-contrib/jupyterlab-tour) allows creating a "tour" on JupyterLab by highlighting specific areas in the user interface. I'm unsure how granular it is since it looks like it only allows highlighting entire UI sections, so not sure how useful this could be. ## share your feedback while searching the docs
1medium
Title: Import broken python 3.9 Body: ## How to reproduce the behaviour ```import spacy``` ## Your Environment <!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.--> * Operating System: Windows 10.0.19045 * Python Version Used: 3.9.13 * spaCy Version Used: 3.7.4 ``` Traceback (most recent call last): File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "C:\Users\User\AppData\Local\Programs\Python\Python39\Scripts\spacy.exe\__main__.py", line 4, in <module> File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\__init__.py", line 13, in <module> from . import pipeline # noqa: F401 File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\pipeline\__init__.py", line 1, in <module> from .attributeruler import AttributeRuler File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\pipeline\attributeruler.py", line 8, in <module> from ..language import Language File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\language.py", line 43, in <module> from .pipe_analysis import analyze_pipes, print_pipe_analysis, validate_attrs File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\pipe_analysis.py", line 6, in <module> from .tokens import Doc, Span, Token File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\tokens\__init__.py", line 1, in <module> from ._serialize import DocBin File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\tokens\_serialize.py", line 14, in <module> from ..vocab import Vocab File "spacy\vocab.pyx", line 1, in init spacy.vocab File "spacy\tokens\doc.pyx", line 49, in init spacy.tokens.doc File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\spacy\schemas.py", line 287, in <module> class TokenPattern(BaseModel): File "pydantic\main.py", line 299, in pydantic.main.ModelMetaclass.__new__ File "pydantic\fields.py", line 411, in pydantic.fields.ModelField.infer File "pydantic\fields.py", line 342, in pydantic.fields.ModelField.__init__ File "pydantic\fields.py", line 451, in pydantic.fields.ModelField.prepare File "pydantic\fields.py", line 545, in pydantic.fields.ModelField._type_analysis File "pydantic\fields.py", line 550, in pydantic.fields.ModelField._type_analysis File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\typing.py", line 852, in __subclasscheck__ return issubclass(cls, self.__origin__) TypeError: issubclass() arg 1 must be a class ```
2hard
Title: [Bug]: v1.9.0 GFPGAN and CodeFormer do not work Body: ### Checklist - [X] The issue exists after disabling all extensions - [X] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [X] The issue exists in the current version of the webui - [X] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? After updating to version 1.9, face restoration broke. This applies to both GFPGAN and CodeFormer: - If the value is 0 or 1, generation completes successfully with the given setting. - If the value is set above 0 but below 1, the error ValueError: images do not match appears *** Error completing request *** Arguments: ('task(23eulwtbdi5llkk)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x2891D69F8B0>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.696, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {} Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f res = func(*args, **kwargs) File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui return run_postprocessing(*args, **kwargs) File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 71, in run_postprocessing scripts.scripts_postproc.run(initial_pp, args) File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run script.process(single_image, **process_args) File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process res = Image.blend(pp.image, res, gfpgan_visibility) File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend return im1._new(core.blend(im1.im, im2.im, alpha)) ValueError: images do not match --- ### Steps to reproduce the problem 1. Go to Extras 2. Drop an image 3. Activate GFPGAN or CodeFormer 4. Set the parameter to 1 5. Click on "Generate" 6. Set the parameter to 0 7. Click on "Generate" 8. Set the parameter to a value between 0 and 1 9. Click on "Generate" ### What should have happened? The strength of the face restoration should change. ### What browsers do you use to access the UI ? Mozilla Firefox, Google Chrome ### Sysinfo [sysinfo-2024-04-15-12-29.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/14979457/sysinfo-2024-04-15-12-29.json) ### Console logs ```Shell Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: v1.9.0 Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b Installing requirements [Auto-Photoshop-SD] Attempting auto-update... [Auto-Photoshop-SD] switch branch to extension branch. checkout_result: Your branch is up to date with 'origin/master'. [Auto-Photoshop-SD] Current Branch. branch_result: * master [Auto-Photoshop-SD] Fetch upstream. fetch_result: [Auto-Photoshop-SD] Pull upstream. pull_result: Already up to date. All models for DeOldify are already downloaded. Installing yt-dlp for DeOldify extension. Installing yt-dlp If submitting an issue on github, please provide the full startup log for debugging purposes. 
Initializing Dreambooth Dreambooth revision: 45a12fe5950bf93205b6ef2b7511eb94052a241f Checking xformers... Checking bitsandbytes... Checking bitsandbytes (ALL!) Checking Dreambooth requirements... Installed version of bitsandbytes: 0.43.0 [Dreambooth] bitsandbytes v0.43.0 is already installed. Installed version of accelerate: 0.21.0 [Dreambooth] accelerate v0.21.0 is already installed. Installed version of dadaptation: 3.2 [Dreambooth] dadaptation v3.2 is already installed. Installed version of diffusers: 0.27.2 [Dreambooth] diffusers v0.25.0 is already installed. Installed version of discord-webhook: 1.3.0 [Dreambooth] discord-webhook v1.3.0 is already installed. Installed version of fastapi: 0.94.0 [Dreambooth] fastapi is already installed. Installed version of gitpython: 3.1.32 [Dreambooth] gitpython v3.1.40 is not installed. Successfully installed gitpython-3.1.43 Installed version of pytorch_optimizer: 2.12.0 [Dreambooth] pytorch_optimizer v2.12.0 is already installed. Installed version of Pillow: 9.5.0 [Dreambooth] Pillow is already installed. Installed version of tqdm: 4.66.2 [Dreambooth] tqdm is already installed. Installed version of tomesd: 0.1.3 [Dreambooth] tomesd v0.1.2 is already installed. Installed version of tensorboard: 2.13.0 [Dreambooth] tensorboard v2.13.0 is already installed. [+] torch version 2.1.2+cu121 installed. [+] torchvision version 0.16.2+cu121 installed. [+] accelerate version 0.21.0 installed. [+] diffusers version 0.27.2 installed. [+] bitsandbytes version 0.43.0 installed. [+] xformers version 0.0.23.post1 installed. Launching Web UI with arguments: No module 'xformers'. Proceeding without it. *** Error loading script: img2img.py Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts script_module = script_loading.load_module(scriptfile.path) File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "W:\stablediffusion v2\webui\scripts\img2img.py", line 16, in <module> from imwatermark import WatermarkEncoder ModuleNotFoundError: No module named 'imwatermark' --- *** Error loading script: txt2img.py Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts script_module = script_loading.load_module(scriptfile.path) File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "W:\stablediffusion v2\webui\scripts\txt2img.py", line 14, in <module> from imwatermark import WatermarkEncoder ModuleNotFoundError: No module named 'imwatermark' --- python_server_full_path: W:\stablediffusion v2\webui\extensions\Auto-Photoshop-StableDiffusion-Plugin\server/python_server [-] ADetailer initialized. 
version: 24.4.1, num models: 10 *** Error loading script: main.py Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts script_module = script_loading.load_module(scriptfile.path) File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "W:\stablediffusion v2\webui\extensions\openpose-editor\scripts\main.py", line 14, in <module> from basicsr.utils.download_util import load_file_from_url ModuleNotFoundError: No module named 'basicsr' --- CivitAI Browser+: Aria2 RPC started ControlNet preprocessor location: W:\stablediffusion v2\webui\extensions\sd-webui-controlnet\annotator\downloads 2024-04-15 15:13:39,359 - ControlNet - INFO - ControlNet v1.1.443 2024-04-15 15:13:39,516 - ControlNet - INFO - ControlNet v1.1.443 [sdwi2iextender] Developper warning: [sdwi2iextender] ./modules/img2img.py is being recompiled at run time with a patch. Your debugger will not work in this file. [sdwi2iextender] If you need debug tools in this file, disable all extensions that use the sdwi2iextender library. [sdwi2iextender] This patch is temporary and will be removed when v1.9 will be released. Loading weights [dcd690123c] from W:\stablediffusion v2\webui\models\Stable-diffusion\Stable SR\Models\v2-1_768-ema-pruned.safetensors [LyCORIS]-WARNING: LyCORIS legacy extension is now loaded, if you don't expext to see this message, please disable this extension. 2024-04-15 15:13:43,634 - ControlNet - INFO - ControlNet UI callback registered. W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:446: GradioDeprecationWarning: 'scale' value should be an integer. Using 0.1 will cause issues. with gr.Column(min_width=100, scale = 0.1): W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:463: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead. model_generation_data = gr.Textbox(label = model_generation_data_label_text(), value = "", lines = 3, elem_id = "def_model_gen_data_textbox").style(show_copy_button=True) W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:466: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead. triggerWords = gr.CheckboxGroup([], multiselect=True, label="Trigger Words", interactive = True).style(container=True, item_container=True) W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:466: GradioDeprecationWarning: The `item_container` parameter is deprecated. triggerWords = gr.CheckboxGroup([], multiselect=True, label="Trigger Words", interactive = True).style(container=True, item_container=True) W:\stablediffusion v2\webui\extensions\model_preset_manager\scripts\main.py:493: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead. output_textbox = gr.Textbox(interactive=False, label="Output").style(show_copy_button=True) W:\stablediffusion v2\webui\modules\gradio_extensons.py:25: GradioDeprecationWarning: `height` is deprecated in `Interface()`, please use it within `launch()` instead. 
res = original_IOComponent_init(self, *args, **kwargs) W:\stablediffusion v2\webui\extensions\stable-diffusion-webui-Prompt_Generator\scripts\prompt_generator.py:229: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead. row.style(equal_height=True) Running on local URL: http://127.0.0.1:7860 To create a public link, set `share=True` in `launch()`. COMMANDLINE_ARGS does not contain --api, API won't be mounted. Startup time: 101.2s (prepare environment: 80.5s, import torch: 5.7s, import gradio: 1.0s, setup paths: 1.4s, initialize shared: 0.2s, other imports: 1.5s, load scripts: 7.9s, create ui: 1.6s, gradio launch: 1.1s). Creating model from config: W:\stablediffusion v2\webui\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml Loading VAE weights specified in settings: W:\stablediffusion v2\webui\models\VAE\vqgan_cfw_00011_vae_only.ckpt Applying attention optimization: Doggettx... done. Model loaded in 8.5s (load weights from disk: 0.1s, find config: 3.0s, create model: 0.1s, apply weights to model: 4.1s, load VAE: 0.4s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.5s). Advanced elements visible: False Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: v1.9.0 Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b Installing requirements Launching Web UI with arguments: No module 'xformers'. Proceeding without it. *** Error loading script: img2img.py Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts script_module = script_loading.load_module(scriptfile.path) File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "W:\stablediffusion v2\webui\scripts\img2img.py", line 16, in <module> from imwatermark import WatermarkEncoder ModuleNotFoundError: No module named 'imwatermark' --- *** Error loading script: txt2img.py Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\scripts.py", line 508, in load_scripts script_module = script_loading.load_module(scriptfile.path) File "W:\stablediffusion v2\webui\modules\script_loading.py", line 14, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "W:\stablediffusion v2\webui\scripts\txt2img.py", line 14, in <module> from imwatermark import WatermarkEncoder ModuleNotFoundError: No module named 'imwatermark' --- Loading weights [dcd690123c] from W:\stablediffusion v2\webui\models\Stable-diffusion\Stable SR\Models\v2-1_768-ema-pruned.safetensors Running on local URL: http://127.0.0.1:7860 To create a public link, set `share=True` in `launch()`. Startup time: 18.4s (prepare environment: 6.7s, import torch: 5.7s, import gradio: 1.0s, setup paths: 1.4s, initialize shared: 0.1s, other imports: 0.6s, load scripts: 1.8s, create ui: 0.7s, gradio launch: 0.1s). 
Creating model from config: W:\stablediffusion v2\webui\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml Loading VAE weights specified in settings: W:\stablediffusion v2\webui\models\VAE\vqgan_cfw_00011_vae_only.ckpt Applying attention optimization: Doggettx... done. Model loaded in 4.7s (load weights from disk: 0.1s, find config: 1.9s, apply weights to model: 1.9s, load VAE: 0.4s, calculate empty prompt: 0.1s). *** Error completing request *** Arguments: ('task(23eulwtbdi5llkk)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x2891D69F8B0>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.696, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {} Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f res = func(*args, **kwargs) File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui return run_postprocessing(*args, **kwargs) File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 71, in run_postprocessing scripts.scripts_postproc.run(initial_pp, args) File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run script.process(single_image, **process_args) File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process res = Image.blend(pp.image, res, gfpgan_visibility) File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend return im1._new(core.blend(im1.im, im2.im, alpha)) ValueError: images do not match --- Loading model Deliberate\Deliberate_v5.safetensors (2 out of 2) Calculating sha256 for W:\stablediffusion v2\webui\models\Stable-diffusion\Deliberate\Deliberate_v5.safetensors: 636fe404e3fd0c612ea3f2bd5d6f66fe8f005c026fac4fb54ee5c811ecd0da2c Loading weights [636fe404e3] from W:\stablediffusion v2\webui\models\Stable-diffusion\Deliberate\Deliberate_v5.safetensors Creating model from config: W:\stablediffusion v2\webui\configs\v1-inference.yaml Applying attention optimization: Doggettx... done. Model loaded in 8.7s (calculate hash: 7.1s, load config: 0.2s, create model: 0.3s, apply weights to model: 0.8s, calculate empty prompt: 0.1s). 
*** Error completing request *** Arguments: ('task(xq5y56sd551hk5t)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x289123D0910>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.696, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {} Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f res = func(*args, **kwargs) File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui return run_postprocessing(*args, **kwargs) File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 71, in run_postprocessing scripts.scripts_postproc.run(initial_pp, args) File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run script.process(single_image, **process_args) File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process res = Image.blend(pp.image, res, gfpgan_visibility) File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend return im1._new(core.blend(im1.im, im2.im, alpha)) ValueError: images do not match --- *** Error completing request *** Arguments: ('task(3mdy8dmj69j1luo)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x289123D2200>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.345, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {} Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f res = func(*args, **kwargs) File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui return run_postprocessing(*args, **kwargs) File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 71, in run_postprocessing scripts.scripts_postproc.run(initial_pp, args) File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run script.process(single_image, **process_args) File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process res = Image.blend(pp.image, res, gfpgan_visibility) File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend return im1._new(core.blend(im1.im, im2.im, alpha)) ValueError: images do not match --- *** Error completing request *** Arguments: ('task(lv65ayupoqo2u2a)', 0.0, <PIL.Image.Image image mode=RGBA size=1024x1150 at 0x289123E1810>, None, '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'None', 'None', 0, True, 0.036, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {} Traceback (most recent call last): File "W:\stablediffusion v2\webui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "W:\stablediffusion v2\webui\modules\call_queue.py", line 36, in f res = func(*args, **kwargs) File "W:\stablediffusion v2\webui\modules\postprocessing.py", line 131, in run_postprocessing_webui return run_postprocessing(*args, **kwargs) File "W:\stablediffusion 
v2\webui\modules\postprocessing.py", line 71, in run_postprocessing scripts.scripts_postproc.run(initial_pp, args) File "W:\stablediffusion v2\webui\modules\scripts_postprocessing.py", line 198, in run script.process(single_image, **process_args) File "W:\stablediffusion v2\webui\scripts\postprocessing_gfpgan.py", line 29, in process res = Image.blend(pp.image, res, gfpgan_visibility) File "W:\stablediffusion v2\system\python\lib\site-packages\PIL\Image.py", line 3340, in blend return im1._new(core.blend(im1.im, im2.im, alpha)) ValueError: images do not match --- ``` ### Additional information Only automatic updates to the latest version to date, version 1.9
1medium
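The traceback above ends in PIL's Image.blend, which requires both inputs to share size and mode; here the uploaded image is RGBA while the restored output is RGB, and blend is only reached when visibility is strictly between 0 and 1, which matches the reported behavior. A small sketch reproducing the failure and the obvious repair (not necessarily the fix webui shipped):

```python
from PIL import Image

rgba = Image.new("RGBA", (64, 64))   # stands in for the uploaded image
rgb = Image.new("RGB", (64, 64))     # stands in for the restored-face output

try:
    Image.blend(rgba, rgb, 0.5)
except ValueError as e:
    print(e)  # "images do not match"

# Converting both images to a common mode before blending avoids the error.
result = Image.blend(rgba.convert("RGB"), rgb, 0.5)
```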
Title: Support for loading geotiff files as a part of the ImageFolder Body: ### Feature request Request for adding rasterio support to load geotiff as a part of ImageFolder, instead of using PIL ### Motivation As of now, there are many datasets in HuggingFace Hub which are predominantly focused on remote sensing or come from remote sensing. The current ImageFolder (if I have understood correctly) uses PIL. This is not really optimal, because these datasets mostly have images with many channels and additional metadata, and using PIL loses that unless we provide a custom script. Hence, maybe an API could be added to support this in common? ### Your contribution If the issue is accepted, I can contribute the code, because I would like to have it automated and generalised.
1medium
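A minimal sketch of the kind of loader the request above implies, assuming rasterio is available; ImageFolder's PIL decoding drops extra bands and geo-metadata, which this keeps:

```python
import numpy as np
import rasterio

def load_geotiff(path):
    """Read a GeoTIFF with all bands plus its geo-metadata."""
    with rasterio.open(path) as src:
        bands = src.read()                       # (bands, height, width), all channels kept
        meta = {"crs": src.crs, "transform": src.transform}
    return np.transpose(bands, (1, 2, 0)), meta  # HWC array plus geo-metadata
```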
Title: tqdm_notebook bar malformed when bar_format is specified. Body: ### System Info ```sh >>> import tqdm, sys >>> print(tqdm.__version__, sys.version, sys.platform) 4.28.1 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 14:01:38) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] darwin ``` ### Issue: With the bar format below, the progress bars are correctly rendered in the terminal but incorrectly in jupyter notebook (instead of filling up the bar, another bar to the right is constructed). <img width="945" alt="screen shot 2018-12-05 at 11 59 51 am" src="https://user-images.githubusercontent.com/1762463/49540671-8590d380-f885-11e8-8b7c-649d47e94aa2.png">
1medium
Title: random uuid with seed Body: Is there a reason why you are using UUIDs in the first place? Thinking you could just set a seed and do randomization with numbers to get deterministic names. E.g. line 177 in dashboard_methods.py ` if not hasattr(self, "name") or self.name is None: self.name = name or "uuid"+shortuuid.ShortUUID().random(length=5) ` _Originally posted by @carlryn in https://github.com/oegedijk/explainerdashboard/issues/38#issuecomment-758700981_
3misc
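A minimal sketch of the suggestion above, assuming a seeded random.Random is acceptable in place of shortuuid; with a fixed seed the generated names are reproducible across runs:

```python
import random

rng = random.Random(42)  # fixed seed -> deterministic sequence of names

def make_name(prefix: str) -> str:
    # 5 hex digits, roughly the role shortuuid's random(length=5) plays
    return f"{prefix}{rng.randrange(16**5):05x}"

print(make_name("uuid"))  # same output on every run with the same seed
```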
Title: Significant Increase in Computation Time When Using Attention Mask in SDPA Attention Body: ### System Info `transformers` version: 4.46.3 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.10 - Python version: 3.8.18 - Huggingface_hub version: 0.25.2 - Safetensors version: 0.4.5 - Accelerate version: 1.0.1 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - use_cpu: False - debug: False - num_processes: 8 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - enable_cpu_affinity: False - PyTorch version (GPU?): 2.4.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: False - Using GPU in script?: True - GPU type: NVIDIA A800-SXM4-40GB ### Who can help? @ylacombe, @eustlb ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Hi, I am experiencing a significant increase in computation time when using an attention mask with the WhisperSdpaAttention in the transformers library. I am not sure if this is expected behavior or a potential bug. Below is the code I used to test this: ``` import torch import time from transformers.models.whisper.modeling_whisper import WhisperSdpaAttention def build_mask(x, x_lens): batch_size = x_lens.size(0) max_seq_len = x_lens.max() # Create a sequence tensor of shape (batch_size, max_seq_len) seq_range = ( torch.arange( 0, max_seq_len, dtype=x_lens.dtype, device=x_lens.device, ) .unsqueeze(0) .expand(batch_size, max_seq_len) ) lengths_expand = x_lens.unsqueeze(1).expand(batch_size, max_seq_len) # Create mask padding_mask = seq_range >= lengths_expand audio_attention_mask_ = padding_mask.view(batch_size, 1, 1, max_seq_len).expand( batch_size, 1, max_seq_len, max_seq_len ) audio_attention_mask = audio_attention_mask_.to( dtype=x.dtype, device=x_lens.device, ) audio_attention_mask[audio_attention_mask_] = float("-inf") return audio_attention_mask device = torch.device("cuda:0") x = torch.randn(2, 200, 128).half().to(device) x_lens = torch.tensor([200, 160]).long().to(device) attn1 = WhisperSdpaAttention(embed_dim=128, num_heads=1, is_causal=False) attn1.to(device).half() with torch.no_grad(): begin = time.time() z = attn1(x) print("sdpa without mask: ", time.time() - begin) begin = time.time() mask = build_mask(x, x_lens).to(device) out = attn1(x, attention_mask=mask) print("sdpa with mask: ", time.time() - begin) ``` The output times are as follows: SDPA without mask: 0.028657197952270508 SDPA with mask: 0.13893771171569824 ### Expected behavior As you can see, the computation time increases significantly when an attention mask is used. Could you please let me know if this is expected behavior or if there might be an issue with the implementation? Thank you!
1medium
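One caveat about the measurement in the issue above: CUDA kernels launch asynchronously, so timing a single call with time.time() can mis-attribute cost (e.g. fold mask construction and launch overhead into the attention time). A sketch of a fairer benchmark with warmup and synchronization, offered as a methodology note rather than a claim about the root cause:

```python
import time
import torch

def timed(fn, *args, warmup=3, iters=20, **kwargs):
    """Average wall time of fn over several iterations, CUDA-synchronized."""
    for _ in range(warmup):          # let kernel selection/caching settle first
        fn(*args, **kwargs)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        fn(*args, **kwargs)
    torch.cuda.synchronize()         # wait for all queued kernels to finish
    return (time.time() - start) / iters
```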
Title: How to set "Parameter content type" Body: I need to change "Parameter content type" from "application/json" to "text/plain" for a "body" type parameter. How can I do it? Thanks.
1medium
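The question above doesn't name a framework; assuming FastAPI (whose Swagger UI shows that dropdown), Body accepts a media_type argument that changes the documented content type. Note this changes what the docs advertise; actually parsing a raw text/plain body may still require reading the Request directly:

```python
from fastapi import Body, FastAPI

app = FastAPI()

@app.post("/echo")
def echo(payload: str = Body(..., media_type="text/plain")):
    # Swagger UI now shows "text/plain" as the parameter content type
    return {"received": payload}
```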
Title: TokenValidator.scope_insufficient seems wrong Body: There have been changes in [TokenValidator.scope_insufficient()](https://github.com/lepture/authlib/blob/1089d5441c8e780a5165ca859b289fc8485ec5eb/authlib/oauth2/rfc6749/resource_protector.py#L33) to support nested required scopes, but I think they introduced a bug. ``` >>> from authlib.oauth2.rfc6749.resource_protector import TokenValidator >>> TokenValidator.scope_insufficient(token_scopes=["read"], required_scopes=["read", "write"]) False ``` This seems wrong, since the token does not have all the required scopes. The reason is that the function now loops over the required scopes and, as soon as it finds a matching scope, returns `False`. In this case it never checks the required `write` scope.
1medium
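A sketch of the check the reporter expects (every required scope must be present), as opposed to the any-match behavior described; whether authlib intends OR semantics for the required-scopes list is a separate question:

```python
def scope_insufficient(token_scopes, required_scopes):
    """True when the token is missing at least one required scope."""
    if not required_scopes:
        return False
    return not set(required_scopes).issubset(set(token_scopes or []))

assert scope_insufficient(["read"], ["read", "write"]) is True
assert scope_insufficient(["read", "write"], ["read", "write"]) is False
```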
Title: We're changing database Body: ## Rollout We're gradually rolling out queries to the new database now. If you're affected, you'll see a banner like this: <img width="770" alt="Screenshot 2024-09-18 at 14 42 24" src="https://github.com/user-attachments/assets/11990bfa-f669-4ca5-bf1a-45c8359da344"> **If you notice queries taking longer or returning errors or different results, please let us know below** or [contact us via email or Slack](https://docs.pydantic.dev/logfire/help/#email). **If you need to continue querying the old database**, you can do so by right-clicking on your profile picture in the top right and setting the query engine to 'TS' (Timescale, the old database): <img width="342" alt="Screenshot 2024-09-18 at 14 44 53" src="https://github.com/user-attachments/assets/f04b2aa3-1484-4ab7-8efe-0ffd7063547e"> **To get rid of the warning banner**, set the query engine to 'TS' and then back to 'FF' (FusionFire, the new database) again. We will be increasing the percentage of users whose default query engine is FF over time and monitoring the impact. We may decrease it again if we notice problems. If you set a query engine explicitly to either TS or FF, this won't affect you. Otherwise, your query engine may switch back and forth. For most users, there shouldn't be a noticeable difference. Most queries should be *faster* with FF, especially if they aggregate lots of data over a long time period. If your dashboards were timing out before with TS, try using FF. However some specific queries that are very fast with TS are slower with FF. In particular, TS can look up trace and span IDs almost instantly without needing a specific time range. **If you click on a link to a trace/span ID in a table, it will open the live view with a time range of 30 days because it doesn't know any better. If this doesn't load, reduce the time range.** ## Summary We're changing the database that stores observability data in the Logfire platform from [Timescale](https://www.timescale.com/) to a custom database built on [Apache Datafusion](https://datafusion.apache.org/). This should bring big improvements in performance, but will lead to some SQL compatibility issues initially (details below). ## Background Timescale is great, it can be really performant when you know the kind of queries you regularly run (so you can set up continuous aggregates) and when you can enable their compression features (which both save money and make queries faster). Unfortunately we can't use either of those features: * our users can query their data however they like using SQL, so continuous aggregates aren't that helpful * Timescale's compression features are incompatible with row level permissions — in Timescale/PostgreSQL we have to have row level permissions since we're running users SQL directly against the database Earlier this year, as the volume of data the Logfire platform received increased in the beta, these limitations became clearer and clearer. The other more fundamental limitation of Timescale was their open/closed source business model. The ideal data architecture for us (and any analytics database I guess) is separated storage and compute: data is stored in S3/GCS as parquet (or equivalent), with an external index used by the query/compute nodes. Timescale has this, but it's completely closed source. So we can either get a scaleable architecture but be forced to use their SAAS, or run Timescale as a traditional "coupled storage and compute" database ourselves. 
For lots of companies either of those solutions would be satisfactory, but if Logfire scales as we hope it does, we'd be scuppered with either. ## Datafusion We settled on Datafusion as the foundation for our new database for a few reasons: 1. It's completely open source so we can build the separated storage and compute solution we want 2. It's all Rust, quite a few of our team are comfortable writing Rust, meaning the database isn't just a black box, we can dive in and improve it as we wish (as an example, Datafusion didn't have JSON querying support until we implemented it in [`datafusion-functions-json`](https://github.com/datafusion-contrib/datafusion-functions-json)). Since starting to use datafusion, our team has contributed 20 or 30 pull requests to datafusion, and associated projects like `arrow-rs` and `sqlparser-rs` 3. Datafusion is extremely extensible, we can adjust the SQL syntax, how queries are planned and run and build indexes exactly as we need them 4. Datafusion's [SQL parser](https://github.com/sqlparser-rs/sqlparser-rs) has pretty good compatibility with Postgres, and again, it's just Rust so we can improve it fairly easily 5. The project is excellently run, part of Apache, leverages the Arrow/Parquet ecosystem, and is used by large organizations like InfluxDB, Apple and Nvidia ## Transition For the last couple of months we've been double-writing to Timescale and Fusionfire (our cringey internal name for the new datafusion-based database), working on improving reliability and performance of Fusionfire for all types of queries. Fusionfire is now significantly (sometimes >10x) faster than timescale for most queries. There's a few low latency queries on very recent data which are still faster on timescale that we're working on improving. Currently by default the live view, explore view, dashboards and alerts use timescale by default. **You can try fusionfire now for everything except alerts by right clicking on your profile picture in the top right and selecting "FF" as the query engine.** In the next couple of weeks we'll migrate fully to Fusionfire and retire timescale. 
We're working hard to make Fusionfire more compatible with PostgreSQL (see https://github.com/sqlparser-rs/sqlparser-rs/pull/1398, https://github.com/sqlparser-rs/sqlparser-rs/pull/1394, https://github.com/sqlparser-rs/sqlparser-rs/pull/1360, https://github.com/apache/arrow-rs/pull/6211, https://github.com/apache/datafusion/pull/11896, https://github.com/apache/datafusion/pull/11876, https://github.com/apache/datafusion/pull/11849, https://github.com/apache/datafusion/pull/11321, https://github.com/apache/arrow-rs/pull/6319, https://github.com/apache/arrow-rs/pull/6208, https://github.com/apache/arrow-rs/pull/6197, https://github.com/apache/arrow-rs/pull/6082, https://github.com/apache/datafusion/pull/11307), but there are still a few expressions which currently don't run correctly (a lot related to intervals): * `generate_series('2024-08-28 00:00:00'::timestamptz, '2024-08-28 00:00:60'::timestamptz, INTERVAL '10 seconds')` * `3 * interval '10 seconds'` * `end_timestamp - interval '1 second' > start_timestamp` — will be fixed by https://github.com/sqlparser-rs/sqlparser-rs/pull/1398 * `extract(seconds from end_timestamp - start_timestamp)` — (`second` without the trailing `s` works thanks to https://github.com/sqlparser-rs/sqlparser-rs/pull/1394) * JSON functions like `jsonb_array_elements` aren't available yet If you notice any other issues, please let us know on this issue or a new issue, and we'll let you know how quickly we can fix it.
1medium
Title: how to create a single field filter? Body: I am trying to define a field on a type that can be filtered; the field only returns one object, so no list. My attempt is this: ``` chat: auto = strawberry_django.field( field_name="chat_set", filters=ChatFilter, default_factory=lambda: None, ) ``` but it still returns a list. I wonder if there is a way to express something like `self.chat_set.get(**filters)` instead of `self.chat_set.filter(**filters)`?
1medium
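One workaround sketch for the issue above: a hand-written resolver that narrows the queryset and returns a single object. ChatType and the name argument are hypothetical stand-ins, and this bypasses strawberry-django's filters machinery rather than wiring ChatFilter into a single-object field:

```python
from typing import Optional

import strawberry

@strawberry.type
class UserType:
    @strawberry.field
    def chat(self, name: Optional[str] = None) -> Optional["ChatType"]:
        qs = self.chat_set.all()          # self is the underlying Django model instance
        if name is not None:
            qs = qs.filter(name=name)     # hypothetical filter field
        return qs.first()                 # a single object instead of a list
```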
Title: [🕹️]No-Code Side Quests Twitter thread Body: ### What side quest or challenge are you solving? I have published a Twitter thread ### Points (🕹️ 150-500 Points) ### Description _No response_ ### Provide proof that you've completed the task The thread link is [here](https://x.com/adil_kadival/status/1848576037954982310)
3misc
Title: Different schema based on API version in Swagger UI Body: I have two API versions, but each has its own schemas ![image](https://user-images.githubusercontent.com/47332486/153483331-47b9687e-0f33-46af-87e0-461b45174a93.png) The problem is that all schemas are displayed on both ends. How can I separate the schemas so that each is displayed under the correct version? ![image](https://user-images.githubusercontent.com/47332486/153483522-c410af87-5db6-4bf9-99c8-1b04e571f119.png) ![image](https://user-images.githubusercontent.com/47332486/153483075-805558b4-31bb-4edd-a022-5a6daf806b30.png) The image above shows a duplicate schema with the (Paged) prefix
1medium
Title: Pydantic type `all_fields` does not include computed fields Body: ## Describe the Bug If a Pydantic model defines a computed field, those fields are excluded from the model when using the `all_fields` kwarg to the `strawberry.experimental.pydantic.type`. I would expect them to be included by default as well, or for there to be a flag like `include_computed_fields` that I could specify to ensure they're exposed by the GraphQL type. Extending the converted model to include the computed field with their proper type works. `strawberry.auto` does not work. See the following: ```python import strawberry from pydantic import BaseModel, computed_field class SomeModel(BaseModel): name: str @computed_field @property def normalized_name(self) -> str: return f"normalized:{self.name}" @strawberry.experimental.pydantic.type(SomeModel, all_fields=True) class ModelType: pass # normalized_name: str @strawberry.type class Query: @strawberry.field(graphql_type=ModelType) def model(self) -> SomeModel: return SomeModel(name="hello") res = strawberry.Schema(query=Query).execute_sync( """ query { model { name normalizedName } } """ ) print(res) ``` In the above code, `normalizedName` doesn't exist on the schema and therefore returns an error. After uncommenting the field from the type, the query returns properly. If the computed field in the converted type is typed with `strawberry.auto`, I get `TypeError: ModelType fields cannot be resolved. Unexpected type 'typing.Any'` ## System Information - Operating system: Linux - Strawberry version (if applicable): `0.235.2` ## Other information I'm not sure if this is a bug or not, but the return typing for the query is also a bit funky. I cannot type the field to return the converted model type. Instead, I have to type the field as the actual pydantic model and specify `graphql_type` in the field arguments. During runtime, both work (incorrect typing and valid typing).
1medium
Title: Get_ee_stac_list() fails Body: <!-- Please search existing issues to avoid creating duplicates. --> ### Environment Information - geemap version: 0.11.0 - Python version: 3.7 - Operating System: On Google Colab ### Description Hi, when trying to run datasets.get_ee_stac_list() it returns [] The URL https://earthengine-stac.storage.googleapis.com/catalog/catalog.json is accessible, so I'm unsure why it returns an empty list. Thanks
1medium
Title: Discrepancy between column property and actual structure after grouping Body: **Describe the issue**: After `groupby` and `reset_index`, the DataFrame's `columns` property has one column missing and one with an incorrect name, while the computed DataFrame has the proper structure. **Minimal Complete Verifiable Example**: ```python import pandas as pd import dask.dataframe as dd data = { 'id': [1, 1, 1, 2, 2, 2], 'date': pd.to_datetime(['2023-01-01', '2023-01-04', '2023-01-05', '2023-01-01', '2023-01-04', '2023-01-05']), 'metric': [1,1,1,1,1,1] } pd_df = pd.DataFrame(data).astype({'id': 'int64', 'metric': 'int64', 'date': 'datetime64[ns]'}) df = dd.from_pandas(pd_df) df = ( df .groupby(by=['id']) .apply(lambda x: x, include_groups=False, meta={'date': 'datetime64[ns]', "metric": "int64", }) .reset_index(drop=False) .persist() ) print('Actual:') print(df.compute()) print(df.columns) pd_df = ( pd_df .groupby(by=['id']) .apply(lambda x: x, include_groups=False) .reset_index(drop=False) ) print("\n\nExpected:") print(pd_df) print(pd_df.columns) ``` ``` Actual: id level_1 date metric 0 1 0 2023-01-01 1 1 1 1 2023-01-04 1 2 1 2 2023-01-05 1 3 2 3 2023-01-01 1 4 2 4 2023-01-04 1 5 2 5 2023-01-05 1 Index(['index', 'date', 'metric'], dtype='object') <---------- extra 'index' column and missing 'id' and 'level_1' Expected: id level_1 date metric 0 1 0 2023-01-01 1 1 1 1 2023-01-04 1 2 1 2 2023-01-05 1 3 2 3 2023-01-01 1 4 2 4 2023-01-04 1 5 2 5 2023-01-05 1 Index(['id', 'level_1', 'date', 'metric'], dtype='object') ``` **Environment**: - Dask version: 2024.8.0 - Python version: 3.10 - Operating System: WSL - Install method (conda, pip, source): poetry
1medium
Title: running demo_ssd.py gets an error Body: ----------Python Info---------- Version : 3.5.6 Compiler : GCC 7.3.0 Build : ('default', 'Aug 26 2018 21:41:56') Arch : ('64bit', '') ------------Pip Info----------- Version : 10.0.1 Directory : /home/z440/miniconda3/envs/mxnet/lib/python3.5/site-packages/pip ----------MXNet Info----------- Version : 1.6.0 Directory : /home/z440/miniconda3/envs/mxnet/lib/python3.5/site-packages/mxnet Commit Hash : b1932c027ba8df081ca398dd8b5d3a893c5bc61d Library : ['/home/z440/miniconda3/envs/mxnet/lib/python3.5/site-packages/mxnet/libmxnet.so']
1medium
Title: if we only use masked LM in training and disable the 'next sentence' task, how should I modify create_pretraining_data.py Body: In the pre-training code, I figured out I can just disable the 'next sentence loss' code
1medium
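For the issue above, a sketch of the usual approach, paraphrasing rather than quoting the BERT repo: in run_pretraining.py the total loss is the masked-LM loss plus the next-sentence loss, so MLM-only training just drops the second term (and create_pretraining_data.py can always emit is_random_next=False so segments stay contiguous):

```python
def total_pretraining_loss(masked_lm_loss, next_sentence_loss, use_nsp=False):
    """Combine BERT pre-training losses; use_nsp=False trains on masked LM only."""
    # the original script always adds next_sentence_loss; this makes it optional
    return masked_lm_loss + next_sentence_loss if use_nsp else masked_lm_loss
```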
Title: modeling_phi3 errors with AttributeError: 'DynamicCache' object has no attribute 'get_max_length' Body: ### System Info - `transformers` version: 4.49.0.dev0 (315a9f494e0e00d8652722ce950be590852a4727~1) - Platform: Windows-10-10.0.20348-SP0 - Python version: 3.11.7 - Huggingface_hub version: 0.28.1 - Safetensors version: 0.5.2 - Accelerate version: 1.3.0 - Accelerate config: not found - PyTorch version (GPU?): 2.6.0+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.10.2 (cpu) - Jax version: 0.5.0 - JaxLib version: 0.5.0 - Using distributed or parallel set-up in script?: no - Using GPU in script?: no - GPU type: NVIDIA RTX A5000 ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Use Phi3 with any cache configuration, including default (DynamicCache) I think get_max_length is probably declared on a mixin that isn't on the cache classes yet? ``` comfy_extras\nodes\nodes_language.py:361: in execute return model.generate(tokens, max_new_tokens, repetition_penalty, seed, sampler), comfy\language\transformers_model_management.py:228: in generate output_ids = transformers_model.generate( ..\..\.venv\Lib\site-packages\torch\utils\_contextlib.py:116: in decorate_context return func(*args, **kwargs) ..\..\.venv\Lib\site-packages\transformers\generation\utils.py:2224: in generate result = self._sample( ..\..\.venv\Lib\site-packages\transformers\generation\utils.py:3198: in _sample model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) C:\Users\bberman\.cache\huggingface\modules\transformers_modules\c1358f8a35e6d2af81890deffbbfa575b978c62f\modeling_phi3.py:1292: in prepare_inputs_for_generation max_cache_length = past_key_values.get_max_length() ``` ### Expected behavior related to #35168? I'm not sure why this is only coming up with phi-3 so far
1medium
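A hedged workaround sketch for the issue above: recent transformers releases renamed get_max_length to get_max_cache_shape, while the trust_remote_code modeling_phi3.py still calls the old name; re-aliasing it can unblock generation. Verify the rename exists in your installed version before relying on this:

```python
from transformers.cache_utils import DynamicCache

# Restore the old name expected by the remote-code Phi-3 modeling file.
if not hasattr(DynamicCache, "get_max_length"):
    DynamicCache.get_max_length = DynamicCache.get_max_cache_shape
```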
Title: raise NotImplementedError in BaseDBBackend Body: ### First Check - [X] I added a very descriptive title to this issue. - [X] I already read and followed all the tutorials in the docs and didn't find an answer. - [X] I already checked if it is not related to AuthX but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to AuthX but to [FastAPI](https://github.com/tiangolo/fastapi). ### Example Code ```python Following this example: https://github.com/yezz123/authx/blob/main/example/app/main.py ``` ### Description I cloned the repository, and I'm trying out AuthX as it looks to be what I want. I've run into the following issues: 1. A sqlite DB isn't generated. I can see that in the `BaseDBBackend` class there are a bunch of `raise NotImplementedError` statements; does this mean BaseDBBackend isn't finished yet? 2. When starting the app and navigating to `/docs`, I can see a bunch of endpoints, but the `register` endpoint, for example, doesn't let me put in any parameters. When will the sqlite DB backend be finished? ### Operating System Linux ### Operating System Details _No response_ ### FastAPI Version 0.77.1 ### Python Version 3.10.4 ### Additional Context ![image](https://user-images.githubusercontent.com/11299982/176925284-36a61461-22a3-40c2-bb81-6ffda68c8d35.png)
1medium
Title: feature request: provider for "self" Body: I would like a provider to be able to pass a container as an argument: ```python class Container(containers.DeclarativeContainer): foo = providers.Callable(calc_foo, containers.MarkerForContainer) bar = providers.Object('hello') container = Container() container.override_providers(container=container) def calc_foo(container): print(container.bar()) container.foo() # prints "hello" - ? ``` I assume that is impossible to do directly, right now? I guess perhaps I could do: ```python class Container(containers.DeclarativeContainer): container = providers.DependenciesContainer() foo = providers.Callable(calc_foo, container) bar = providers.Object('hello') container = Container() container.override_providers(container=container) def calc_foo(container): print(container.bar()) container.foo() # prints "hello" - ? ``` But having the container work without having to put cheese in it (to use a mousetrap analogy) would be great... any chance of something like the former (if indeed it isn't possible)?
1medium
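Later dependency-injector releases added providers.Self() for exactly the request above; a sketch assuming a version that ships it:

```python
from dependency_injector import containers, providers

def calc_foo(container):
    print(container.bar())

class Container(containers.DeclarativeContainer):
    __self__ = providers.Self()            # resolves to the container instance
    bar = providers.Object("hello")
    foo = providers.Callable(calc_foo, __self__)

container = Container()
container.foo()  # prints "hello"
```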
Title: Is this project still maintained? Body: @syrusakbary thank you for this great project! I noticed that there have been a lot of commits since the last release, which was 6 months ago. Are you still planning on working on this project? Best regards
3misc
Title: Unique items should be allowed as list in Pydantic v2 Body: **Describe the bug** When Pydantic v2 is used, i.e., `--output-model-type pydantic_v2.BaseModel`, any field tagged with unique items will always be created as set. While it is understandable to use set to ensure unique items, there are distinct differences between set and list. Some important ones are: * Set does not preserve order like list. * Set requires items to be hashable. As such, for many applications it is desirable to use list to store data even when unique items are used. Note that there is a `--use-unique-items-as-set` flag, which implies that list should be the default (as is the case when other output model types are used). May I suggest using list for Pydantic v2 as well? Alternatively, can we support a `--use-unique-items-as-list` flag? **To Reproduce** Example schema: ```json { "$schema": "https://json-schema.org/draft/2020-12/schema", "title": "Example", "type": "object", "properties": { "data": { "type": "array", "uniqueItems": true } } } ``` Used commandline: ``` $ datamodel-codegen --output-model-type pydantic_v2.BaseModel --input schema.json --output model.py ``` **Expected behavior** Ability to use list instead of set in the output model. **Version:** - OS: macOS - Python version: 3.11.4 - datamodel-code-generator version: 0.21.2
1medium
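A sketch of the output shape the request above asks for: a hand-written Pydantic v2 model (not generated) that keeps data a list, preserves order, and still rejects duplicates; the membership check uses a list, so unhashable items work too:

```python
from typing import Any, List, Optional

from pydantic import BaseModel, field_validator

class Example(BaseModel):
    data: Optional[List[Any]] = None

    @field_validator("data")
    @classmethod
    def data_items_unique(cls, v):
        seen = []                      # list membership tolerates unhashable items
        for item in v or []:
            if item in seen:
                raise ValueError("data items must be unique")
            seen.append(item)
        return v
```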
Title: ROC Curve widget sets a wrong prior probability Body: With apologies to self for writing such a bad bug report: I have no time to properly explore it now, but I have to write this down lest I forget. I encountered a situation (on Pima diabetes data) in which the ROC widget's target was set to 1, but the prior probability was that for class 0. I changed the target to 0 and back to 1, and the prior probability was reset properly. My hunch is that if the widget was loaded in the workflow, the target is retrieved from settings, but the prior probability is set earlier and disregards the target. Changing the target back and forth calls the necessary callbacks and updates the prior probability. This is just a hypothesis; I don't have time to actually reproduce the bug and check the code.
1medium
Title: Using dirichlet sampler directly in Dirichlet distribution Body: After https://github.com/google/jax/pull/9906, `jax.random.dirichlet` should be robust for small concentration, so we can remove the current trick that we put in the Dirichlet sampler.
1medium
Title: Wan model is not working in MacOs if scheduler is `uni_pc` Body: ### Expected Behavior A normal video output ### Actual Behavior https://github.com/user-attachments/assets/66051ca7-ccd2-4fb9-a186-a9bf4e974772 ### Steps to Reproduce I was trying to use Wan 2.1 model in Comfy in my macbook pro (M2). And use the example workflow from [blog examples](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/example%20workflows_Wan2.1/image_to_video_wan_480p_example.json). ### Debug Logs ```powershell env PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py --force-upcast-attention [START] Security scan [DONE] Security scan ## ComfyUI-Manager: installing dependencies done. ** ComfyUI startup time: 2025-03-01 13:19:21.836 ** Platform: Darwin ** Python version: 3.11.11 (main, Jan 5 2025, 06:40:04) [Clang 19.1.6 ] ** Python executable: /Users/edwin/AI/.venv/bin/python ** ComfyUI Path: /Users/edwin/AI/ComfyUI ** ComfyUI Base Folder Path: /Users/edwin/AI/ComfyUI ** User directory: /Users/edwin/AI/ComfyUI/user ** ComfyUI-Manager config path: /Users/edwin/AI/ComfyUI/user/default/ComfyUI-Manager/config.ini ** Log path: /Users/edwin/AI/ComfyUI/user/comfyui.log Prestartup times for custom nodes: 0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/rgthree-comfy 1.1 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-manager Checkpoint files will always be loaded safely. Total VRAM 65536 MB, total RAM 65536 MB pytorch version: 2.7.0.dev20250210 xformers version: 0.0.29.post3 Set vram state to: SHARED Device: mps Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention ComfyUI version: 0.3.18 [Prompt Server] web root: /Users/edwin/AI/ComfyUI/web ### Loading: ComfyUI-Manager (V3.18.1) ### ComfyUI Version: v0.3.18 | Released on '2025-02-26' (pysssss:WD14Tagger) [DEBUG] Available ORT providers: CoreMLExecutionProvider, AzureExecutionProvider, CPUExecutionProvider (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper/wanvideo/modules/model.py:29: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead. @amp.autocast(enabled=False) /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper/wanvideo/modules/model.py:42: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead. @amp.autocast(enabled=False) [rgthree-comfy] Loaded 42 fantastic nodes. 
🎉 Total VRAM 65536 MB, total RAM 65536 MB pytorch version: 2.7.0.dev20250210 xformers version: 0.0.29.post3 Set vram state to: SHARED Device: mps [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json ------------------------------------------ Comfyroll Studio v1.76 : 175 Nodes Loaded ------------------------------------------ ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki ------------------------------------------ Import times for custom nodes: 0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/websocket_image_save.py 0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui_ipadapter_plus 0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-wd14-tagger 0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-custom-scripts 0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/rgthree-comfy 0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-GGUF 0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes 0.0 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-IPAdapter-Flux 0.1 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper 0.2 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-manager 0.2 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-kjnodes 0.3 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-videohelpersuite 0.6 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-mvadapter 0.8 seconds: /Users/edwin/AI/ComfyUI/custom_nodes/comfyui-florence2 Starting server To see the GUI go to: http://127.0.0.1:8188 FETCH ComfyRegistry Data: 5/35 got prompt Using split attention in VAE Using split attention in VAE VAE load device: mps, offload device: cpu, dtype: torch.bfloat16 FETCH ComfyRegistry Data: 10/35 Requested to load CLIPVisionModelProjection loaded completely 9.5367431640625e+25 1208.09814453125 True Requested to load WanTEModel loaded completely 9.5367431640625e+25 6419.477203369141 True CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16 FETCH ComfyRegistry Data: 15/35 FETCH ComfyRegistry Data: 20/35 FETCH ComfyRegistry Data: 25/35 FETCH ComfyRegistry Data: 30/35 FETCH ComfyRegistry Data: 35/35 FETCH ComfyRegistry Data [DONE] [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE] [ComfyUI-Manager] All startup tasks have been completed. Requested to load WanVAE loaded completely 9.5367431640625e+25 242.02829551696777 True /Users/edwin/AI/ComfyUI/custom_nodes/ComfyUI-GGUF/loader.py:65: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. 
This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:209.) torch_tensor = torch.from_numpy(tensor.data) # mmap ggml_sd_loader: 0 823 12 360 14 120 model weight dtype torch.bfloat16, manual cast: None model_type FLOW Requested to load WAN21 loaded completely 9.5367431640625e+25 10943.232666015625 True 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [58:06<00:00, 174.34s/it] Requested to load WanVAE loaded completely 9.5367431640625e+25 242.02829551696777 True Prompt executed in 3739.42 seconds ``` ### Other However, I found that I could fix it after I changed the KSampler sampler name to euler or euler-ancestral and ~KSampler scheduler to normal~ (Edit, it is not important). (Thanks for this [reddit post](https://www.reddit.com/r/comfyui/comments/1izktly/comment/mf3omam/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button))
1medium
Title: `nuts.get_extra_fields()["num_steps"]=0` after warmup Body: I have the following piece of code:

```python
import jax.random as jr
from numpyro.infer import MCMC, NUTS  # imports added for completeness

nuts = MCMC(
    NUTS(model_logreg),
    num_warmup=2**13,
    num_samples=2**10,
    num_chains=2**5,
    chain_method="vectorized",
)
nuts.warmup(jr.key(2), x_train, labels_train, extra_fields=("num_steps",))
warmup_steps = nuts.get_extra_fields()["num_steps"]
print(f"num warmup steps: {warmup_steps}")
```

which returns

```
warmup: 100%|██████████| 8192/8192 [00:41<00:00, 199.13it/s]
num warmup steps: [0 0 0 ... 0 0 0]
```

If I do `nuts.run(jr.key(2), x_train, labels_train, extra_fields=("num_steps",))` it works just fine and reports a non-zero number of steps (although I suspect it doesn't count the warmup steps). Also the sampling itself works as intended and results in the correct distribution, so the problem probably isn't in my code. And the warmup does indeed work, because if I set `num_warmup=0`, then the output becomes biased towards the initial value. This is quite bad because it makes it seem that NUTS can achieve good results with a very small number of gradient evaluations, giving it an unfair advantage over other samplers. Also, I saw this issue mentioned in the following thread, but it apparently hasn't been addressed yet: https://forum.pyro.ai/t/how-to-calculate-effective-sample-size-per-gradient-evaluation/5398/7
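A possible way to keep the warmup diagnostics, assuming NumPyro's `MCMC.warmup` supports the `collect_warmup` flag (worth verifying against the installed version); this is a sketch, not a confirmed fix for the zeroed fields:

```python
# sketch: ask MCMC to retain the warmup samples and their extra fields
nuts.warmup(
    jr.key(2),
    x_train,
    labels_train,
    extra_fields=("num_steps",),
    collect_warmup=True,  # assumption: controls whether warmup fields are kept
)
warmup_steps = nuts.get_extra_fields()["num_steps"]
print(f"total warmup gradient steps: {warmup_steps.sum()}")
```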
1medium
Title: How to register detection in `PolygonZone` for any overlap Body: ### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests. ### Question How can I register a detection inside a `PolygonZone` when there is any overlap, without requiring the entire bounding box to be contained inside the zone? ![image](https://github.com/roboflow/supervision/assets/3464445/65e4e5c5-928c-4bbc-9115-626e34e60706) I tried using `triggering_anchors` but it didn't work:

```python
from supervision import Position  # import added for completeness

polygon_zone = sv.PolygonZone(
    polygon=zone_ndarray,
    frame_resolution_wh=(img_width, img_height),
    # Make a detection be considered inside the zone if there is any overlap.
    triggering_anchors=[
        Position.TOP_LEFT,
        Position.TOP_RIGHT,
        Position.BOTTOM_RIGHT,
        Position.BOTTOM_LEFT,
    ],
)
```

Thanks! ### Additional _No response_
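As a reference for this question, a hedged workaround sketch that checks geometric overlap directly with `shapely` instead of anchor points (assumes `shapely` is installed and that `detections.xyxy` holds `(x1, y1, x2, y2)` boxes):

```python
import numpy as np
from shapely.geometry import Polygon, box

zone_polygon = Polygon(zone_ndarray)

# a detection counts as "in the zone" if its box overlaps the polygon at all
overlaps = np.array([
    box(x1, y1, x2, y2).intersects(zone_polygon)
    for x1, y1, x2, y2 in detections.xyxy
])
detections_in_zone = detections[overlaps]
```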
1medium
Title: Add `create` method Body: If you want to create an object and save it, the current approach is: ```python band = Band(name="Pythonistas") await band.save().run() ``` We can add a `create` method, which might be preferable for some people coming from other ORMs: ```python band = await Band.objects().create(name="Pythonistas").run() ```
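For illustration, a minimal sketch of how the method might be implemented; the classmethod form and names are made up for this example, not Piccolo's actual internals (the `objects().create()` spelling would simply route the same logic through the query API):

```python
class Table:  # illustrative stand-in for Piccolo's Table base class
    @classmethod
    async def create(cls, **kwargs) -> "Table":
        """Instantiate a row, persist it, and return it in one call."""
        instance = cls(**kwargs)
        await instance.save().run()
        return instance
```

Usage would then collapse the two-step pattern above into `band = await Band.create(name="Pythonistas")`.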
1medium
Title: Login immediately on register when using JWT HTTP only Body: Hi there, Is there a way to have a user logged in immediately on register? I have set `ACCOUNT_EMAIL_VERIFICATION = 'optional'` and want the flow to log a user in once they register (they can then verify their email at their convenience), but the register view doesn't set the JWT cookies, so the user is still required to hit the Login view separately after registering... Is there a configuration or adjustment I can make to log in a user with JWT immediately after they register? Thanks :)
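For anyone hitting this before a built-in option exists, a heavily hedged sketch: subclass the register view and attach the cookies yourself. The `set_jwt_cookies` helper is assumed to live in `dj_rest_auth.jwt_auth`; verify the import and signature against your installed version:

```python
from dj_rest_auth.jwt_auth import set_jwt_cookies  # assumption: verify this import
from dj_rest_auth.registration.views import RegisterView
from rest_framework_simplejwt.tokens import RefreshToken


class AutoLoginRegisterView(RegisterView):
    """Register, then immediately attach JWT cookies to the response."""

    def perform_create(self, serializer):
        user = super().perform_create(serializer)
        self._new_user = user  # stash the new user for create()
        return user

    def create(self, request, *args, **kwargs):
        response = super().create(request, *args, **kwargs)
        refresh = RefreshToken.for_user(self._new_user)
        set_jwt_cookies(response, refresh.access_token, refresh)
        return response
```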
1medium
Title: Webdataset data format problem Body: ### Describe the bug Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1 Error code: FileFormatMismatchBetweenSplitsError All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format? (I don't think there is currently a way, but happy to be told that I am wrong.) ### Steps to reproduce the bug

```
import datasets
datasets.load_dataset("ejschwartz/idioms")
```

### Expected behavior The dataset loads. Alternatively, there is a YAML syntax for manually specifying the format. ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.28.1 - PyArrow version: 19.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
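If it helps triage, a hedged workaround sketch that names the builder explicitly instead of relying on per-split format inference (the shard paths are assumptions about the repo layout and would need adjusting):

```python
import datasets

# point load_dataset at the webdataset builder directly and list the
# shard files per split, so no cross-split format inference happens
ds = datasets.load_dataset(
    "webdataset",
    data_files={
        "train": "data/train-*.tar",            # hypothetical paths
        "test": "data/test-*.tar",
        "validation": "data/validation-*.tar",
    },
)
```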
1medium
Title: [BUG] Param "dark" does not work Body: `dark` does not work in the current version. It worked well in older versions. I didn't change my code; I just ran an old notebook. I can only click the theme buttons to change the theme, and my choice won't be remembered. Name: pygwalker Version: 0.4.8.9 Python 3.9 Jupyter Lab
1medium
Title: export-schema does not include ENUM descriptions Body: Allow `strawberry export-schema` to use, say, the first line of a docstring (or the full docstring) as a description and store it as part of the `schema.graphql` file - for entities and for attributes. ## Feature Request Type - [ ] Core functionality - [ ] Alteration (enhancement/optimization) of existing feature(s) - [x] New behavior ## Description Our Federated GQL schema requires each attribute to have comments. It seems that currently there is no way to auto-add docstrings to the exported schema file via strawberry.
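For illustration, a sketch of the intended behavior; the enum is made up, and the comment shows the SDL we would want `export-schema` to emit:

```python
from enum import Enum

import strawberry


@strawberry.enum
class Flavor(Enum):
    """Available ice cream flavors."""

    VANILLA = "vanilla"
    CHOCOLATE = "chocolate"

# desired schema.graphql output (shown here as a comment):
#
#   """Available ice cream flavors."""
#   enum Flavor {
#     VANILLA
#     CHOCOLATE
#   }
```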
1medium
Title: Verify Image button has no icon Body: "Verify Image" button has no icon in the windows binaries v1.5.2
0easy
Title: Cannot view the data in actual PostgreSQL Body: @gunthercox I have connected my ChatterBot instance to PostgreSQL, and it trained on the data that I specified in the file. After training, it created the db "jer". Here's my code:

```python
bot = ChatBot(
    "Terminal",
    storage_adapter="chatterbot.storage.SQLStorageAdapter",
    trainer='chatterbot.trainers.ListTrainer',
    database_uri='postgresql://postgres:root@localhost:5432/jer',
    database="jer"
)
```

Can you please suggest how to view the database in PostgreSQL? I can see that the db was created, but I cannot see the data in the actual Postgres database. Can't we have control of the database that is created after running the ChatterBot Python code?
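In the meantime, a hedged sketch for inspecting what was written, assuming SQLAlchemy (which `SQLStorageAdapter` uses internally) is available; the table names, commonly `statement` and `tag`, may differ by ChatterBot version:

```python
from sqlalchemy import create_engine, inspect, text

engine = create_engine("postgresql://postgres:root@localhost:5432/jer")

# list the tables the storage adapter created
print(inspect(engine).get_table_names())

# peek at a few trained statements (table name is an assumption)
with engine.connect() as conn:
    for row in conn.execute(text("SELECT text FROM statement LIMIT 10")):
        print(row)
```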
1medium
Title: Add support for Jupyter widgets / ipywidgets Body: ### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [x] I added a descriptive title and summary to this issue. ### Summary Jupyter Widgets are [interactive browser controls](https://github.com/jupyter-widgets/ipywidgets/blob/main/docs/source/examples/Index.ipynb) for Jupyter notebooks. Implement support for using ipywidgets elements in a Streamlit app. ### Why? _No response_ ### How? ```python import ipywidgets as widgets widget = st.ipywidgets(widgets.IntSlider()) st.write(widget.value) ``` ### Additional Context - Related to https://github.com/streamlit/streamlit/issues/10746 - Related discussion: https://discuss.streamlit.io/t/ipywidgets-wip/3870
1medium
Title: ModuleNotFoundError: No module named 'numpy.lib.histograms' Body: **General Information:** - OS: Sonoma 14.5 - Python version: 3.9.10 - Library version: 0.12.0 **Describe the bug:** When attempting to setup a Python virtual environment, I run `make setup` per this [Contribution](https://github.com/capitalone/DataProfiler/blob/main/.github/CONTRIBUTING.md) guideline. When the Makefile executes `pre-commit run`, the `check-manifest` stage fails with an error of ``` ImportError: A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'. If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2. Traceback (most recent call last): [...] ModuleNotFoundError: No module named 'numpy.lib.histograms' ``` I believe this to be a result of the latest [NumPy 2.0.0 release](https://github.com/numpy/numpy/releases) as of three weeks ago. **To Reproduce:** Run `make setup` per this [Contribution](https://github.com/capitalone/DataProfiler/blob/main/.github/CONTRIBUTING.md) guideline. **Expected behavior:** The Python virtual environment should be successfully set up. Instead, I encounter this NumPy error. **Screenshots:** <img width="683" alt="image" src="https://github.com/capitalone/DataProfiler/assets/83050155/0e581739-a63f-44e1-b8f8-ff37a625e626"> **Additional context:** This is similar to #1154, however I encounter this issue when setting up the virtual environment rather than running a Python file that imports DataProfiler.
1medium
Title: class ZoneoutLSTMCell(tf.nn.rnn_cell.RNNCell): AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell' Body: class ZoneoutLSTMCell(tf.nn.rnn_cell.RNNCell): AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell'
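This error typically means TF 1.x-style code is running under TensorFlow 2, where `rnn_cell` moved into the compat namespace. A hedged sketch of the usual fix, assuming TF2 with the v1 compat layer available:

```python
import tensorflow as tf

# under TensorFlow 2, the TF1 RNN cell base class is reachable via compat.v1
class ZoneoutLSTMCell(tf.compat.v1.nn.rnn_cell.RNNCell):
    pass  # original ZoneoutLSTMCell implementation goes here
```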
1medium
Title: TensorLayer 2.0 Body: # NETWORK API REFACTORING - TO DO LIST ## [Design Docs](https://github.com/luomai/tensorlayer2-design/issues/7) ## [Refactoring Codes](https://github.com/zsdonghao/tensorlayer2) Dear Contributors, @DEKHTIARJonathan @akaraspt @luomai @lgarithm @JingqingZ @fangde etal. As we discussed previously, TensorLayer 2.0 should support both eager and graph mode. The new API design is here https://github.com/luomai/tensorlayer2-design/issues/7 To make the refactoring faster, I simply fork tensorlayer/tensorlayer into zsdonghao/tensorlayer2: https://github.com/zsdonghao/tensorlayer2 , we can merge the branch back to tensorlayer/tensorlayer when the refactoring is finished. In doing so, the contributions will be in may commits rather than only 1. # Work to be done ## Layers - [x] **core.py:** * Layer: - [x] refactored @JingqingZ 2019/01/28 - [x] tested @JingqingZ 2019/01/31 2019/03/06 - [x] documentation @JingqingZ 2019/03/06 * ModelLayer: - [x] created @JingqingZ 2019/01/28 - [x] tested @JingqingZ 2019/03/06 - [x] documentation @JingqingZ 2019/03/06 * LayerList: - [x] created @JingqingZ 2019/01/28 @ChrisWu1997 - [x] tested @JingqingZ 2019/03/06 - [x] documentation @JingqingZ 2019/03/06 * LayerNode: - [x] created @ChrisWu1997 - [x] tested @ChrisWu1997 2019/03/22 - [x] documentation @ChrisWu1997 2019/03/22 - [x] **activation.py:** * PRelu: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/20 - [x] tested @JingqingZ 2019/03/20 - [x] documentation @JingqingZ 2019/03/20 * PRelu6: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/20 - [x] tested @JingqingZ 2019/03/20 - [x] documentation @JingqingZ 2019/03/20 * PTRelu6: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/20 - [x] tested @JingqingZ 2019/03/20 - [x] documentation @JingqingZ 2019/03/20 - **convolution/** * AtrousConv1dLayer, AtrousConv2dLayer and AtrousDeConv2d are removed, use Conv1d/2d and DeConv2d with `dilation_rate` instead. 
(🀄️remember to change CN docs) * BinaryConv2d: - [x] refactored @zsdonghao 2018/12/05 - [x] tested @warshallrho 2019/03/16 - [x] documentation @warshallrho 2019/03/20 * Conv1d: - [x] refactored @zsdonghao 2019/01/16 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * Conv2d: - [x] refactored @zsdonghao 2019/01/16 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * Conv3d: - [x] add @zsdonghao 2019/01/16 : (🀄️remember to change CN docs) - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * Conv1dLayer: - [x] refactored @zsdonghao 2018/12/05 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * Conv2dLayer: - [x] refactored @zsdonghao 2018/12/05 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * Conv3dLayer: - [x] refactored @zsdonghao 2018/12/05 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * DeConv1dLayer: - [x] refactored @warshallrho 2019/03/16 - [x] tested @warshallrho 2019/03/16 - [x] documentation @warshallrho 2019/03/17 * DeConv2dLayer: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * DeConv3dLayer: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * DeConv2d: - [x] refactored @zsdonghao 2019/01/16 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * DeConv3d: - [x] refactored @zsdonghao 2019/01/16 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/17 * DeformableConv2d: - [x] refactored @warshallrho 2019/03/18 - [x] tested @warshallrho 2019/03/18 - [x] documentation @warshallrho 2019/03/18 * DepthwiseConv2d: - [x] refactored @zsdonghao 2018/12/05 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/18 * DorefaConv2d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/17 - [x] documentation @warshallrho 2019/03/20 * GroupConv2d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/17 - [x] documentation @warshallrho 2019/03/20 * QuanConv2d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/17 - [x] documentation @warshallrho 2019/03/20 * QuanConv2dWithBN: - [ ] refactored - [ ] tested - [ ] documentation * SeparableConv1d: - [x] refactored @zsdonghao 2019/01/16 - [x] tested @warshallrho 2019/03/17 - [x] documentation @warshallrho 2019/03/18 * SeparableConv2d: - [x] refactored @zsdonghao 2019/01/16 - [x] tested @warshallrho 2019/03/17 - [x] documentation @warshallrho 2019/03/18 * SubpixelConv1d: - [x] refactored @zsdonghao 2018/12/05 @warshallrho 2019/03/18 - [x] tested @warshallrho 2019/03/18 - [x] documentation @warshallrho 2019/03/18 * SubpixelConv2d: - [x] refactored @zsdonghao 2018/12/05 @warshallrho 2019/03/18 - [x] tested @warshallrho 2019/03/18 - [x] documentation @warshallrho 2019/03/18 * TernaryConv2d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/17 - [x] documentation @warshallrho 2019/03/20 - **dense/** [WIP] @ChrisWu1997 * BinaryDense: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @ChrisWu1997 2019/04/23 _need further test by example_ - [x] documentation @ChrisWu1997 2019/04/23 * Dense: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/01/28 - [x] tested @JingqingZ 2019/01/31 2019/03/06 2019/03/15 - [x] documentation @JingqingZ 2019/03/15 * DorefaDense: - [x] 
refactored @zsdonghao 2018/12/04 - [x] tested @ChrisWu1997 2019/04/23 _need further test by example_ - [x] documentation @ChrisWu1997 2019/04/23 * DropconnectDense: - [x] refactored @zsdonghao 2018/12/05 - [x] tested @ChrisWu1997 2019/04/23 _need further test by example_ - [x] documentation @ChrisWu1997 2019/04/23 * QuanDense: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @ChrisWu1997 2019/04/23 _need further test by example_ - [x] documentation @ChrisWu1997 2019/04/23 * QuanDenseWithBN: - [ ] refactored - [ ] tested - [ ] documentation * TernaryDense: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @ChrisWu1997 2019/04/23 _need further test by example_ - [x] documentation @ChrisWu1997 2019/04/23 - **dropout.py** * Dropout: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/01/28 - [x] tested @JingqingZ 2019/01/31 2019/03/06 2019/03/15 - [x] documentation @JingqingZ 2019/03/15 - **extend.py** * ExpandDims: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22 - [x] tested @JingqingZ 2019/03/22 - [x] documentation @JingqingZ 2019/03/22 * Tile: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22 - [x] tested @JingqingZ 2019/03/22 - [x] documentation @JingqingZ 2019/03/22 - **image_resampling.py** * UpSampling2d: - [x] refactored @zsdonghao 2018/12/04 @ChrisWu1997 2019/04/03 - [x] tested @ChrisWu1997 2019/04/03 - [x] documentation @ChrisWu1997 2019/04/03 * DownSampling2d: - [x] refactored @zsdonghao 2018/12/04 @ChrisWu1997 2019/04/03 - [x] tested @ChrisWu1997 2019/04/03 - [x] documentation @ChrisWu1997 2019/04/03 - **importer.py** * SlimNets: - [ ] refactored - [ ] tested - [ ] documentation * Keras: - [ ] refactored - [ ] tested - [ ] documentation - **inputs.py** * Input: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/01/28 - [x] tested @JingqingZ 2019/03/06 - [x] documentation @JingqingZ 2019/03/06 - **embedding.py** * OneHotInput: --> OneHot (🀄️remember to change CN docs) - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/23 - [x] tested @JingqingZ 2019/03/19 - [x] documentation @JingqingZ 2019/03/19 * Word2vecEmbeddingInput: --> Word2vecEmbedding (🀄️remember to change CN docs) - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/21 - [x] tested @JingqingZ 2019/03/19 - [x] documentation @JingqingZ 2019/03/19 * EmbeddingInput: --> Embedding - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/22 - [x] tested @JingqingZ 2019/03/19 - [x] documentation @JingqingZ 2019/03/19 * AverageEmbeddingInput: --> AverageEmbedding (🀄️remember to change CN docs) - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/20 - [x] tested @JingqingZ 2019/03/19 - [x] documentation @JingqingZ 2019/03/19 - **lambda_layers.py** * ElementwiseLambda: - [x] refactored @JingqingZ 2019/03/24 - [x] tested @JingqingZ 2019/03/24 - [x] documentation @JingqingZ 2019/03/24 * Lambda: - [x] refactored @JingqingZ 2019/03/24 - [x] tested @JingqingZ 2019/03/24 - [x] documentation @JingqingZ 2019/03/24 - **merge.py** * Concat: - [x] refactored @zsdonghao 2018/12/04 - [x] tested @JingqingZ 2019/03/15 - [x] documentation @JingqingZ 2019/03/15 * Elementwise: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/15 - [x] tested @JingqingZ 2019/03/15 - [x] documentation @JingqingZ 2019/03/15 - **noise.py** * GaussianNoise: - [x] refactored @zsdonghao 2018/12/04 - [x] tested @warshallrho 2019/03/20 - [x] documentation @warshallrho 2019/03/20 - **normalization.py** * BatchNorm: - [x] refactored @ChrisWu1997 2019/01/22 @ChrisWu1997 2019/03/05 - [x] tested @ChrisWu1997 2019/03/22 
- [x] documentation @ChrisWu1997 2019/03/22 * BatchNorm1d: - [x] refactored @ChrisWu1997 2019/03/05 - [x] tested @ChrisWu1997 2019/03/22 - [x] documentation @ChrisWu1997 2019/03/22 * BatchNorm2d: - [x] refactored @ChrisWu1997 2019/03/05 - [x] tested @ChrisWu1997 2019/03/22 - [x] documentation @ChrisWu1997 2019/03/22 * BatchNorm3d: - [x] refactored @ChrisWu1997 2019/03/05 - [x] tested @ChrisWu1997 2019/03/22 - [x] documentation @ChrisWu1997 2019/03/22 * GroupNorm: - [x] refactored @zsdonghao 2018/12/05 - [ ] tested - [ ] documentation * InstanceNorm: - [x] refactored @zsdonghao 2018/12/05 - [ ] tested - [ ] documentation * LayerNorm: - [x] refactored @ChrisWu1997 2019/01/23 - [ ] tested - [ ] documentation * LocalResponseNorm: - [x] refactored @zsdonghao 2018/12/05 - [ ] tested - [ ] documentation * SwitchNorm: - [x] refactored @zsdonghao 2018/12/05 - [ ] tested - [ ] documentation - **padding.py** * PadLayer: - [x] refactored @zsdonghao 2018/12/04 - [x] tested @warshallrho 2019/03/21 - [x] documentation @warshallrho 2019/03/21 * ZeroPad1d: - [x] refactored @zsdonghao 2018/12/04 - [x] tested @warshallrho 2019/03/21 - [x] documentation @warshallrho 2019/03/21 * ZeroPad2d: - [x] refactored @zsdonghao 2018/12/04 - [x] tested @warshallrho 2019/03/21 - [x] documentation @warshallrho 2019/03/21 * ZeroPad3d: - [x] refactored @zsdonghao 2018/12/04 - [x] tested @warshallrho 2019/03/21 - [x] documentation @warshallrho 2019/03/21 - **pooling/** * MaxPool1d: - [x] refactored @zsdonghao 2019/01/08 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/19 * MaxPool2d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/19 * MaxPool3d: - [x] refactored @zsdonghao 2019/01/08 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/19 * MeanPool1d: - [x] refactored @zsdonghao 2019/01/08 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/19 * MeanPool2d: - [x] refactored @zsdonghao 2019/01/08 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/19 * MeanPool3d: - [x] refactored @zsdonghao 2019/01/08 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/19 * GlobalMaxPool1d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/15 * GlobalMaxPool2d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/15 * GlobalMaxPool3d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/15 * GlobalMeanPool1d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/15 * GlobalMeanPool2d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/15 * GlobalMeanPool3d: - [x] refactored @zsdonghao 2018/12/06 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/15 * PoolLayer: - [x] refactored @zsdonghao 2018/12/04 - [x] tested @warshallrho 2019/03/15 - [x] documentation @warshallrho 2019/03/18 - **quantize_layers.py** * Sign: - [x] refactored - [ ] tested - [ ] documentation - **recurrent/** * BiRNN: - [x] refactored @JingqingZ 2019/04/08 - [x] tested @JingqingZ 2019/04/08 - [x] documentation @JingqingZ 2019/04/08 * ConvLSTM: - [ ] refactored - [ ] tested - [ ] documentation * RNN: - [x] refactored @JingqingZ 
2019/03/31 - [x] tested @JingqingZ 2019/03/31 - [x] documentation @JingqingZ 2019/03/31 * Seq2Seq: - [ ] refactored - [ ] tested - [ ] documentation - **shape.py** * Flatten: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22 - [x] tested @JingqingZ 2019/03/22 - [x] documentation @JingqingZ 2019/03/22 * Reshape: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22 - [x] tested @JingqingZ 2019/03/22 - [x] documentation @JingqingZ 2019/03/22 * Transpose: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22 - [x] tested @JingqingZ 2019/03/22 - [x] documentation @JingqingZ 2019/03/22 - **scale.py** * Scale: - [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22 - [x] tested @JingqingZ 2019/03/22 - [x] documentation @JingqingZ 2019/03/22 - **contrib** * ROIPooling: - [ ] refactored - [ ] tested - [ ] documentation - **spatial_transformer.py** * SpatialTransformer2dAffine: see **test_layers_spatial_transformer.py** - [ ] refactored - [ ] tested - [ ] documentation - **stack.py** [WIP] @ChrisWu1997 * Stack: - [x] refactored @zsdonghao 2018/12/04 - [x] tested @ChrisWu1997 2019/04/23 - [x] documentation @ChrisWu1997 2019/04/23 * UnStack: - [x] refactored @zsdonghao 2018/12/04 - [x] tested @ChrisWu1997 2019/04/23 - [x] documentation @ChrisWu1997 2019/04/23 - **time_distribution.py** **Remove, as eager mode support this feature** (🀄️remember to change CN docs) * TimeDistributed: ## tl.models - **core.py** * Model: - [x] refactored @JingqingZ 2019/01/28 @ChrisWu1997 2019/02/16 2019/02/22 - [x] tested @ChrisWu1997 2019/03/21 - [x] documentation @ChrisWu1997 2019/03/21 - **vgg.py** * vgg: - [x] refactored @warshallrho 2019/02/19 - [ ] tested - [x] documentation @warshallrho 2019/03/21 @ChrisWu1997 2019/03/21 * vgg16: - [x] refactored @warshallrho 2019/02/19 - [ ] tested - [x] documentation @warshallrho 2019/03/21 @ChrisWu1997 2019/03/21 * vgg19: - [x] refactored @warshallrho 2019/03/09 - [ ] tested - [x] documentation @warshallrho 2019/03/21 @ChrisWu1997 2019/03/21 - **mobilenetv1.py** * MobileNet: - [x] refactored @ChrisWu1997 2019/04/23 - [x] tested @ChrisWu1997 2019/04/23 - [x] documentation @ChrisWu1997 2019/04/23 * SqueezeNet: - [x] refactored @ChrisWu1997 2019/04/23 - [x] tested @ChrisWu1997 2019/04/23 - [x] documentation @ChrisWu1997 2019/04/23 ## Examples - basic_tutorials Too many basic tutorials, some codes can be removed. 
- [x] Static model example MNIST @JingqingZ 2019/01/28 2019/03/24 - [x] Dynamic model example MNIST @JingqingZ 2019/01/28 2019/03/24 - [x] Static model example CIFAR10 (with dataset API) @ChrisWu1997 2019/03/24 - [x] Siamese example MNIST @ChrisWu1997 2019/03/26 - tutorial_mnist_float16.py removed by @ChrisWu1997 - tutorial_mnist_simple.py removed by @ChrisWu1997 - data_process - tutorial_fast_affine_transform.py - [x] refactored @ChrisWu1997 2019/04/11 - [x] tested @ChrisWu1997 2019/04/11 - tutorial_image_preprocess.py removed by @zsdonghao - tutorial_tf_dataset_voc.py - [x] refactored @ChrisWu1997 2019/04/11 - [x] tested @ChrisWu1997 2019/04/11 - tutorial_tfrecord.py - [x] refactored @ChrisWu1997 2019/04/11 - [x] tested @ChrisWu1997 2019/04/11 - tutorial_tfrecord2.py - [x] refactored @ChrisWu1997 2019/04/11 - [x] tested @ChrisWu1997 2019/04/11 - tutorial_tfrecord3.py - [ ] refactored - [ ] tested - database - [ ] refactored - [ ] tested - distributed_training - tutorial_cifar10_distributed_trainer.py - [ ] refactored - [ ] tested - tutorial_mnist_distributed_trainer.py - [ ] refactored - [ ] tested - keras_tfslim - tutorial_keras.py - [x] refactored @ChrisWu1997 2019/04/11 - [x] tested @ChrisWu1997 2019/04/11 - tutorial_tfslim.py removed by @ChrisWu1997 - pretrained_cnn - tutorial_inceptionV3_tfslim.py - tutorial_mobilenet.py removed by @ChrisWu1997 2019/04/23 - tutorial_models_mobilenetv1.py - [x] refactored @ChrisWu1997 2019/04/23 - [x] tested @ChrisWu1997 2019/04/23 - tutorial_models_squeezenetv1.py - [x] refactored @ChrisWu1997 2019/04/23 - [x] tested @ChrisWu1997 2019/04/23 - tutorial_models_vgg.py - [x] refactored @warshallrho 2019/04/30 - [ ] tested - tutorial_models_vgg_static.py - [x] refactored @warshallrho 2019/04/30 - [ ] tested - tutorial_models_vgg16.py - [x] refactored @warshallrho 2019/02/19 - [ ] tested - tutorial_models_vgg19.py - [x] refactored @warshallrho 2019/03/09 - [ ] tested - tutorial_squeezenet.py removed by @ChrisWu1997 2019/04/23 - tutorial_vgg16.py removed by @warshallrho 2019/04/30 - tutorial_vgg19.py removed by @warshallrho 2019/04/30 - quantized_net - tutorial_binarynet_cifar10_tfrecord.py - [x] refactored - [x] tested - tutorial_binarynet_mnist_cnn.py - [x] refactored - [x] tested - tutorial_dorefanet_cifar10_tfrecord.py - [x] refactored - [x] tested - tutorial_dorefanet_mnist_cnn.py - [x] refactored - [x] tested - tutorial_quanconv_cifar10.py - [x] refactored - [x] tested - tutorial_quanconv_mnist.py - [x] refactored - [x] tested - tutorial_ternaryweight_cifar10_tfrecord.py - [x] refactored - [x] tested - tutorial_ternaryweight_mnist_cnn.py - [x] refactored - [x] tested - reinforcement_learning - tutorial_atari_pong.py @zsdonghao 2019/01/21 - [x] refactored - [x] tested - tutorial_bipedalwalker_a3c_continuous_action.py - [ ] refactored - [ ] tested - tutorial_cartpole_ac.py @zsdonghao 2019/02/17 - [x] refactored - [x] tested - tutorial_frozenlake_dqn.py @zsdonghao 2019/02/16 - [x] refactored - [x] tested - tutorial_frozenlake_q_table.py @zsdonghao 2019/02/16 - [x] refactored - [x] tested - text_classification - tutorial_imdb_fasttext.py @JingqingZ 2019/03/14 - [x] refactored - [x] tested - text_generation - tutorial_generate_text.py - [ ] refactored - [ ] tested - text_ptb Are they duplicated? 
- tutorial_ptb_lstm_state_is_tuple.py - [ ] refactored - [ ] tested - tutorial_ptb_lstm.py - [ ] refactored - [ ] tested - text_word_embedding - tutorial_word2vec_basic.py @JingqingZ 2019/02/21 2019/03/19 - [x] refactored - [x] tested ## Others - tl.activation.py - [x] refactored @JingqingZ 2019/03/06 - [x] tested @JingqingZ 2019/03/06 - [x] documentation @JingqingZ 2019/03/06 - tl.cli - [x] refactored _no update needed_ @ChrisWu1997 2019/04/12 - tl.decorators - [x] refactored _no update needed_ @ChrisWu1997 2019/04/12 - tl.logging - [x] refactored _no update needed_ @ChrisWu1997 2019/04/12 - tl.optimizers - [ ] refactored - tl.third_party - [ ] refactored - tl.array_ops - [x] refactored _no update needed_ @ChrisWu1997 2019/04/12 - tl.cost - [x] refactored @ChrisWu1997 2019/04/12 - [x] documentation @ChrisWu1997 2019/04/12 - tl.db [WIP] @ChrisWu1997 - [ ] refactored - tl.distributed - [ ] refactored - tl.initializers - [x] refactored @ChrisWu1997 2019/04/12 - [x] tested @ChrisWu1997 2019/04/12 - [x] documentation @ChrisWu1997 2019/04/12 - tl.iterate - [x] refactored _no update needed_ @ChrisWu1997 2019/04/12 - tl.lazy_imports - [x] refactored _no update needed_ @ChrisWu1997 2019/04/12 - tl.nlp @OliverZijia @JingqingZ - [x] refactored - tl.package_info - [ ] refactored - tl.prepro - [x] refactored @ChrisWu1997 2019/04/11 - tl.rein - [ ] refactored - tl.utils - [x] refactored @ChrisWu1997 2019/04/17 - [x] tested _by `tutorial_mnist_simple.py`_ @ChrisWu1997 2019/04/17 - [x] documentation @ChrisWu1997 2019/04/17 - tl.visualize - [x] refactored _no update needed_ @ChrisWu1997 2019/04/12 ## Unittests Status: - performance_test - VGG @JingqingZ @ChrisWu1997 @warshallrho 2019/03/20 - layers - test_layernode.py @ChrisWu1997 2019/03/22 - test_layers_activation.py @JingqingZ 2019/03/20 - test_layers_convolution.py (1d, 2d, 3d) @warshallrho 2019/03/20 - test_layers_core_basedense_dropout.py @JingqingZ 2019/03/06 - test_layers_convolution_deformable.py @warshallrho 2019/03/18 - test_layers_embedding.py @JingqingZ 2019/03/19 - test_layers_extend.py @JingqingZ 2019/03/22 - test_layers_lambda.py @JingqingZ 2019/03/24 - test_layers_merge.py @JingqingZ 2019/03/15 - test_layers_noise.py @warshallrho 2019/03/21 - test_layers_padding.py @warshallrho 2019/03/21 - test_layers_pooling.py @warshallrho 2019/03/18 - test_layers_recurrent.py @JingqingZ 2019/03/06 - test_layers_scale.py @JingqingZ 2019/03/22 - test_layers_shape.py @JingqingZ 2019/03/22 - test_activations.py @JingqingZ 2019/03/06 - models - test_model_save_graph.py @warshallrho 2019/04/30 ## Unittests Status (Pending): Some testing codes can be removed. 
- test_array_ops.py - test_decorators.py - test_documentation.py - test_layers_basic.py - test_layers_flow_control.py **removed** in favour of eager mode @zsdonghao 2018/12/04 (🀄️remember to change CN docs) - test_layers_importer.py - test_layers_normalization.py - test_layers_padding.py - test_layers_spatial_transformer.py - test_layers_stack.py - test_layers_super_resolution.py - test_layers_time_distributed.py - test_logging.py - test_logging_hyperdash.py - test_mnist_simple.py - test_model_compilednetwork.py - test_models.py - test_network_custom_2d.py - test_network_custom_input_layers.py - test_network_custom_multiple_inputs.py - test_network_custom_multiple_outputs.py - test_network_sequential_1d.py - test_network_sequential_2d.py - test_network_sequential_3d.py - test_network_sequential_rnn.py - test_optimizer_amsgrad.py - test_pydocstyle.py - test_reuse_mlp.py - test_tf_layers.py - test_timeout.py - test_utils_predict.py - test_yapf_format.py ## tl.files All save/load methods are also wrapped as class method in model core. - save_hdf5_graph - [x] created @warshallrho 2019/04/27 - [x] tested @warshallrho 2019/04/27 - [x] documentation @warshallrho 2019/04/27 - load_hdf5_graph - [x] created @warshallrho 2019/04/27 - [x] tested @warshallrho 2019/04/27 - [x] documentation @warshallrho 2019/04/27 - save_weights_to_hdf5 - [x] created - [x] tested @ChrisWu1997 2019/03/26 - [x] documentation @ChrisWu1997 2019/03/26 - load_hdf5_to_weights_in_order - [x] created - [x] tested @ChrisWu1997 2019/03/26 - [x] documentation @ChrisWu1997 2019/03/26 - load_hdf5_to_weights - [x] created - [x] tested @ChrisWu1997 2019/03/26 - [x] documentation @ChrisWu1997 2019/03/26 - save_npz([save_list, name, sess]) @ChrisWu1997 2019/02/21 --> save_npz([save_list, name]) @ChrisWu1997 2019/03/21 - [x] refactored - [x] tested @ChrisWu1997 2019/03/26 - [x] documentation @ChrisWu1997 2019/03/26 - load_npz([path, name]) @ChrisWu1997 2019/02/21 - [x] refactored - [x] tested @ChrisWu1997 2019/03/26 - [x] documentation @ChrisWu1997 2019/03/26 - assign_params(sess, params, network) --> assign_weights (🀄️remember to change CN docs) @ChrisWu1997 2019/02/22 - [x] refactored - [ ] tested - load_and_assign_npz([sess, name, network]) @ChrisWu1997 2019/02/21 --> load_and_assign_npz([name, network]) @ChrisWu1997 2019/03/21 - [x] refactored - [x] tested @ChrisWu1997 2019/03/26 - [x] documentation @ChrisWu1997 2019/03/26 - save_npz_dict([save_list, name, sess]) @ChrisWu1997 2019/02/22 --> save_npz_dict([save_list, name]) @ChrisWu1997 2019/03/21 - [x] refactored - [x] tested @ChrisWu1997 2019/03/26 - [x] documentation @ChrisWu1997 2019/03/26 - load_and_assign_npz_dict([name, sess]) --> ([name, network]) @ChrisWu1997 2019/03/21 - [x] refactored - [x] tested @ChrisWu1997 2019/03/26 - [x] documentation @ChrisWu1997 2019/03/26 - save_ckpt([sess, mode_name, save_dir, …]) @ChrisWu1997 2019/02/22 - [x] refactored - [ ] tested - load_ckpt([sess, mode_name, save_dir, …]) @ChrisWu1997 2019/02/22 - [x] refactored - [ ] tested
2hard
Title: [Migrated] Zappa Deploy FileExistsError Body: Originally from: https://github.com/Miserlou/Zappa/issues/1839 by [enotuniq](https://github.com/enotuniq) I am very new to Zappa and AWS. I successfully installed zappa and managed to go through zappa init. However, when I try to deploy it with zappa deploy, I keep getting this error below. I cleared the temp directory and tried again and again but nothing changed. **Error** ``` Traceback (most recent call last): File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 70, in mkpath os.mkdir(head, mode) FileExistsError: [WinError 183] File exists 'C:\\Users\\xx\\AppData\\Local\\Temp\\zappa-project_jcpoxaq\\hjson' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 2779, in handle sys.exit(cli.handle()) File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 509, in handle self.dispatch_command(self.command, stage) File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 546, in dispatch_command self.deploy(self.vargs['zip']) File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 718, in deploy self.create_package() File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 2267, in create_package disable_progress=self.disable_progress File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\core.py", line 629, in create_lambda_zip copy_tree(temp_package_path, temp_project_path, update=True) File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 159, in copy_tree verbose=verbose, dry_run=dry_run)) File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 135, in copy_tree mkpath(dst, verbose=verbose) File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 74, in mkpath "could not create '%s': %s" % (head, exc.args[-1])) distutils.errors.DistutilsFileError: could not create 'C:\Users\xx\AppData\Local\Temp\zappa-project_jcpoxaq\hjson' File exists ```
1medium
Title: Custom batch selection for logging Body: ### Description & Motivation Need to be able to select the same batch in every logging cycle. For generation pipelines similar to stable diffusion, it is very hard to gauge performance over training if we keep choosing random batches. ### Pitch The user should be able to choose the batch to log, and that batch should stay constant across all logging cycles, as sketched below. ### Alternatives It's possible to load the data again in train_batch_end() or validation_batch_end() and call logging. ### Additional context _No response_ cc @borda
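A hedged sketch of that alternative as a callback: one batch is cached the first time and reused at every logging cycle. `log_images` is a hypothetical user hook, and the `val_dataloaders` attribute varies slightly across Lightning versions:

```python
import pytorch_lightning as pl


class FixedBatchLogger(pl.Callback):
    """Always log with the same cached batch, so runs are comparable."""

    def __init__(self):
        self._batch = None

    def on_validation_epoch_end(self, trainer, pl_module):
        if self._batch is None:
            # cache the first validation batch once; reuse it every epoch
            # (with multiple val dataloaders this would need indexing)
            self._batch = next(iter(trainer.val_dataloaders))
        pl_module.log_images(self._batch)  # hypothetical user-defined hook
```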
1medium
Title: VWAP indicator is calculating wrong vwap values for past days in dataframe Body: @twopirllc Thanks for creating this wonderful python module. I'm extensively using this module for my algos. I found an issue with the VWAP indicator when I ran it with my backtesting data. By definition, VWAP should be calculated on daily (intraday) data. Since we pass series data that contains past dates as well, the cumulative sum is calculated incorrectly in that case. Each day's opening volume and hlc price will differ, so the VWAP calculation should start fresh with each day's data, e.g. the cumsum(). Note: the calculation is absolutely correct when the series contains only one day of data. Maybe we could try grouping the series data by date and performing the calculation per group, as in the sketch below. It's just my thought, but I would be happy to hear from you, as it's important for cross-checking strategies with backtesting data.
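A sketch of the grouped calculation, assuming intraday OHLCV data on a pandas `DatetimeIndex` (the column names are assumptions):

```python
import pandas as pd


def daily_vwap(df: pd.DataFrame) -> pd.Series:
    """VWAP whose cumulative sums reset at each new calendar day."""
    typical_price = (df["high"] + df["low"] + df["close"]) / 3
    pv = typical_price * df["volume"]
    day = df.index.normalize()  # strips the time component from each bar
    return pv.groupby(day).cumsum() / df["volume"].groupby(day).cumsum()
```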
1medium
Title: Reversing tiktok API. Body: I have a question. Which endpoint must I reverse to find the endpoint used in your code? That endpoint is not working now, and you are not responding to issues; if I knew how to find a new endpoint, I would make a pull request. How can I find the same endpoint, but one that works?
2hard
Title: Why does gevent affect the asyncio usage of child thread? Body: * gevent version: 20.10.2 * Python version: cPython 3.9 * Operating System: macOS 14.3.1(M3) ### Description: I use the gevent monkey patch in my program, but I need to execute asyncio-related code in child threads. When multiple child threads execute the same coroutine, it triggers "RuntimeError: This event loop is already running". They are different threads and I created a separate event loop for each one, so I cannot understand this issue. When I comment out the monkey patch, the program executes as I expect.

```python-traceback
ERROR:root:This event loop is already running
Traceback (most recent call last):
  File "/Users/computer1/pytest/test.py", line 30, in func1
    loop.run_until_complete(asyncf1())
  File "/opt/homebrew/Cellar/python@3.9/3.9.18_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 623, in run_until_complete
    self._check_running()
  File "/opt/homebrew/Cellar/python@3.9/3.9.18_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 583, in _check_running
    raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
/Users/computer1/pytest/test.py:32: RuntimeWarning: coroutine 'asyncf1' was never awaited
  logging.error(e, exc_info=True)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
ERROR:root:This event loop is already running
Traceback (most recent call last):
  File "/Users/computer1/pytest/test.py", line 30, in func1
    loop.run_until_complete(asyncf1())
  File "/opt/homebrew/Cellar/python@3.9/3.9.18_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 623, in run_until_complete
    self._check_running()
  File "/opt/homebrew/Cellar/python@3.9/3.9.18_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 583, in _check_running
    raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
```

### What I've run:

```python
import logging

import gevent.monkey
gevent.monkey.patch_all()

import concurrent.futures
import threading
import time
import asyncio

pool = concurrent.futures.ThreadPoolExecutor()

async def asyncf1():
    print("aa")

def func1():
    # print(f"thread:{threading.get_ident()},gevent:{id(gevent.getcurrent())}")
    try:
        try:
            loop = asyncio.get_event_loop()
        except RuntimeError:
            loop = asyncio.new_event_loop()
            # asyncio.set_event_loop(loop)
        loop.run_until_complete(asyncf1())
    except Exception as e:
        logging.error(e, exc_info=True)
    print(threading.current_thread())
    time.sleep(3)

for i in range(3):
    pool.submit(func1)
time.sleep(10)
```
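For what it's worth, one workaround sketch: gevent's `patch_all` accepts per-module flags, so leaving real OS threads unpatched may let each worker keep a genuine event loop (behavior should be verified; unpatched threads lose gevent's cooperative scheduling):

```python
import gevent.monkey

# leave `threading` unpatched so ThreadPoolExecutor workers are real OS
# threads, each with its own asyncio event loop, instead of greenlets
# multiplexed onto one thread (and therefore onto one running loop)
gevent.monkey.patch_all(thread=False)
```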
2hard
Title: Edges of plot disappear after first loop Body: I am using celluloid to plot a function over 17 years and I love it so far, it works great! I have one small problem though: the edges of my plot disappear after the first loop. I have attached images of how this looks. First loop: ![wborder](https://user-images.githubusercontent.com/15639804/104251131-54752b00-546f-11eb-8a6d-901793b48d67.png) Second loop: ![woborder](https://user-images.githubusercontent.com/61883982/76109357-937b5a00-5fdc-11ea-8b1e-7716d594de59.JPG) I am using cartopy and matplotlib in a Jupyter notebook, and this is my code for the animation:

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt  # import added for completeness
import cartopy.crs as ccrs       # import added for completeness
from IPython.display import HTML
from celluloid import Camera

fig = plt.figure(figsize=(9, 5))
cmap = matplotlib.cm.RdBu_r
norm = matplotlib.colors.Normalize(vmin=0, vmax=50)
ax = plt.axes(projection=ccrs.PlateCarree(), extent=[-180, 180, -90, 90])
ax.set_xticks([-180, -120, -60, 0, 60, 120, 180], crs=ccrs.PlateCarree())
ax.set_yticks([-90, -60, -30, 0, 30, 60, 90], crs=ccrs.PlateCarree())
plt.xlabel('Longitude [deg]')
plt.ylabel('Latitude [deg]')

camera = Camera(fig)
for i in range(0, (stop - start) + 1):
    ax.coastlines()
    plt.scatter(nphi[i], nthe[i], c=mag[i], s=40, norm=norm, cmap=cmap, edgecolor="k")
    ax.text(0, 1.05, 'Global Observatory Plot of SV magnitude from target year '
            + str(start + i) + ' in the dB_' + out + '-direction',
            fontsize=9, transform=ax.transAxes)
    camera.snap()

cbar = plt.colorbar()
cbar.set_label('Magnitude of SV [nT/yr$^2$]')
animation = camera.animate(interval=800)
animation.save('Figures/GlobalSVMag.mp4')
HTML(animation.to_html5_video())
```

Is there a way to make the edge appear all the way through the animation?
1medium
Title: PuDB does not update for terminal size changes [Urwid issue] Body: When I resize the terminal (gnome-terminal) the view stays the same size although the window becomes bigger. After I move the cursor it becomes fullsize. It is weird as this only happens with pudb, not with other terminal programs ( I use ubuntu and i3 wm). ![pudb](https://user-images.githubusercontent.com/15639804/104251131-54752b00-546f-11eb-8a6d-901793b48d67.png) _Originally posted by @makrobios in https://github.com/inducer/pudb/issues/410#issuecomment-758295335_
2hard
Title: Add arrow when mouse enters app card Body:
0easy
Title: TabularPredictor. Shuffle=False?? Body: Hi everyone, First of all, thank you for the well-documented library. I have a question regarding the use of TabularPredictor for creating a stacking ensemble model. I’m unsure how AutoGluon handles hyperparameter tuning when both tuning_data and train_data are provided. Specifically, does AutoGluon perform hyperparameter tuning using K-Fold cross-validation? If so, is there a way to configure it to set shuffle=False? I’d appreciate any clarification on this point.
1medium
Title: Results differ when using cv2 vs pillow Body: ### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report. ### Bug From [this comment](https://github.com/roboflow/supervision/issues/1038#issuecomment-2018147877), I understand supervision doesn't change channel order, and the issue I highlight here is likely addressed by documentation. I observe that if I open an image with cv2 or with pillow, the predictions are different. The model was trained using ultralytics, which I believe also uses cv2, so when I use pillow the channel order is changed. I suggest adding a note to the docs to check which library was used in training, then use that with supervision. Comparisons below: cv2: ![image](https://github.com/user-attachments/assets/fd2692aa-d869-4a9a-abfa-d304c342f38a) pillow: ![Pasted Graphic 5](https://github.com/user-attachments/assets/73a0e0e7-096f-46e3-805f-15798d7c7bd9) ### Environment _No response_ ### Minimal Reproducible Example

```python
import cv2
import numpy as np
import supervision as sv
from PIL import Image  # imports added for completeness

image_path = "my.png"  # change
image = cv2.imread(image_path)
# image = np.array(Image.open(image_path))

def callback(image_slice: np.ndarray) -> sv.Detections:
    result = model(image_slice)[0]
    return sv.Detections.from_ultralytics(result)

slicer = sv.InferenceSlicer(callback=callback)
detections = slicer(image)
detections = detections[detections.class_id == 1]

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()
annotated_image = box_annotator.annotate(scene=image, detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)
```

### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
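A small sketch of the conversion that removes the discrepancy when loading with Pillow (standard OpenCV constant):

```python
import cv2
import numpy as np
from PIL import Image

rgb = np.array(Image.open(image_path))      # Pillow loads RGB
bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)  # match the cv2/training channel order
```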
0easy
Title: VGG model structure Body: Hi, I am using the VGG model. I see that the dense layers have 4096 units in the reference architecture online, but your code uses 2048 units.
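For comparison, a sketch of the reference classifier head from the original VGG-16 (PyTorch-style, purely for illustration):

```python
import torch.nn as nn

# reference VGG-16 head: two 4096-unit FC layers, then the 1000-way output
reference_classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(),
    nn.Linear(4096, 1000),
)
```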
1medium
Title: Bearer Token Auth type support Body: I want to propose a Bearer Token auth type for the `Auth` menu, like Postman does: <img width="1082" alt="Screenshot 2567-11-13 at 10 54 05" src="https://github.com/user-attachments/assets/3bf21341-8420-4c2c-9a27-1c7059484c54"> Currently, I work around this by manually setting the `Authorization` header in the 'Headers' menu. Having this support would shorten the typing flow a little.
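For context, the requested auth type is just sugar for this header; a sketch using `requests` for illustration (the URL and token are placeholders):

```python
import requests

token = "YOUR_TOKEN"  # placeholder
response = requests.get(
    "https://api.example.com/me",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {token}"},
)
```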
1medium
Title: Table row sep argument not included in tikzpicture Body: The `row sep` is not included in the tikz picture. How I save my figure:

```python
matplotlib2tikz.save(out, figure=fig, textsize=8,
                     extra_axis_parameters=extra_axis_param,
                     float_format="{:.5f}", table_row_sep=r"\\")
```

Results in:

```latex
\addplot [semithick, color0]
table{%
4.00000 0.00000\\5.00000 0.00000\\6.00000 0.00000\\7.00000 0.00000\\8.00000 0.00000\\9.00000 0.00000\\10.00000 0.00000\\11.00000 0.00000\\12.00000 0.00000\\13.00000 0.00000\\14.00000 0.00000\\15.00000 0.00000\\16.00000 0.00000\\17.00000 0.00000\\18.00000 0.00000\\19.00000 0.00000\\20.00000 0.00000\\21.00000 0.00000\\22.00000 0
```

But it should be rendered/output as:

```latex
\addplot [semithick, color0]
table[row sep=\\] {%
4.00000 0.00000\\5.00000 0.00000\\6.00000 0.00000\\7.00000 0.00000\\8.00000 0.00000\\9.00000 0.00000\\10.00000 0.00000\\11.00000 0.00000\\12.00000 0.00000\\13.00000 0.00000\\14.00000 0.00000\\15.00000 0.00000\\16.00000 0.00000\\17.00000 0.00000\\18.00000 0.00000\\19.00000 0.00000\\20.00000 0.00000\\21.00000 0.00000\\22.00000 0
```
1medium
Title: Does the API service still need the upload endpoint? Body: {StatusCode: 404, ReasonPhrase: 'Not Found', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers: { Date: Tue, 17 Sep 2024 07:20:55 GMT Server: uvicorn Content-Length: 22 Content-Type: application/json }} Calling /idphoto directly returns a 404 error.
3misc
Title: imap_migration generates traceback Body: # Impacted versions * OS Type: Ubuntu * OS Version: 22.04 LTS * Database Type: MySQL * Database version: 8 * Modoboa: 2.1.2 * installer used: Yes * Webserver: Nginx # Steps to reproduce - have offlineimap installed and configured for a domain - I double checked: all migrations are applied successfully - go to the shell and run the following command: - `python manage.py generate_offlineimap_config` # Current behavior I get the following error message:

```
Traceback (most recent call last):
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 829, in _resolve_lookup
    current = current[bit]
TypeError: 'Migration' object is not subscriptable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/fernet.py", line 133, in _verify_signature
    h.verify(data[-32:])
  File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/hazmat/primitives/hmac.py", line 72, in verify
    ctx.verify(signature)
  File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/hazmat/backends/openssl/hmac.py", line 85, in verify
    raise InvalidSignature("Signature did not match digest.")
cryptography.exceptions.InvalidSignature: Signature did not match digest.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/srv/modoboa/instance/manage.py", line 22, in <module>
    main()
  File "/srv/modoboa/instance/manage.py", line 18, in main
    execute_from_command_line(sys.argv)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
    utility.execute()
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/core/management/__init__.py", line 413, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/core/management/base.py", line 354, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/core/management/base.py", line 398, in execute
    output = self.handle(*args, **options)
  File "/srv/modoboa/env/lib/python3.10/site-packages/modoboa/imap_migration/management/commands/generate_offlineimap_config.py", line 41, in handle
    content = render_to_string(
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/loader.py", line 62, in render_to_string
    return template.render(context, request)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/backends/django.py", line 61, in render
    return self.template.render(context)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 170, in render
    return self._render(context)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 162, in _render
    return self.nodelist.render(context)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 938, in render
    bit = node.render_annotated(context)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated
    return self.render(context)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/defaulttags.py", line 214, in render
    nodelist.append(node.render_annotated(context))
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated
    return self.render(context)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 988, in render
    output = self.filter_expression.resolve(context)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 671, in resolve
    obj = self.var.resolve(context)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 796, in resolve
    value = self._resolve_lookup(context)
  File "/srv/modoboa/env/lib/python3.10/site-packages/django/template/base.py", line 837, in _resolve_lookup
    current = getattr(current, bit)
  File "/srv/modoboa/env/lib/python3.10/site-packages/modoboa/imap_migration/models.py", line 50, in password
    return decrypt(self._password)
  File "/srv/modoboa/env/lib/python3.10/site-packages/modoboa/lib/cryptutils.py", line 42, in decrypt
    return smart_text(_get_fernet().decrypt(smart_bytes(encrypted_value)))
  File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/fernet.py", line 90, in decrypt
    return self._decrypt_data(data, timestamp, time_info)
  File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/fernet.py", line 151, in _decrypt_data
    self._verify_signature(data)
  File "/srv/modoboa/env/lib/python3.10/site-packages/cryptography/fernet.py", line 135, in _verify_signature
    raise InvalidToken
cryptography.fernet.InvalidToken
```

# Expected behavior Getting the input files for offlineimap.
2hard
Title: allure and pytest.mark markers should not be mixed up Body: #### I'm submitting a ... - [x] bug report - [ ] feature request - [ ] support request => Please do not submit support request here, see note at the top of this template. #### What is the current behavior? Labels appear in the report that should not be there. #### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

```python
@allure.link("https://www.baidu.com/", name="baidu")
@allure.issue("https://www.baidu.com/", "BUG")
@allure.testcase("YHZ-123")
@pytest.mark.repeat(1)
@pytest.mark.parametrize("coupons_type", [1, 2])
def xxx():
    pass
```

![image](https://github.com/allure-framework/allure-python/assets/29268369/7266c4b2-f2d7-49c4-b77f-3c29acb7e635) #### What is the expected behavior? Expected result: allure decorators and pytest.mark markers should not be mixed up in the report. #### What is the motivation / use case for changing the behavior? #### Please tell us about your environment: Python = 3.9.0 allure = 2.17.2 allure-pytest = 2.13.2 #### Other information
1medium
Title: TypeError: unsupported operand type(s) for //: 'NoneType' and 'int' Body: # Bug Report ### Describe the bug I am trying to convert Nvidia NeMo's FilterbankFeaturesTA class to ONNX. Here is my code - ``` from nemo.collections.asr.parts.preprocessing.features import ( FilterbankFeatures, FilterbankFeaturesTA, make_seq_mask_like, ) _model = FilterbankFeaturesTA( sample_rate= 16000, # window_size = 0.02, # window_stride = 0.01, n_window_size = None, n_window_stride = None, window = "hann", normalize = "per_feature", n_fft = None, preemph = 0.97, # features = 64, lowfreq = 0, highfreq = None, log = True, log_zero_guard_type = "add", log_zero_guard_value = 2 ** -24, dither = 1e-5, pad_to = 16, frame_splicing = 1, exact_pad = False, pad_value = 0, mag_power = 2.0, rng = None, nb_augmentation_prob = 0.0, nb_max_freq = 4000, # use_torchaudio = False, mel_norm = "slaney", stft_exact_pad = False, stft_conv = False, ) _model.eval() example_input_1 = torch.randn(1, 18432) # Input for x1 example_input_2 = torch.randn(18432) # Input for x2 # _model(example_input_1, example_input_2) example_out = _model.forward(example_input_1, example_input_2,) # example_out onnx_file_path = "preprocessor.onnx" args = (example_input_1, example_input_2) # kwargs = {"seq_len": example_input_2} onnx_model, _ = torch.onnx.dynamo_export( _model, # Model to export *args, # **kwargs, export_options=torch.onnx.ExportOptions( dynamic_shapes=True, ), ) # Save the ONNX model to file onnx_model.save(onnx_file_path) ``` Running this code gives me the following error - ``` { "name": "TypeError", "message": "unsupported operand type(s) for //: 'NoneType' and 'int'", "stack": "--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[66], line 9 1 # trying to export features.py FilterbankFeatures to onnx for web inference 2 # from nemo.collections.asr.parts.preprocessing import FilterbankFeatures 3 from nemo.collections.asr.parts.preprocessing.features import ( 4 FilterbankFeatures, 5 FilterbankFeaturesTA, 6 make_seq_mask_like, 7 ) ----> 9 _model = FilterbankFeaturesTA( 10 sample_rate= 16000, 11 # window_size = 0.02, 12 # window_stride = 0.01, 13 n_window_size = None, 14 n_window_stride = None, 15 window = \"hann\", 16 normalize = \"per_feature\", 17 n_fft = None, 18 preemph = 0.97, 19 # features = 64, 20 lowfreq = 0, 21 highfreq = None, 22 log = True, 23 log_zero_guard_type = \"add\", 24 log_zero_guard_value = 2 ** -24, 25 dither = 1e-5, 26 pad_to = 16, 27 frame_splicing = 1, 28 exact_pad = False, 29 pad_value = 0, 30 mag_power = 2.0, 31 rng = None, 32 nb_augmentation_prob = 0.0, 33 nb_max_freq = 4000, 34 # use_torchaudio = False, 35 mel_norm = \"slaney\", 36 stft_exact_pad = False, 37 stft_conv = False, 38 ) 40 _model.eval() 42 example_input_1 = torch.randn(1, 18432) # Input for x1 File ~/Documents/aakhor/asr/NeMo/nemo/collections/asr/parts/preprocessing/features.py:555, in __init__(self, sample_rate, n_window_size, n_window_stride, normalize, nfilt, n_fft, preemph, lowfreq, highfreq, log, log_zero_guard_type, log_zero_guard_value, dither, window, pad_to, pad_value, mel_norm, use_grads, max_duration, frame_splicing, exact_pad, nb_augmentation_prob, nb_max_freq, mag_power, rng, stft_exact_pad, stft_conv) 553 self.dither = dither 554 self.pad_to = pad_to --> 555 self.pad_value = pad_value 556 self.n_fft = n_fft 557 self._mel_spec_extractor: torchaudio.transforms.MelSpectrogram = torchaudio.transforms.MelSpectrogram( 558 sample_rate=self._sample_rate, 559 
win_length=self.win_length, (...) 568 wkwargs={\"periodic\": False}, 569 ) File ~/miniconda3/envs/nemo/lib/python3.11/site-packages/torchaudio/transforms/_transforms.py:587, in MelSpectrogram.__init__(self, sample_rate, n_fft, win_length, hop_length, f_min, f_max, pad, n_mels, window_fn, power, normalized, wkwargs, center, pad_mode, onesided, norm, mel_scale) 585 self.n_fft = n_fft 586 self.win_length = win_length if win_length is not None else n_fft --> 587 self.hop_length = hop_length if hop_length is not None else self.win_length // 2 588 self.pad = pad 589 self.power = power TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'" } ``` ### System information PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.5.119 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Nvidia driver version: 550.120 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 12 On-line CPU(s) list: 0-11 Vendor ID: GenuineIntel Model name: 12th Gen Intel(R) Core(TM) i5-12400F CPU family: 6 Model: 151 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 1 Stepping: 5 CPU max MHz: 4400.0000 CPU min MHz: 800.0000 BogoMIPS: 4992.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 288 KiB (6 instances) L1i cache: 192 KiB (6 instances) L2 cache: 7.5 MiB (6 instances) L3 cache: 18 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-11 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB 
conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.24.4 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-cusparselt-cu12==0.6.2 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] onnx==1.17.0 [pip3] onnxruntime==1.20.1 [pip3] onnxscript==0.1.0.dev20241218 [pip3] open_clip_torch==2.29.0 [pip3] pytorch-lightning==2.4.0 [pip3] pytorch-triton==3.2.0+git0d4682f0 [pip3] torch==2.5.1 [pip3] torchaudio==2.5.1 [pip3] torchdiffeq==0.2.5 [pip3] torchmetrics==1.6.0 [pip3] torchsde==0.2.6 [pip3] torchvision==0.20.1 [pip3] triton==3.1.0 [conda] numpy 1.24.4 py311h64a7726_0 conda-forge [conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi [conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi [conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi [conda] open-clip-torch 2.29.0 pypi_0 pypi [conda] pytorch-lightning 2.4.0 pypi_0 pypi [conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi [conda] torch 2.5.1 pypi_0 pypi [conda] torchaudio 2.5.1 pypi_0 pypi [conda] torchdiffeq 0.2.5 pypi_0 pypi [conda] torchmetrics 1.6.0 pypi_0 pypi [conda] torchsde 0.2.6 pypi_0 pypi [conda] torchvision 0.20.1 pypi_0 pypi [conda] triton 3.1.0 pypi_0 pypi ### Reproduction instructions 1. Clone the NeMo github repo. 2. Run the code from above. ### Expected behavior The model should export to onnx.
1medium
Title: How to achieve concurrency? Body: I have opened multiple threads, but when requests run concurrently, the result comes back empty.
1medium
Title: [BUG] Basic code from documentation does not work Body: I tried both code snippets described below from the [official docs](https://einops.rocks/api/repeat/) and they do not seem to work ``` # change it to RGB format by repeating in each channel >>> repeat(image, 'h w -> h w c', c=3).shape (30, 40, 3) # repeat image 2 times along height (vertical axis) >>> repeat(image, 'h w -> (repeat h) w', repeat=2).shape (60, 40) ``` I get the following errors ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<__array_function__ internals>", line 198, in repeat TypeError: repeat() got an unexpected keyword argument 'c' ``` and ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<__array_function__ internals>", line 198, in repeat TypeError: repeat() got an unexpected keyword argument 'repeat' ``` Since it is the official documentation code, I thought it would be better to ask here and get a quick answer instead of asking on Stack Overflow (none of the chatbot fixes worked). Below is the version of einops I have: ``` pip show einops Name: einops Version: 0.8.0 Summary: A new flavour of deep learning operations Home-page: Author: Alex Rogozhnikov Author-email: License: MIT Location: /Users/myname/.pyenv/versions/3.8.16/lib/python3.8/site-packages Requires: Required-by: ``` Please let me know how to fix this, I'm in a bit of a hurry.
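For reference, the `<__array_function__ internals>` frames in both tracebacks suggest that `numpy.repeat` is being called rather than `einops.repeat`, for example because of an import that shadows the einops function. A minimal sketch of the intended usage, under that assumption:

```python
import numpy as np
from einops import repeat  # must not be shadowed by e.g. `from numpy import repeat`

image = np.zeros((30, 40))
print(repeat(image, 'h w -> h w c', c=3).shape)              # (30, 40, 3)
print(repeat(image, 'h w -> (repeat h) w', repeat=2).shape)  # (60, 40)
```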
1medium
Title: Error when testing latest ONNX commit on ORT Body: # Ask a Question ### Question <!-- Explain your question here. --> It seems there are updates to `onnx::OpSchema` after 1.17 that cause an ORT build failure. Is this expected? ```c++ ... /onnxruntime/onnxruntime/core/graph/contrib_ops/contrib_defs.cc: In function ‘void onnxruntime::contrib::RegisterContribSchemas()’: /onnxruntime/onnxruntime/core/graph/contrib_ops/contrib_defs.cc:2904:46: error: conversion from ‘onnx::OpSchema’ to non-scalar type ‘onnx::OpSchemaRegistry::OpSchemaRegisterOnce’ requested 2904 | .SetContextDependentFunctionBodyBuilder( ... ``` Btw, here's the [onnx.patch](https://github.com/microsoft/onnxruntime/blob/yifanl/oss/cmake/patches/onnx/onnx.patch) synced to the latest onnx commit, with deps.txt pinned to latest as well. ### Further information - Relevant Area: <!--e.g., model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators, IR, ONNX Hub, data preprocessing, CI pipelines. --> - Is this issue related to a specific model? **Model name**: <!-- *e.g. mnist* --> **Model opset**: <!-- *e.g. 17* --> ### Notes <!-- Any additional information, code snippets. -->
2hard
Title: [Minor issue] [App Icon rendering] Using the macro navbar_block Body: Hello Team, Thank you for this amazing Framework. Just to report a minor issue that can be solved quickly (sorry for not using the usual issue template....) The code inside the macro navbar_block has some missing html attributes compared to the navbar.html file. This is related to the app_icon rendering. **Line 17 : `<img src="{{appbuilder.app_icon}}" height="100%" width="auto">`** from source -> https://github.com/dpgaspar/Flask-AppBuilder/blob/98b1be8b3390cd592dc20f215062e55d27e08eec/flask_appbuilder/templates/appbuilder/navbar.html and **Line 93 : `<img src="{{appbuilder.app_icon}}" >`** from source -> https://github.com/dpgaspar/Flask-AppBuilder/blob/1e900bba85452de6d988f7da191f9a26fec62226/flask_appbuilder/templates/appbuilder/baselib.html As a result, the app_icon is rendered in different ways depending on whether we use the macro or extend directly from baselayout.html. Thank you all again for your great work.
0easy
Title: Why is tflearn.data_utils.shuffle() not used in all CIFAR-10 Examples? Body: In the **covnet_cifar10.py** and **network_in_network.py** examples the CIFAR-10 data is shuffled after its loaded using the `tflearn.data_utils.shuffle()` function: `from tflearn.datasets import cifar10` `(X, Y), (X_test, Y_test) = cifar10.load_data()` `X, Y = shuffle(X, Y)` However, in the **residual_network_cifar10.py** and **resnext_cifar10.py** examples this step is not taken after the data is loaded. Is there a reason why this shuffle step is not included in these examples? Is it just that the data is not required to be shuffled for these models to work? Or, is the shuffling of the data taking place during the `.fit()` training where the shuffle parameter is set to true `shuffle=True`?
3misc
Title: [ENH]: Set list in marker parameter when plotting using matplotlib.pyplot.plot Body: ### Problem Hi all, It would be great to have the option to set the marker as a list, so that each point gets a different value. That way instead of having to do this: ``` import matplotlib.pyplot as plt x = [0, 1, 2, 3, 4] y = [20, 13, 25, 36, 74] markers = ['o', 's', '^', 'D', 'x'] for i in range(len(x)): plt.plot(x[i], y[i], marker=markers[i]) plt.show() ``` We could directly do: ``` import matplotlib.pyplot as plt x = [0, 1, 2, 3, 4] y = [20, 13, 25, 36, 74] markers = ['o', 's', '^', 'D', 'x'] plt.plot(x, y, marker=markers) plt.show() ``` Which results in: `ValueError: Unrecognized marker style ['o', 's', '^', 'D', 'x']`. The reason I need this is that I could then store the plot result in a variable as one Line2D and use it easily elsewhere, for example to do hover annotations. The same should work for other parameters (color, linewidth, etc.) in principle. Thanks, Alba ### Proposed solution _No response_
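In the meantime, one workaround sketch (not an existing `plot` feature): draw the connecting line once, so a single `Line2D` handle exists for hover logic, then overlay purely cosmetic per-point markers. The styling choices below are illustrative.

```python
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4]
y = [20, 13, 25, 36, 74]
markers = ['o', 's', '^', 'D', 'x']

(line,) = plt.plot(x, y, color='C0')        # single Line2D for hover/picking
for xi, yi, m in zip(x, y, markers):
    plt.plot(xi, yi, marker=m, color='C0')  # cosmetic per-point markers on top
plt.show()
```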
2hard
Title: file_dropper extension not loaded, but no file_dropper extension after adding Body: ```python 2024-08-15 15:43:37,814 pn.extension was initialized but 'file_dropper' extension was not loaded. In order for the required resources to be initialized ensure the extension is loaded with the following argument(s): pn.extension('file_dropper') ``` The correct name is `filedropper`, but I think this message is being auto-formatted somewhere
0easy
Title: help with this issue Body: `ImportError: cannot import name 'Chrome' from partially initialized module 'seleniumwire.undetected_chromedriver.webdriver'` `from .webdriver import Chrome, ChromeOptions # noqa: F401`
1medium
Title: Augment image as if somebody took a photo of the same image Body: We have a situation where we want to distinguish "real" photos from photos taken of other photos. I wonder if there's a way to simulate taking a photo of a photo. Perhaps even taking photos of monitor screens. Screen glare/spectral effects/monitor pixel effects/matte effect... etc. All of these could be useful as image augmentations. Any ideas? Has anybody worked on anything like this? Perhaps as an academic paper on this topic.
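For what it's worth, here is a rough sketch of the general direction with plain OpenCV: a small random perspective warp plus one blurred glare blob. Every parameter value is an illustrative guess, the function assumes an RGB `uint8` image, and real moiré or pixel-grid effects would need more work.

```python
import cv2
import numpy as np

def rephotograph(img: np.ndarray, warp: float = 0.05, glare: float = 0.6) -> np.ndarray:
    """Very rough 'photo of a photo': slight perspective skew plus one glare blob."""
    h, w = img.shape[:2]
    # small random perspective, as if the camera was slightly off-axis
    jitter = warp * min(h, w)
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = src + np.random.uniform(-jitter, jitter, src.shape).astype(np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    out = cv2.warpPerspective(img, M, (w, h), borderMode=cv2.BORDER_REFLECT)
    # soft white ellipse as screen/paper glare
    mask = np.zeros((h, w), np.float32)
    center = (int(np.random.randint(w)), int(np.random.randint(h)))
    cv2.ellipse(mask, center, (w // 4, h // 8),
                int(np.random.randint(180)), 0, 360, 1.0, -1)
    mask = cv2.GaussianBlur(mask, (0, 0), sigmaX=min(h, w) / 10)
    out = out.astype(np.float32) + glare * 255.0 * mask[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```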
2hard
Title: Incorrect status reported for incomplete logs Body: In latest master branch, if a MIP log is incomplete (i.e. cut off with no termination message for whatever reason), we might report optimal status incorrectly. For example: ``` Variable types: 23522 continuous, 2343 integer (0 binary) Root barrier log... Barrier solved model in 50 iterations and 72.71 seconds (53.24 work units) Optimal objective -1.76339641e+08 Solved with barrier Root relaxation: objective -1.763396e+08, 104343 iterations, 108.23 seconds (79.42 work units) Nodes | Current Node | Objective Bounds | Work Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time ``` here we get 'OPTIMAL' status from `ContinuousParser`, but no termination message from `NodeLogParser`. grblogtoolsv1 would give an 'incomplete log' warning in this situation (and report unknown status? I'm not sure). We should check for this with some custom logic for Status and Runtime, something like: - If the model is continuous, we can get Runtime and Status from ContinuousParser - If the model is (a) a MIP or (b) a continuous model solved as a MIP, we should ignore Runtime and Status from ContinuousParser - (a) We can check using model type in `SingleLogParser` - (b) Look for the message `Solving as a MIP` in header or presolve - If TerminationParser reports runtime or status, it should take precedence (this already happens)
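A hypothetical sketch of that resolution order (all names below are made up for illustration and are not existing grblogtools API):

```python
def resolve_status(model_is_mip, solved_as_mip, continuous, nodelog, termination):
    """Resolve Status per the rules above; each parser result is a dict."""
    if termination.get("Status") is not None:
        return termination["Status"]      # TerminationParser always takes precedence
    if model_is_mip or solved_as_mip:
        return nodelog.get("Status")      # None here signals an incomplete log
    return continuous.get("Status")       # purely continuous model
```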
1medium
Title: Question about implementing a test Body:
3misc
Title: `intensity_limits` would be a better name for `dtype_limits` Body: At first glance, I thought `dtype_limits(...)` would give me the largest and lowest representable number of the dtype of a given image (like `np.finfo()` only for integers and floats). However, the function actually returns our intensity conventions for a given dtype. So I propose to refactor the function: ```python @deprecate_function(...) def dtype_limits(image, clip_negative=False): ... # to def intensity_limits(dtype, *, clip_negative=False): ... ``` I'm guessing that this function isn't used too much in our user base, and the new name should make the function's purpose a lot clearer. I don't think this refactor needs to be a high priority but I'd like to eventually get to it. At the very latest for skimage2.
1medium
Title: Is it possible to pass the same factory dependency to all dependants? Body: For example, I have one use case with three dependencies: Session, ProductRepository, and UserRepository; the repositories depend on the session. Could I pass a single SQLAlchemy session to all of them? When I create a second use case, the session should be different.
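A library-agnostic sketch of one way to get that scoping, reusing the class names from the question (the factory function itself and the `UseCase` signature are illustrative): build one session per use case and hand the same instance to both repositories.

```python
from sqlalchemy.orm import Session, sessionmaker

def build_use_case(session_factory: sessionmaker):
    session: Session = session_factory()      # fresh session for each use case
    return UseCase(
        session=session,
        products=ProductRepository(session),  # both repositories share it
        users=UserRepository(session),
    )
```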
1medium
Title: Plugin: Avalon Body: ### PyPI project name nonebot-plugin-avalon ### Plugin import package name nonebot_plugin_avalon ### Tags [{"label":"game","color":"#ea5252"}] ### Plugin configuration _No response_
3misc
Title: `load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError` Body: ### Describe the bug When loading a large dataset (>1000GB) from S3 I run into the following error: ``` Traceback (most recent call last): File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper return await func(*args, **kwargs) File "/home/alp/.local/lib/python3.10/site-packages/aiobotocore/client.py", line 383, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the GetObject operation: The difference between the request time and the current time is too large. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/alp/phoneme-classification.monorepo/aws_sagemaker/data_processing/inspect_final_dataset.py", line 13, in <module> dataset = load_from_disk("s3://speech-recognition-processed-data/whisper/de/train_data/", storage_options=storage_options) File "/home/alp/.local/lib/python3.10/site-packages/datasets/load.py", line 1902, in load_from_disk return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) File "/home/alp/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1686, in load_from_disk fs.download(src_dataset_path, dest_dataset_path.as_posix(), recursive=True) File "/home/alp/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1480, in download return self.get(rpath, lpath, recursive=recursive, **kwargs) File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 121, in wrapper return sync(self.loop, func, *args, **kwargs) File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 106, in sync raise return_result File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 61, in _runner result[0] = await coro File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 604, in _get return await _run_coros_in_chunks( File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 257, in _run_coros_in_chunks await asyncio.gather(*chunk, return_exceptions=return_exceptions), File "/usr/lib/python3.10/asyncio/tasks.py", line 408, in wait_for return await fut File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1193, in _get_file body, content_length = await _open_file(range=0) File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1184, in _open_file resp = await self._call_s3( File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 348, in _call_s3 return await _error_wrapper( File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 140, in _error_wrapper raise err PermissionError: The difference between the request time and the current time is too large. ``` The usual problem for this error is that the time on my local machine is out of sync with the current time. However, this is not the case here. I checked the time and even reset it with no success. See resources here: - https://stackoverflow.com/questions/4770635/s3-error-the-difference-between-the-request-time-and-the-current-time-is-too-la - https://stackoverflow.com/questions/25964491/aws-s3-upload-fails-requesttimetooskewed The error does not appear when loading a smaller dataset (e.g. our test set) from the same s3 path. ### Steps to reproduce the bug 1. Create large dataset 2.
Try loading it from s3 using: ``` dataset = load_from_disk("s3://...", storage_options=storage_options) ``` ### Expected behavior Load dataset without running into this error. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.3 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
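One mitigation guess, assuming the skew error is really a side effect of very long transfers and retries rather than a wrong clock: raise the botocore retry budget through `storage_options`, which s3fs forwards to botocore's `Config` via `config_kwargs`. Whether this helps here is an assumption.

```python
from datasets import load_from_disk

storage_options = {
    # forwarded by s3fs to botocore.client.Config
    "config_kwargs": {"retries": {"max_attempts": 10, "mode": "adaptive"}},
}
dataset = load_from_disk(
    "s3://speech-recognition-processed-data/whisper/de/train_data/",
    storage_options=storage_options,
)
```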
1medium
Title: Generation2 model files does not work with Pytorch 1.4 Body: Trying to load generation 2 models with `reader = easyocr.Reader(['en'], download_enabled=False)` yields the following error with PyTorch 1.4: `RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at /pytorch/caffe2/serialize/inline_container.cc:132)`
1medium
Title: Add "Open with Google Colab" feature in every notebook Body: The notebooks in this book do not offer a way to run them on Google Colab. This feature would be very helpful for those who are just beginning with deep learning and would make it easier to familiarize ourselves with the code.
0easy
Title: password_scheme [ '"sha512crypt" is not a valid choice.' ] Body: # Impacted versions * OS Type: Debian * OS Version: 12 * Database Type: postgres * Database version: 15.6 * Modoboa: 2.2.4 * installer used: yes * Webserver: nginx # Steps to reproduce 1. log in to new-admin 2. go to /new-admin/parameters/core (go to settings > general) 3. ctrl+shift+k (open the debug console in your browser) 4. no need to change anything; just click the green floppy disk icon in the bottom right corner, then see the response from the server indicating failure # Current behavior This was installed yesterday; almost everything is at default settings (except maybe the imported users and domains CSVs). I believe this might be caused by the users CSV import, but I tried removing those. <!-- Explain the behavior you're seeing that you think is a bug, and explain how you think things should behave instead. --> ## Response ```XHRPUT XHRPUT https://mail.<blablabla>/api/v2/parameters/core/ [HTTP/2 400 112ms] password_scheme [ '"sha512crypt" is not a valid choice.' ] ``` # Expected behavior status 200 # Video/Screenshot link (optional) ![Screenshot_20240411_133221](https://github.com/modoboa/modoboa/assets/120217643/547de2c7-8cf0-4780-8677-ba744f72d584)
1medium
Title: Is there a way to query for widgets? Body: Does pytest-qt provide some way to query for widgets in the "tree" of the UI without making them "public" attributes of some parent widget? I'm thinking of something similar to the `queryByTestId()` facility that some testing libraries use in the context of javascript frontend applications.
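Qt itself offers a lookup that works from tests without exposing attributes: `QObject.findChild`/`findChildren` keyed on `objectName`, which is roughly the moral equivalent of `queryByTestId()`. A sketch, assuming a hypothetical `MyMainWindow` whose button was given an object name at construction time:

```python
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QPushButton

def test_submit_button(qtbot):
    window = MyMainWindow()  # hypothetical application window under test
    qtbot.addWidget(window)
    # query the widget tree by type and objectName instead of an attribute
    button = window.findChild(QPushButton, "submitButton")
    assert button is not None
    qtbot.mouseClick(button, Qt.LeftButton)
```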
1medium
Title: line buffering (buffering=1) warning on Python 3.8 Body: Since the recent change made in #1014, I got a warning like below when launching images based on Python 3.8 (`python=3.8` in environment.yml). ``` /srv/conda/envs/notebook/lib/python3.8/subprocess.py:848: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used ``` I believe this issue is related to [the change](https://bugs.python.org/issue32236) in Python 3.8 where it started complaining whereas previous versions silently ignored `buffering=1` for binary mode. I guess the related code is below: https://github.com/jupyterhub/repo2docker/blob/a5f5bbbb75a9945d1f8fe8f8ff4844dfd4481742/repo2docker/buildpacks/repo2docker-entrypoint#L40-L46 Not sure if related, I also noticed texts on the console (mostly) lost colors after this recent change; Jupyter log messages used to have colored headings for warning/info/etc, but they are now all monochrome. Yet some texts are still printed in colors (i.e. Julia banner lost colors, but its prompt still has a color).
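A minimal sketch of the distinction Python 3.8 now enforces: line buffering only applies in text mode, so either request text mode alongside `bufsize=1` or drop the line-buffering request in binary mode.

```python
import subprocess
import sys

# text mode: bufsize=1 means line buffering and raises no warning on 3.8+;
# in binary mode the same bufsize=1 triggers the RuntimeWarning above
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE, bufsize=1, text=True,
)
print(proc.stdout.readline(), end="")
proc.wait()
```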
1medium
Title: Support Numpy 2 varlen strings Body: ### Description of new feature This is not a feature request per se, rather it's tracking the future possibility of ingesting / exporting NumPy 2 varlen strings. I took a brief glance at this again today (it's amazing how quickly this stuff fades once you're not doing it every day), and it's clear that right now we have some work ahead of us if we want to ingest these strings into Awkward. NumPy's choice to have each string be its own arena-allocated object means that there's no trivial way to ask for a single flat buffer of UTF8 code-units. I only spent a few minutes to look at this, and so far it seems we probably can use the NumPy C API to avoid needing to convert the string into UTF-32 in order to produce a flat buffer. This conversion would need to iterate over every string object and fill a buffer. In the return direction, I don't _think_ we can lean into the simple slice-based view that we have internally. The C API for NumPy varlen strings is opaque w.r.t the allocators, so we would need to exactly reverse the ingest method (i.e. write each substring using the C API).
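A pure-Python sketch of the ingest direction described above: iterate the `StringDType` array once and build Arrow-style offsets plus a flat UTF-8 buffer. This only shows the shape of the conversion, not Awkward's eventual implementation, which would presumably go through the NumPy C API.

```python
import numpy as np

arr = np.array(["one", "two", "three"], dtype=np.dtypes.StringDType())

parts = [s.encode("utf-8") for s in arr]               # per-string iteration
offsets = np.zeros(len(parts) + 1, dtype=np.int64)
offsets[1:] = np.cumsum([len(p) for p in parts])       # cumulative byte offsets
flat = np.frombuffer(b"".join(parts), dtype=np.uint8)  # flat UTF-8 code units
```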
2hard
Title: Use Griffe's public API Body: Any reason you're importing from the internal API? https://github.com/pydantic/pydantic-ai/blob/b9ec73fe8d47d7859dbf7eefbade198f3cc0eb34/pydantic_ai_slim/pydantic_ai/_griffe.py#L7-L8 You're exposing yourself to breakages if I change these internals :sweat_smile: Public equivalent: ```python from griffe import DocstringSectionKind, Docstring, Object as GriffeObject ``` If it's to avoid loading too many things, note that `_griffe.models` imports a lot of stuff anyway: ```python from _griffe.c3linear import c3linear_merge from _griffe.docstrings.parsers import DocstringStyle, parse from _griffe.enumerations import Kind, ParameterKind, Parser from _griffe.exceptions import AliasResolutionError, BuiltinModuleError, CyclicAliasError, NameResolutionError from _griffe.expressions import ExprCall, ExprName from _griffe.logger import logger from _griffe.mixins import ObjectAliasMixin ```
0easy
Title: Add a ranker component that uses an LLM to rerank documents Body: **Describe the solution you'd like** I’d like to add a new ranker component that leverages an LLM to rerank retrieved documents based on their relevance to the query. This would better assess the quality of the top-ranked documents, helping ensure that only relevant results are given to the LLM to answer the question. Additionally, an ability for the LLM to choose how many documents to keep would also be nice. A sort of dynamic top-k, if you will. **Additional context** We have started to employ this for some clients, especially in situations where we need to provide extensive references. Basically, for a given answer we need to provide all relevant documents that support the answer text. Having one reference in these situations is not enough. As a result, in these situations we are willing to pay the extra cost to use an LLM to rerank and only keep the most relevant documents.
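A hypothetical shape for such a component, deliberately independent of Haystack's actual component API; `llm_relevance` stands in for a real LLM call and the threshold is an illustrative default:

```python
def llm_rerank(query, documents, keep_threshold=0.5):
    """Rerank documents by LLM-judged relevance; keep a dynamic top-k."""
    scored = [(doc, llm_relevance(query, doc.content)) for doc in documents]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    # dynamic top-k: the scores, not a fixed k, decide how many survive
    return [doc for doc, score in scored if score >= keep_threshold]
```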
2hard
Title: Custom Scheduler? Body: > <a href="https://github.com/cliffburdick"><img align="left" height="50" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> An issue by [cliffburdick](https://github.com/cliffburdick) at _2020-01-28 23:22:41+00:00_ > Original URL: https://github.com/zalando-incubator/kopf/issues/301 > &nbsp; Great project, and I enjoyed your talk at kubecon 2019! ## Problem Kubernetes 1.17 added a new scheduler framework, but at the same time, are slowly deprecating the ability to use Python as a scheduler language. Most of the new hooks added must be written in Go, and the old scheduler extension framework that was language agnostic is going away. Since custom schedulers are similar to operators, but just watch for the scheduler-name field to appear in the spec, kopf might be able to fit that need. ## Proposal I don't know enough about the internals of kopf, but if the framework of callbacks when a pod using a custom scheduler appeared or disappeared could be reused, that would be ideal. Instead of dealing with a CRD (or in addition to), kopf could provide the scheduler framework the ability to bind to particular nodes. ## Checklist - [X ] Many users can benefit from this feature, it is not a one-time case - [X ] The proposal is related to the K8s operator framework, not to the K8s client libraries Edit: Maybe this is already possible and I'm overthinking it. If kopf is registered to handle a CRD that always has the scheduler-name field set, the default scheduler won't touch it. Will kopf.on.create be enough to trigger the scheduler to know it's there? --- > <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-01-31 09:35:03+00:00_ > &nbsp; Can you please give some links with a description of this new scheduler? — To better understand the change. --- > <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-01-31 10:51:59+00:00_ > &nbsp; I only found https://kubernetes.io/docs/concepts/configuration/scheduling-framework/ — but it is about the pod scheduling, i.e. assigning them to the nodes. I'm not sure this was ever doable with operators (both Go- & Python-based). Technically, Kopf is able to handle built-in resources now, including pods. But this handling is limited to watching over them and patching their fields (either spec or status or metadata). If this is enough for scheduling, then it can be done now. Can you provide some more detailed description of the idea? E.g. with some hypothetical code samples and a step-by-step flow of events explained? To the level of my low knowledge of Kubernetes internals, the plugins are only possible when embedded into the Kubernetes itself. We can probably write a Go-based "mediator" plugin that will communicate with the scheduler framework internally, but with no own logic implemented, just by getting the statuses/values from the pod's fields — and by putting them back. And then, a regular controller/operator can actually do the job and "talk" to the plugin via these fields, which implies "talking" to the scheduler. Is that what you mean? 
--- > <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-01-31 15:31:00+00:00_ > &nbsp; Hi [nolar](https://github.com/nolar), thanks for the response. Let me give you a bit more background. We have a custom Python scheduler we are developing here: https://github.com/Viasat/nhd, that we presented at Kubecon last year. Right now the scheduler does its own reconciliation loop, where the only difference (I believe) from a normal operator is it looks for the "schedulerName" field in the pod spec, and is allowed to bind the pod to a node if that field is set for it. The main feature of the scheduler is it makes decisions based on available hardware resources on the node. As you can imagine, that involves watching for new pods to show up, taking resources from a node when they are scheduled, and freeing those resources when the pod dies. The piece where it watches for new pods to show up and be deleted seem to be the same as a kopf.on.create/delete. I had to write my own (poor) logic to do this, even though it's been done a million times before. It's no problem, and even preferable, to require a CRD for these pods, since they are very similar, but not quite the same, as a StatefulSet. You mentioned kopf is limited to watching over pods and patching, and I think other than calling the client API bind command, a scheduler is nothing more than that (at least a simple one). With a CRD, I don't think we have to worry about the "scheduler" part of it, because the CRD would be the first object to come in, and that being created gives kopf the ability to deploy the pods. So the work flow would be: 1. Watch for CRD type ABC to show up 2. on.create handler for ABC creates pods with the scheduler-name set to scheduler-ABC 3. on.create pod handler with filter on scheduler-name or some other identifier sees pod come in and binds it to a node // This is the one I'm not sure is possible. Would on.create be triggered if the pod is in a pending state without a node bound to it? Does this handler only get triggered when the pod is successfully deployed? 4. on.delete pod handlers for that same type, and schedulers frees appropriate internal resources. My idea is that the kopf framework is running an operator that looks for this CRD to show up, launches an appropriate number of pods as owned by that CRD, and sees when the pods (or CRD) are deleted. Because much of the difficulty is in writing the reconciliation loop, which you've already solved, I figured the scheduler could simply be a wrapper around on.create/delete. I saw a lot of issues related to watching pods under CRDs, and it wasn't entirely clear to me if that's supported/working yet, since it seemed you were still coming up with ideas on how to handle that. Also, after the CRD is deployed, I wanted to have another kopf operator silently watching the child pods of these CRDs as well, since it's in charge of doing things like sending messages to these pods when they come up, providing them configuration, etc. For more context on Kubernetes, what I really wanted to do was make our scheduler into a [scheduler extension](https://kubernetes.io/docs/concepts/extend-kubernetes/extend-cluster/#scheduler-extensions). 
In the past, this was simply making a webhook that was called after the main scheduler was run, and it acted as a pre-filter step for other schedulers by doing the normal things like removing dead nodes/unhealthy nodes, etc. Unfortunately, it seems scheduler extensions via a webhook is going to be deprecated at some point in the future in favor of the scheduler framework added recently. The reason is the scheduler framework allows more flexibility as to where you want to plug in to compared to the webhook. The scheduler framework, at least my understanding, requires your scheduler to be written in Go, so this effectively will make writing a scheduler extension in Python impossible in the future without hacking in a Go module. But I digress. I hope this helps describe the goal, and maybe you can say whether that's possible or not, since I really like the idea of kopf and think it would be a great application. --- > <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-01-31 21:13:57+00:00_ > &nbsp; **For the first part,** as I understand, you want something like this: ```python import kopf import pykube @kopf.on.create('zalando.org', 'v1', 'kopfexamples') def spawn_kexy_pods(**_): pod_body = {'spec': {'scheduler-name': 'scheduler-ABC', ...}, ...} # or parse a yaml template kopf.adapt(pod_body) # for cascaded deletions # kopf.label(pod_body, {'mykexypod': 'yes'}) # perhaps, not needed api = pykube.HTTPClient() pod = pykube.Pod(api, pod_body) pod.create() def _designated_for_us(spec, **_): return spec.get('scheduler-name') == 'scheduler-ABC' @kopf.on.create('', 'v1', 'pods', # labels={'mykexypod': 'yes'}, # perhaps, not needed when=_designated_for_us) def bind_pod_to_node(namespace, name, patch, **_): node_name = call_api_to_bind_it(namespace, name) patch.setdefault('metadata', {}).setdefault('labels', {})['node'] = node_name # The code below is optional: def _assigned_to_a_node(old, new, **_): old_node = old.get('metadata', {}).get('labels', {}).get('node') new_node = new.get('metadata', {}).get('labels', {}).get('node') return new_node is not None and old_node != new_node @kopf.on.update('', 'v1', 'pods', when=_assigned_to_a_node) def notice_node_assigned(**_): pass # congrats! ``` Specifically: > Would on.create be triggered if the pod is in a pending state without a node bound to it? Does this handler only get triggered when the pod is successfully deployed? On-creation handlers are triggered when the pod is seen for the first time. I.e. when it is created. Usually, you can expect the handling to be done near instantly, much before the pod is actually started by any schedulers (but it already exists). You might want to take a look into the `@on.update` handlers, or `@on.event` low-level handlers — to track when the pod is assigned/bound to the nodes. If you know the field where this information is stored, you can also use `@kopf.on.field(..., field="metadata.labels.node")`. on-create, on-update, on-delete, on-field handlers will be retried until succeeded. on-event handler is fire-and-forget: if it fails, it will not be retried (or until the new event arrives some time later). There are no special events for the pod's conditions (yet), and there is no special treatment for the pods above other resource kinds (yet). But that can be expressed via the `when=` filters. Or do I miss something? 
--- > <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-02-01 19:47:35+00:00_ > &nbsp; [nolar](https://github.com/nolar) thanks! I think this sounds very promising, and I'll do some prototyping over the coming weeks. I really appreciate your comments, and I'll let you know when I update our project to use it. By the way, I know nodes are not objects necessarily, but the ability to watch node status and be alerted is also really important for schedulers. This is likely outside the scope of kopf, though, and can still be done by separate code. --- > <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-08-19 14:59:21+00:00_ > &nbsp; Marking this closed as this is now integrated into NHD: https://github.com/Viasat/nhd
1medium
Title: Excel files do not work in the `parse_contents()` function provided in the sample code for dcc.Upload Body: Thank you so much for helping improve the quality of Dash! We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through. **Describe the bug** I noticed this in my own code, then referred to the sample code provided and found the same issue. When I click the 'Drag and Drop or Select Files' button in the documentation's sample code for [dcc.Upload](https://dash.plotly.com/dash-core-components/upload) and select an Excel file, I receive the error 'There was an error processing this file.'. **Expected behavior** I expect the file to upload, the contents to be parsed, and a table to be generated. **Screenshots** ![image](https://github.com/user-attachments/assets/4ae6ef47-0362-481a-ab96-769d91a2a5ec)
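For reference, a re-sketch of the Excel branch of the docs' `parse_contents()`. One plausible root cause (an assumption, not confirmed) is a missing Excel engine such as `openpyxl`, which pandas raises as an ImportError that the example's broad `except` turns into the generic "error processing this file" message.

```python
import base64
import io

import pandas as pd

def parse_excel(contents: str) -> pd.DataFrame:
    # dcc.Upload delivers contents as "data:<mime>;base64,<payload>"
    _content_type, content_string = contents.split(',')
    decoded = base64.b64decode(content_string)
    return pd.read_excel(io.BytesIO(decoded))  # .xlsx needs openpyxl installed
```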
1medium
Title: onHover event in ComboBox changes selected_item Body: I am trying to extract the selected value from a ComboBox with `combo_box.selected_text()` through the `win32` backend, with no success: every time my mouse hovers over another **not selected** item, this function returns the hovered item's text rather than my selected item's text. Am I missing something, or is this intentional behavior for the onHover event? I've tried numerous ways to extract the selected value, and also tried to detect whether the combo box list is visible (which would mean the user might be hovering over some items), but with no success at all. I noticed (through Accessibility Insights) that there is a property on the `ComboBox` item indicating the `selectedValue`; how do I extract it? <img width="414" alt="Screen Shot 2022-12-18 at 10 19 54" src="https://user-images.githubusercontent.com/5401999/208290452-633c13d6-6263-4681-a59f-3de2bf09ff81.png">
1medium
Title: mask2former model exported with pytorch2torchscript.py: the GPU torchscript model loses a lot of accuracy, while the CPU one does not Body: I also tried exporting the model directly with the code below; as above, no errors were reported, but the result still did not change. `model = init_model(config_path, checkpoint_path, device='cuda:0') verify = True imgs = torch.randn(1,3,512,512).to("cuda") traced_model = torch.jit.trace(model,example_inputs=imgs,check_trace=verify,) traced_model.save(output_file)` Whereas inference with result = mmseg.apis.inference_model(model, img) gives normal accuracy, **and the CPU torchscript model also gives normal results**. **Could it be that mmsegmentation has a bug exporting torchscript models for GPU and does not support it?** I wanted to export with traced_model = torch.jit.script(model) instead, but it raises errors that I cannot resolve right away.
2hard
Title: [BUG] Ray task backend no progress Body: <!-- Thank you for your contribution! Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue. --> **Describe the bug** Ray task mode doesn't update progress until the whole task has finished: ![image](https://user-images.githubusercontent.com/12445254/183634801-331c03f6-5a5d-4ba5-b11f-9645c0840e93.png) **To Reproduce** To help us reproduce this bug, please provide the information below: 1. Your Python version: 3.7.9 2. The version of Mars you use: master 3. Versions of crucial packages, such as numpy, scipy and pandas 4. Full stack of the error. 5. Minimized code to reproduce the error. ```python def test_groupby(n=10): from datetime import datetime start = datetime.now() df = md.DataFrame( mt.random.rand(n * 500, 4, chunk_size=500), columns=list('abcd')) # print(df.sum().execute()) result = df.groupby(['a']).apply(lambda pdf: pdf).execute() duration = datetime.now() - start return result, duration mars.new_session(n_worker=10, n_cpu=10*2, backend="ray") test_groupby(200) ``` **Expected behavior** A clear and concise description of what you expected to happen. **Additional context** Add any other context about the problem here.
1medium
Title: Number of features remains disabled when Suggest features is closed during search Body: ![suggest features](https://github.com/biolab/orange3/assets/5299789/8fd3572d-3564-45b0-b4ea-a0a23dea0cfa) How to reproduce: 1. open _Suggest features_ and run 2. close while running 3. choose another mode (circular, LDA or PCA) 4. open _Suggest features_: the _Number of variables_ field is disabled despite no search running OS: Windows 10 x64 Orange: 3.35.0
1medium
Title: How to detect noise, breaks, and multiple speakers in an audio file? Body: Hello, I am not in the audio field. For a reference audio clip, I have removed BGM and reverberation to a certain extent, but the result of feeding it into voice cloning is still not good. Is there a better way to detect whether there is noise, distortion, or multiple people speaking in the reference audio?
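As a crude first pass (a heuristic sketch only; proper noise and overlapped-speech detection would use dedicated models such as VAD or speaker diarization), you could estimate an SNR proxy from frame energies and flag clips whose value falls below a threshold picked empirically:

```python
import numpy as np

def snr_estimate_db(wave: np.ndarray, frame: int = 1024) -> float:
    """Rough SNR proxy: quietest frames approximate the noise floor."""
    frames = wave[: len(wave) // frame * frame].reshape(-1, frame)
    energy = (frames.astype(np.float64) ** 2).mean(axis=1)
    noise_floor = np.percentile(energy, 10) + 1e-12
    speech_level = np.percentile(energy, 90)
    return 10.0 * np.log10(speech_level / noise_floor)
```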
1medium
Title: Faker.generate should allow calling without any kwargs Body: `Faker.generate` currently requires a kwargs argument, but doesn't actually need one; i.e. `Faker(...).generate({})` works just fine with most providers. The kwargs argument should be optional. It would also be good if the kwargs were real kwargs, not just a passed-in dict.
1medium
Title: Use Arrow PyCapsule Interface instead of Dataframe Interchange Protocol Body: This is something I've chatted with @MarcoGorelli offline about. At the time it was implemented in seaborn, the Dataframe Interchange Protocol was the best option for exchanging dataframe-like data. However, since that was implemented in seaborn, the [PyArrow Capsule Interface](https://arrow.apache.org/docs/format/CDataInterface/PyCapsuleInterface.html) has come along and solved many of the issues that the DataFrame Interchange Protocol left open. Without knowing the current state of the interchange implementation of seaborn, switching to the PyArrow Capsule Interface should solve at least the following issues: - It will add support for polars and other dataframe libraries (https://github.com/mwaskom/seaborn/issues/3277 and https://github.com/mwaskom/seaborn/issues/3188) - It will use the Arrow type system, which supports aggregate types (https://github.com/mwaskom/seaborn/issues/3533) - The wonkiness of pandas' type system won't be inherited by seaborn (potentially solving https://github.com/mwaskom/seaborn/issues/3519) The interface has been adopted by a good deal of projects already, some of which are being tracked in https://github.com/apache/arrow/issues/39195
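The consuming side can stay small. A sketch of the detection step, assuming a reasonably recent pyarrow that accepts capsule-exporting objects in `pa.table`:

```python
import pyarrow as pa

def to_arrow_table(data):
    # objects exporting the Arrow PyCapsule stream interface (pandas, polars, ...)
    if hasattr(data, "__arrow_c_stream__"):
        return pa.table(data)
    raise TypeError("object does not export Arrow data")
```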
1medium
Title: ValueError: We need an `offload_dir` to dispatch this model according to this `device_map`... Body: Hi everyone! Today I tried running Huatuo-Llama-Med-Chinese (the whole process is described below) and ran into this error: ``` $ bash scripts/infer.sh /usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (2.0.4) or chardet (4.0.0) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported " ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues ================================================================================ CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so CUDA SETUP: Highest compute capability among GPUs detected: 5.0 CUDA SETUP: Detected CUDA version 122 /home/tcmai/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU! warn(msg) CUDA SETUP: Loading binary /home/tcmai/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda122_nocublaslt.so... /home/tcmai/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable. warn("The installed version of bitsandbytes was compiled without GPU support. " The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'LLaMATokenizer'. The class this function is called from is 'LlamaTokenizer'. The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function. 
Loading checkpoint shards: 100%|█████████████████████████████████████| 33/33 [01:52<00:00, 3.40s/it] using lora ./lora-llama-med Traceback (most recent call last): File "/data/source/medical-llm/Huatuo-Llama-Med-Chinese-git/infer.py", line 125, in <module> fire.Fire(main) File "/home/tcmai/.local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "/home/tcmai/.local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire component, remaining_args = _CallAndUpdateTrace( File "/home/tcmai/.local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "/data/source/medical-llm/Huatuo-Llama-Med-Chinese-git/infer.py", line 47, in main model = PeftModel.from_pretrained( File "/home/tcmai/.local/lib/python3.10/site-packages/peft/peft_model.py", line 181, in from_pretrained model.load_adapter(model_id, adapter_name, **kwargs) File "/home/tcmai/.local/lib/python3.10/site-packages/peft/peft_model.py", line 406, in load_adapter dispatch_model( File "/home/tcmai/.local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 345, in dispatch_model raise ValueError( ValueError: We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules need to be offloaded: base_model.model.model.layers.4, base_model.model.model.layers.5, base_model.model.model.layers.6, base_model.model.model.layers.7, base_model.model.model.layers.8, base_model.model.model.layers.9, base_model.model.model.layers.10, base_model.model.model.layers.11, base_model.model.model.layers.12, base_model.model.model.layers.13, base_model.model.model.layers.14, base_model.model.model.layers.15, base_model.model.model.layers.16, base_model.model.model.layers.17, base_model.model.model.layers.18, base_model.model.model.layers.19, base_model.model.model.layers.20, base_model.model.model.layers.21, base_model.model.model.layers.22, base_model.model.model.layers.23, base_model.model.model.layers.24, base_model.model.model.layers.25, base_model.model.model.layers.26, base_model.model.model.layers.27, base_model.model.model.layers.28, base_model.model.model.layers.29, base_model.model.model.layers.30, base_model.model.model.layers.31, base_model.model.model.norm, base_model.model.lm_head. $ ``` The whole process was roughly: (1) cloned this Huatuo-Llama-Med-Chinese repo, downloaded the four model weight files mentioned in the readme, and pip-installed the dependencies; (2) ran ` $ bash scripts/infer.sh `; following the error hints, manually compiled and installed the cuda122 build of bitsandbytes that supports my GPU; (3) ran ` $ bash scripts/infer.sh ` again; following the error hints, cloned the base model data from https://huggingface.co/decapoda-research/llama-7b-hf ; (4) ran ` $ bash scripts/infer.sh ` again, and the "ValueError: We need an `offload_dir` to dispatch this model according to this `device_map`" error above appeared. For now I'm stuck at step (4) and don't know what causes this error. Is it that the recent llama-7b-hf data is incompatible, or something else? Thanks!
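A commonly suggested workaround for this accelerate error, offered here as a hedged guess since whether it applies depends on the peft version: give `PeftModel.from_pretrained` an offload folder so `dispatch_model` has somewhere to put the layers that do not fit on the GPU. The variable names follow infer.py; `./offload` is an arbitrary path.

```python
model = PeftModel.from_pretrained(
    model,
    LORA_WEIGHTS,                # "./lora-llama-med" in infer.py
    torch_dtype=torch.float16,
    device_map="auto",
    offload_folder="./offload",  # supplies the missing `offload_dir`
)
```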
1medium
Title: Unreachable, WinRT, GattSessionStatus.CLOSED possible Solution Body: * bleak version: 0.21.0a1 * bleak-winrt version: 1.2.0 * Python version: 3.8.10 * Operating System: Win 11 22H ### Description I am working on a program to read the Samsung gearVRC (remote controller). It's a device that does not have Windows drivers, but one can read it directly with a custom program (https://github.com/rdady/gear-vr-controller-linux) I was able to scan for devices, services and characteristics on the device using the standard service_explorer.py example; however, I was getting the "Unreachable" and not-connected errors. I found that a reliable way to solve the problem is by removing the device from the system and starting with a system that has no prior knowledge of the bluetooth device: ``` - Settings - Bluetooth & devices - Devices - Click on 3 dots next to the device name - Remove Device - Bluetooth Slider to Off and back to On - If device reappears, repeat removal (above two steps), after one repetition device should be permanently removed ``` The unreachable problem is likely Windows-related, as it also occurs with BLEConsole (https://github.com/sensboston/BLEConsole) The hypothesis is that when the bluetooth connection is closed, Windows retains device information that triggers a disconnection (either by Windows or the device itself). It's also possible that the incorrect device information is related to pairing. Since there is no BLE command line tool for pairing in Windows, it's difficult to test. ### Request It would be useful if the documentation section **Troubleshooting** would list **how to completely remove a Bluetooth device from a Windows system**, and suggest this as a **troubleshooting step** before investigating the device with Wireshark. ### What I Did I made my own program to read the battery status of my device and below, in the Logs section, is the sample output. One can see that the GattSession is getting closed before it's possible to read the characteristic. The same behaviour can be seen with examples/service_explorer.py: ``` py -3 .\service_explorer.py --name 'Gear VR Controller(17DB)' ``` I attempted changing options for ``` BleakClient(myDevice, winrt=dict(address_type='public', use_cached_services=True)) ``` which did NOT solve the unreachable issue. I removed the battery from the device to reset it. It did NOT solve the issue. I attempted pairing with: ``` paired = await client.pair(protection_level=2) print("{} is {}".format(DEVICE_NAME, 'paired' if paired else 'not paired')) ``` regardless of the protection_level, bleak reports the device is NOT paired. I removed the device from the system as described above and it SOLVED the issue. I need to remove the device manually each time before I can use it though. Therefore, I attempted creating a PowerShell script for automatic device removal (see below), but so far I have NOT been able to mimic the manual steps listed above. ``` $device = Get-PnpDevice -FriendlyName "Gear VR Controller(17DB)" pnputil.exe /remove-device $device.InstanceId Restart-Service -Name bthserv -Force Update-HostStorageCache ``` ### Logs ``` Reading Value... 
Gear VR Controller(17DB) is connected DEBUG:bleak.backends.winrt.client:session_status_changed_event_handler: id: BluetoothLE#BluetoothLEe0:d4:64:23:f6:1c-2c:ba:ba:2e:17:db, error: BluetoothError.SUCCESS, status: GattSessionStatus.CLOSED DEBUG:bleak.backends.winrt.client:closing requester DEBUG:bleak.backends.winrt.client:closing session 00002a19-0000-1000-8000-00805f9b34fb (Handle: 10): Battery Level: (['read', 'notify']), Error Could not read characteristic handle 10: Unreachable ```
1medium
Title: DOC: pandas getting started Body:
0easy
Title: Installing error on windows for latest installer Body: (check) CPU Supports AVX Instructions (check) CPU Supports SSE4 Instructions (check) Completed check for installed applications (check) Setting up for: nvidia Downloading Miniconda3... Installing Miniconda3. This will take a few minutes... Error Installing Miniconda3 Install Aborted Here is the full error message. I have tried installing Miniconda3 separately, but still got the same error.
1medium
Title: Paste code snippet from chat UI into editor with drag and drop Body: ### Problem Only ways to put text from chat UI back into editor are manual copy-and-paste and "replace selection" option on message submission ### Proposed Solution Allow to paste code text / chat messages / code snippets from chat UI into editor with drag and drop. Figma concept: ![image](https://github.com/jupyterlab/jupyter-ai/assets/26686070/abfc8324-5fd6-40ab-9162-1b1f4f411b35)
1medium
Title: MemoryError while reading a document using NLP.pipe(text, disable=["tagger", "parser"]) Body: In this use case I am trying to create noun chunks using spaCy, processing documents in batches (batch size can vary from 20 to 100). I need to process 2.8 million documents overall, using the large spaCy English model. In a for loop I run NLP over each batch using NLP.pipe(text, disable=["tagger", "parser"]). Most of the time it works fine, but for some batches I start getting MemoryError. What is the reason for this error? Is it an infrastructure issue, such as insufficient CPU or RAM while processing that batch, or is there a problem with the way I am using spaCy in my code?

## How to reproduce the behaviour

texts = Python list of large documents. Note: many of these documents have a character length of 5 million; the average size of a document is 1.5 million characters.

```
texts = data_df[CONTENTS].to_list()
with multiprocessing.Pool(processes=no_of_cores) as pool:
    noun_chunks = create_noun_chunks(texts)


def create_noun_chunks(text: Iterable[str]) -> List[Sequence[str]]:
    """
    Create noun chunks for a given text, after removing entities
    :param text: text for which noun chunks are required
    :return: strings; each noun chunk in the string is delimited by \n
    """
    global NLP
    if NLP is None:
        NLP = spacy.load(settings.NLP_LIB)
        NLP.max_length = 5000000
    all_chunks = []
    for txt, doc in zip(text, NLP.pipe(text, disable=["tagger", "parser"])):
```

It is while executing this for loop line that I get the memory error. Is it because each element in the texts list is a very large text of millions of characters, and there are 20 or 100 such elements in texts, that I am running into the memory error?

Traceback:

```
File "/home/user/projects/my_project/10.6.12/src/core/utilities/text_utility.py", line 174, in create_noun_chunks
  for txt, doc in zip(text, NLP.pipe(text, disable=["tagger", "parser"])):
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/spacy/language.py", line 1583, in pipe
  for doc in docs:
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/spacy/util.py", line 1611, in _pipe
  yield from proc.pipe(docs, **kwargs)
File "spacy/pipeline/transition_parser.pyx", line 230, in pipe
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/spacy/util.py", line 1560, in minibatch
  batch = list(itertools.islice(items, int(batch_size)))
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/spacy/util.py", line 1611, in _pipe
  yield from proc.pipe(docs, **kwargs)
File "spacy/pipeline/pipe.pyx", line 53, in pipe
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/spacy/util.py", line 1611, in _pipe
  yield from proc.pipe(docs, **kwargs)
File "spacy/pipeline/pipe.pyx", line 53, in pipe
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/spacy/util.py", line 1611, in _pipe
  yield from proc.pipe(docs, **kwargs)
File "spacy/pipeline/trainable_pipe.pyx", line 79, in pipe
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/spacy/util.py", line 1630, in raise_error
  raise e
File "spacy/pipeline/trainable_pipe.pyx", line 75, in spacy.pipeline.trainable_pipe.TrainablePipe.pipe
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/spacy/pipeline/tok2vec.py", line 125, in predict
  tokvecs = self.model.predict(docs)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/model.py", line 315, in predict
  return self._func(self, X, is_train=False)[0]
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/layers/chain.py", line 54, in forward
  Y, inc_layer_grad = layer(X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/model.py", line 291, in __call__
  return self._func(self, X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/layers/with_array.py", line 40, in forward
  return _list_forward(cast(Model[List2d, List2d], model), Xseq, is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/layers/with_array.py", line 76, in _list_forward
  Yf, get_dXf = layer(Xf, is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/model.py", line 291, in __call__
  return self._func(self, X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/layers/chain.py", line 54, in forward
  Y, inc_layer_grad = layer(X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/model.py", line 291, in __call__
  return self._func(self, X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/layers/residual.py", line 40, in forward
  Y, backprop_layer = model.layers[0](X, is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/model.py", line 291, in __call__
  return self._func(self, X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/layers/chain.py", line 54, in forward
  Y, inc_layer_grad = layer(X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/model.py", line 291, in __call__
  return self._func(self, X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/layers/chain.py", line 54, in forward
  Y, inc_layer_grad = layer(X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/model.py", line 291, in __call__
  return self._func(self, X, is_train=is_train)
File "/home/user/anaconda3/envs/10.6.12/lib/python3.7/site-packages/thinc/layers/maxout.py", line 49, in forward
  Y = model.ops.gemm(X, W, trans2=True)
File "thinc/backends/numpy_ops.pyx", line 93, in thinc.backends.numpy_ops.NumpyOps.gemm
File "blis/py.pyx", line 72, in blis.py.gemm
MemoryError
```

## Your Environment

* Operating System: RedHat on the server; Linux machine for development
* Python Version Used: 3.7.*
* spaCy Version Used: 3.3.*
* Environment Information: Conda environment
1medium
Title: Lost all the default import and export options after upgrading to V4 Body: We have lost all the default file import/export options in the new version. Below is what was working previously:

![image](https://github.com/django-import-export/django-import-export/assets/71445383/4701c424-a582-4811-b82e-dfdd0ccda300)

Below are the steps to reproduce:

**requirements.txt**
```
django-import-export==4.0.1
django==5.0.6
openpyxl==3.1.2
```

**models.py**
```
from django.db import models


class Division(models.Model):
    id = models.AutoField(primary_key=True)
    name = models.CharField(max_length=100)
    description = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def __str__(self):
        return self.name

    class Meta:
        verbose_name_plural = "Divisions"


class Site(models.Model):
    division = models.ForeignKey(Division, on_delete=models.CASCADE)
    code = models.CharField(max_length=10, primary_key=True)
    name = models.CharField(max_length=100)
    description = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def __str__(self):
        return self.name

    class Meta:
        verbose_name_plural = "Sites"
```

**admin.py**
```
from django.contrib import admin
from import_export import resources
from import_export.admin import ImportExportMixin

from .models import Division, Site


class DivisionResource(resources.ModelResource):
    class Meta:
        model = Division
        fields = ("id", "name", "description", "created_at", "updated_at")


@admin.register(Division)
class DivisionAdmin(ImportExportMixin, admin.ModelAdmin):
    resource_class = DivisionResource
    list_display = ("id", "name", "description", "created_at", "updated_at")
    search_fields = ("name", "description")


class SiteResource(resources.ModelResource):
    class Meta:
        model = Site
        fields = ("division", "code", "name", "description", "created_at", "updated_at")
        import_id_fields = ("code",)


@admin.register(Site)
class SiteAdmin(ImportExportMixin, admin.ModelAdmin):
    resource_class = SiteResource
    list_display = (
        "division",
        "code",
        "name",
        "description",
        "created_at",
        "updated_at",
    )
    search_fields = ("code", "name", "description")
```

Now the only options we have are csv, tsv and json. No settings were changed.

![image](https://github.com/django-import-export/django-import-export/assets/71445383/42231ab2-e155-4d79-adec-6c686567543a)
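A hedged note for anyone landing on the same symptom, based on my recollection of the v4 release notes rather than anything confirmed in this report: in 4.x the non-core format dependencies became optional extras (e.g. `pip install "django-import-export[xlsx]"` or `[all]`), so formats that were bundled in 3.x may silently disappear from the dropdown. Independently of that, the offered formats can be pinned explicitly in settings. The sketch below shows the settings approach; the specific format lists are illustrative choices, not a prescribed fix.

```
# settings.py sketch: the format classes are names from
# import_export.formats.base_formats; the chosen lists are illustrative.
from import_export.formats.base_formats import CSV, JSON, TSV, XLSX

IMPORT_FORMATS = [CSV, XLSX]
EXPORT_FORMATS = [CSV, TSV, XLSX, JSON]
```

The same library also supports a per-ModelAdmin `formats` attribute if only one admin needs the extra formats.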
1medium
Title: How to remove small stray hairs in portrait matting? Body: ![Image](https://github.com/user-attachments/assets/5bc0f276-2a79-45fb-a320-5c41f59b5c85)

After generating an ID photo from the image above, the image below is produced:

![Image](https://github.com/user-attachments/assets/e71257d7-1690-481e-b6ac-d11389421151)

I want to remove those small stray hairs around the edges. Does anyone know how?
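One possible direction, offered as an assumption rather than a known fix for this tool: thin stray hairs usually survive as narrow spikes in the alpha channel of the matting result, so a morphological opening with a small elliptical kernel tends to erase them while keeping the main silhouette. The sketch below uses OpenCV; the file path, kernel size, and blur radius are hypothetical values to tune per image.

```
# Hedged sketch: suppress thin strands in the alpha channel via morphology.
import cv2
import numpy as np

rgba = cv2.imread("idphoto.png", cv2.IMREAD_UNCHANGED)  # hypothetical path; must be RGBA
alpha = rgba[:, :, 3]

# Opening erases structures thinner than the kernel; a light blur then
# restores soft edges so the cutout does not look jagged.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(alpha, cv2.MORPH_OPEN, kernel)
smooth = cv2.GaussianBlur(opened, (3, 3), 0)

# Only ever reduce alpha, never add it, so the subject itself is untouched.
rgba[:, :, 3] = np.minimum(alpha, smooth)
cv2.imwrite("idphoto_clean.png", rgba)
```

The trade-off in this approach is that a kernel large enough to remove every strand can also eat fine but wanted hair detail, so starting small and increasing the kernel size gradually is safer.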
1medium