text: string (lengths 20 – 57.3k)
labels: class label (4 classes)
Title: Public release of model weights Body: Congratulations on the fine-tune! We have observed some fantastic performance through the provided web interface. AFAIK the original Llama model was released under GNU/GPL, so you should be able to distribute derivative work respecting this original license, correct? (Even if the original model weights have not officially been distributed to the public yet.) Will you provide some sort of wait-list to notify us when the model weights are made available? I'm interested in as much information as you can share on this. Again, congratulations, and thank you for your impressive work! https://github.com/facebookresearch/llama/blob/main/LICENSE
3misc
Title: Is it necessary to record the generation time of each issue? Body: ## Project recommendation - Project URL: only open-source projects hosted on GitHub are accepted; please provide the GitHub project URL - Category: please choose one of (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Swift, Other, Books, Machine Learning) - Planned follow-up updates: - Project description: - Required: what the project is, what it can be used for, and its distinctive features or the pain points it solves - Optional: which scenarios it suits and what beginners can learn from it - Description length (excluding example code): 10 - 256 characters - Reason for recommending: what makes it stand out? What pain point does it solve? - Example code: (optional) length: 1-20 lines - Screenshot: (optional) gif/png/jpg ## Notes (please delete the following before submitting) > Click "Preview" above to read the following more easily. To increase the chance of your project being accepted: 1. Search for the project URL on the HelloGitHub homepage (https://hellogithub.com) to check whether it has already been recommended. 2. Adjust the project according to the [project review criteria](https://github.com/521xueweihan/HelloGitHub/issues/271). 3. If your recommended project is included in the HelloGitHub monthly, your GitHub account will be shown in the [contributors list](https://github.com/521xueweihan/HelloGitHub/blob/master/content/contributors.md), **and you will be notified in this issue**. Thanks again for your support of the HelloGitHub project!
3misc
Title: ResourceProtector decorator doesn't work with class-based Django views Body: **Describe the bug** When using the ResourceProtector decorator (as documented [here](https://docs.authlib.org/en/latest/django/2/resource-server.html)) on a Django REST Framework **class-based view**'s method: ```python class MyView(APIView): @require_oauth("order") def post(self, request, *args, **kwargs): return super().post(request, *args, **kwargs) ``` I get the following error: > 'MyView' object has no attribute 'get_raw_uri' This is because in this case, the first parameter in the [decorator's `decorated` function](https://github.com/lepture/authlib/blob/ffeeaa9fd7b5bc4ea7cae9fcf0c2ad9d7f5cf22a/authlib/integrations/django_oauth2/resource_protector.py#L36), will be the **view object**, rather than the request. Adding a `view` parameter as the first parameter in the function fixes this. ```python def __call__(self, scopes=None, optional=False): def wrapper(f): @functools.wraps(f) def decorated(view, request, *args, **kwargs): # <= Change here try: token = self.acquire_token(request, scopes) request.oauth_token = token ``` **Error Stacks** ``` File "/.venv/lib/python3.6/site-packages/rest_framework/views.py", line 502, in dispatch response = handler(request, *args, **kwargs) File "/.venv/lib/python3.6/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 39, in decorated token = self.acquire_token(request, scopes) File "/.venv/lib/python3.6/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 25, in acquire_token url = request.get_raw_uri() AttributeError: 'MyView' object has no attribute 'get_raw_uri' ``` **To Reproduce** See code example in the bug description above. **Expected behavior** The decorator to work the same way as it does for function-based views. **Environment:** - OS: OSX - Python Version: 3.6.9 - Authlib Version: 1.0.0.dev0 **Additional context** I'm available to create a PR to fix this if you tell me the approach you want to take here.
1medium
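A hedged workaround sketch for the Authlib issue above: Django ships `method_decorator`, which adapts a function decorator that expects `request` as its first argument so it can sit on a class-based view method — exactly the mismatch described in the traceback. The `require_oauth` name is the ResourceProtector instance from the issue's own setup (imported here from a hypothetical module); whether this plays nicely with this Authlib version is an assumption, not a confirmed fix.

```python
# Sketch only: assumes `require_oauth = ResourceProtector()` configured as in the
# Authlib resource-server docs referenced by the issue.
from django.utils.decorators import method_decorator
from rest_framework.views import APIView
from rest_framework.response import Response

from myproject.oauth import require_oauth  # hypothetical module holding the protector


class MyView(APIView):
    # method_decorator strips `self`, so the protector's `decorated(request, ...)`
    # wrapper receives the HttpRequest it expects instead of the view instance.
    @method_decorator(require_oauth("order"))
    def post(self, request, *args, **kwargs):
        return Response({"ok": True})
```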
Title: .transform() does not generate probability distributions despite calculate_probabilities=True Body: When I fit BERTopic on my documents and embeddings with fit_transform(), I get a list of topic assignments and a 2d array of soft-clustering probability distributions. But if I then take that fitted model and feed it new (or the same) data again using .transform(), I get a list of topic assignments but no soft-clustering probabilities - instead I get a 1d array with a single value per document (0 for data points considered noise). How do I get the distributions for new data?
1medium
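For the BERTopic question above, a hedged sketch of one way to get per-document topic distributions for new data: recent BERTopic releases ship `approximate_distribution()`, which computes a distribution over topics without re-running HDBSCAN. Whether it is available depends on the installed version (that is an assumption here), so treat this as a pointer rather than a confirmed answer.

```python
# Sketch assuming a BERTopic version that provides approximate_distribution().
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:1000]

topic_model = BERTopic(calculate_probabilities=True)
topics, probs = topic_model.fit_transform(docs)   # probs is 2-D at fit time

# Per-topic distributions for new (or the same) documents, computed post hoc:
new_docs = docs[:10]
topic_distr, _ = topic_model.approximate_distribution(new_docs)
print(topic_distr.shape)   # (len(new_docs), number_of_topics)
```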
Title: test FAILED QuantizeLinearOpMLFloat16Test.Float8 Body: # Bug Report When executing the build command given below, the build failed with a failed test case. Also, there appear to be memory leaks detected. The build.log is attached. ### Describe the bug [ FAILED ] QuantizeLinearOpMLFloat16Test.Float8 [----------] Global test environment tear-down [==========] 4054 tests from 285 test suites ran. (2197772 ms total) [ PASSED ] 4053 tests. [ FAILED ] 1 test, listed below: [ FAILED ] QuantizeLinearOpMLFloat16Test.Float8 1 FAILED TEST YOU HAVE 15 DISABLED TESTS ### System information Edition Windows 10 Pro for Workstations Version 22H2 Installed on ‎6/‎29/‎2022 OS build 19045.3930 Experience Windows Feature Experience Pack 1000.19053.1000.0 Microsoft Visual Studio Community 2022 Version 17.6.5 VisualStudio.17.Release/17.6.5+33829.357 Microsoft .NET Framework Version 4.8.09037 Installed Version: Community Visual C++ 2022 00482-90000-00000-AA907 Microsoft Visual C++ 2022 ASP.NET and Web Tools 17.6.326.62524 ASP.NET and Web Tools Azure App Service Tools v3.0.0 17.6.326.62524 Azure App Service Tools v3.0.0 Azure Functions and Web Jobs Tools 17.6.326.62524 Azure Functions and Web Jobs Tools C# Tools 4.6.0-3.23259.8+c3cc1d0ceeab1a65da0217e403851a1e8a30086a C# components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used. Common Azure Tools 1.10 Provides common services for use by Azure Mobile Services and Microsoft Azure Tools. Cookiecutter 17.0.23087.1 Provides tools for finding, instantiating and customizing templates in cookiecutter format. GitExtensions 1.0 Git Extensions is a graphical user interface for Git that allows you to control Git without using the command-line GitHub Copilot 1.100.0.0 (v1.100.0.0@6ff082509) GitHub Copilot is an AI pair programmer that helps you write code faster and with less work. GitHub Copilot Agent 1.100.306 (v1.100.0) Linux Core Dump Debugging 1.0.9.33801 Enables debugging of Linux core dumps. Microsoft JVM Debugger 1.0 Provides support for connecting the Visual Studio debugger to JDWP compatible Java Virtual Machines NuGet Package Manager 6.6.0 NuGet Package Manager in Visual Studio. For more information about NuGet, visit https://docs.nuget.org/ NVIDIA CUDA 11.7 Wizards 11.7 Wizards to create new NVIDIA CUDA projects and source files. NVIDIA Nsight Visual Studio Edition 2022.2.0.22095 NVIDIA Nsight Visual Studio Edition provides tools for GPGPU and graphics development. Copyright © NVIDIA 2010 - 2022. •Direct3D® and DirectX® are registered trademarks of Microsoft Corporation in the United States and/or other countries. •Microsoft Detours is used under the Professional license (http://research.microsoft.com/en-us/projects/detours/). •Gardens Point Parser Generator Copyright 2005 Queensland University of Technology (QUT). All rights reserved. •Icons from Axialis Software used under the licensing terms found here: www.axialis.com •NLog Copyright © 2004-2006 Jaroslaw Kowalski ([email protected]) •zlib and libpng used under the zlib/libpnc license (http://opensource.org/licenses/Zlib) •Breakpad Copyright ©2006, Google Inc. All rights reserved. •The OpenGL Extension Wrangler Library Copyright ©2008-2016, Nigel Stewart ([email protected]), Copyright ©2002-2008, Milan Ikits ([email protected]), Copyright ©2002-2008, Marcelo E. Magallon ([email protected]), Copyright ©2002, Lev Povalahev. All rights reserved. 
•LIBSSH2 Copyright ©2004-2007 Sara Golemon ([email protected]), Copyright ©2005,2006 Mikhail Gusarov ([email protected]),Copyright ©2006-2007 The Written Word, Inc.,Copyright ©2007 Eli Fant ([email protected]),Copyright ©2009-2014 Daniel Stenberg., Copyright ©2008, 2009 Simon Josefsson. All rights reserved. •Protobuf Copyright ©2014, Google Inc. All rights reserved. •xxHASH Library Copyright ©2012-2014, Yann Collet. All rights reserved. •FMT Copyright ©2012 - 2016, Victor Zverovich •Font Awesome Copyright 2018 Fonticons, Inc. •ELF Definitions Copyright (c) 2010 Joseph Koshy, All rights reserved. Warning: This computer program is protected by copyright law and international treaties. Unauthorized reproduction or distribution of this program, or any portion of it, may result in severe civil and criminal penalties, and will be prosecuted to the maximum extent possible under the law. NVIDIA Nsight Visual Studio Edition - CUDA support 2022.2.0.22095 NVIDIA Nsight Visual Studio Edition - CUDA support provides tools for CUDA development and debugging. Python - Django support 17.0.23087.1 Provides templates and integration for the Django web framework. Python - Profiling support 17.0.23087.1 Profiling support for Python projects. Python with Pylance 17.0.23087.1 Provides IntelliSense, projects, templates, debugging, interactive windows, and other support for Python developers. Razor (ASP.NET Core) 17.6.0.2327201+a6a61fdfa748eaa65aab53dab583276e26af4a3e Provides languages services for ASP.NET Core Razor. SQL Server Data Tools 17.6.13.0 Microsoft SQL Server Data Tools Test Adapter for Boost.Test 1.0 Enables Visual Studio's testing tools with unit tests written for Boost.Test. The use terms and Third Party Notices are available in the extension installation directory. Test Adapter for Google Test 1.0 Enables Visual Studio's testing tools with unit tests written for Google Test. The use terms and Third Party Notices are available in the extension installation directory. TypeScript Tools 17.0.20329.2001 TypeScript Tools for Microsoft Visual Studio Visual Basic Tools 4.6.0-3.23259.8+c3cc1d0ceeab1a65da0217e403851a1e8a30086a Visual Basic components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used. Visual C++ for Cross Platform Mobile Development (Android) 17.0.33606.364 Visual C++ for Cross Platform Mobile Development (Android) Visual C++ for Linux Development 1.0.9.33801 Visual C++ for Linux Development Visual F# Tools 17.6.0-beta.23174.5+0207bea1afae48d9351ac26fb51afc8260de0a97 Microsoft Visual F# Tools Visual Studio IntelliCode 2.2 AI-assisted development for Visual Studio. - windows : - ONNX version (*e.g. 1.13*): - Python version: - GCC/Compiler version (if compiling from source): - CMake version: - Protobuf version: - Visual Studio version (if applicable):--> ONNX source git hash commit 2ac381c55397dffff327cc6efecf6f95a70f90a1 (HEAD, tag: v1.16.3, origin/rel-1.16.3) ### Reproduction instructions .\build.bat --use_cuda --cudnn_home "C:\Program Files\NVIDIA\CUDNN\v8.9" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7"
2hard
Title: [BUG] Unable to work with Autogluon Object Detection Body: **Bug Report Checklist** <!-- Please ensure at least one of the following to help the developers troubleshoot the problem: --> - [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install --> - [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred --> - [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked --> **Describe the bug** Unable to work with Autogluon in Kaggle env **Expected behavior** Code able to run without any error **To Reproduce** !pip install autogluon from autogluon.multimodal import MultiModalPredictor predictor = MultiModalPredictor(label=label_col).fit( train_data=train_data, time_limit=120 ) OSError Traceback (most recent call last) Cell In[10], line 1 ----> 1 from autogluon.multimodal import MultiModalPredictor 3 predictor = MultiModalPredictor(label=label_col).fit( 4 train_data=train_data, 5 time_limit=120 6 ) File /opt/conda/lib/python3.10/site-packages/autogluon/multimodal/__init__.py:6 3 except ImportError: 4 pass ----> 6 from . import constants, data, learners, models, optimization, predictor, problem_types, utils 7 from .predictor import MultiModalPredictor 8 from .utils import download File /opt/conda/lib/python3.10/site-packages/autogluon/multimodal/data/__init__.py:2 1 from . import collator, infer_types, randaug, utils ----> 2 from .datamodule import BaseDataModule 3 from .dataset import BaseDataset 4 from .dataset_mmlab import MultiImageMixDataset File /opt/conda/lib/python3.10/site-packages/autogluon/multimodal/data/datamodule.py:4 1 from typing import Dict, List, Optional, Union 3 import pandas as pd ----> 4 from lightning.pytorch import LightningDataModule 5 from torch.utils.data import DataLoader, Dataset 7 from ..constants import PREDICT, TEST, TRAIN, VALIDATE File /opt/conda/lib/python3.10/site-packages/lightning/__init__.py:25 23 from lightning.fabric.fabric import Fabric # noqa: E402 24 from lightning.fabric.utilities.seed import seed_everything # noqa: E402 ---> 25 from lightning.pytorch.callbacks import Callback # noqa: E402 26 from lightning.pytorch.core import LightningDataModule, LightningModule # noqa: E402 27 from lightning.pytorch.trainer import Trainer # noqa: E402 File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/__init__.py:26 23 _logger.propagate = False 25 from lightning.fabric.utilities.seed import seed_everything # noqa: E402 ---> 26 from lightning.pytorch.callbacks import Callback # noqa: E402 27 from lightning.pytorch.core import LightningDataModule, LightningModule # noqa: E402 28 from lightning.pytorch.trainer import Trainer # noqa: E402 File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/callbacks/__init__.py:14 1 # Copyright The Lightning AI team. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); (...) 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 
---> 14 from lightning.pytorch.callbacks.batch_size_finder import BatchSizeFinder 15 from lightning.pytorch.callbacks.callback import Callback 16 from lightning.pytorch.callbacks.checkpoint import Checkpoint File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/callbacks/batch_size_finder.py:24 21 from typing import Optional 23 import lightning.pytorch as pl ---> 24 from lightning.pytorch.callbacks.callback import Callback 25 from lightning.pytorch.tuner.batch_size_scaling import _scale_batch_size 26 from lightning.pytorch.utilities.exceptions import _TunerExitException, MisconfigurationException File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/callbacks/callback.py:22 19 from torch.optim import Optimizer 21 import lightning.pytorch as pl ---> 22 from lightning.pytorch.utilities.types import STEP_OUTPUT 25 class Callback: 26 r"""Abstract base class used to build new callbacks. 27 28 Subclass this class and override any of the relevant hooks 29 30 """ File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/utilities/types.py:40 38 from torch import Tensor 39 from torch.optim import Optimizer ---> 40 from torchmetrics import Metric 41 from typing_extensions import NotRequired, Required 43 from lightning.fabric.utilities.types import _TORCH_LRSCHEDULER, LRScheduler, ProcessGroup, ReduceLROnPlateau File /opt/conda/lib/python3.10/site-packages/torchmetrics/__init__.py:14 11 _PACKAGE_ROOT = os.path.dirname(__file__) 12 _PROJECT_ROOT = os.path.dirname(_PACKAGE_ROOT) ---> 14 from torchmetrics import functional # noqa: E402 15 from torchmetrics.aggregation import ( # noqa: E402 16 CatMetric, 17 MaxMetric, (...) 22 SumMetric, 23 ) 24 from torchmetrics.audio._deprecated import _PermutationInvariantTraining as PermutationInvariantTraining # noqa: E402 File /opt/conda/lib/python3.10/site-packages/torchmetrics/functional/__init__.py:14 1 # Copyright The Lightning team. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); (...) 12 # See the License for the specific language governing permissions and 13 # limitations under the License. ---> 14 from torchmetrics.functional.audio._deprecated import _permutation_invariant_training as permutation_invariant_training 15 from torchmetrics.functional.audio._deprecated import _pit_permutate as pit_permutate 16 from torchmetrics.functional.audio._deprecated import ( 17 _scale_invariant_signal_distortion_ratio as scale_invariant_signal_distortion_ratio, 18 ) File /opt/conda/lib/python3.10/site-packages/torchmetrics/functional/audio/__init__.py:14 1 # Copyright The Lightning team. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); (...) 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 
---> 14 from torchmetrics.functional.audio.pit import permutation_invariant_training, pit_permutate 15 from torchmetrics.functional.audio.sdr import ( 16 scale_invariant_signal_distortion_ratio, 17 signal_distortion_ratio, 18 source_aggregated_signal_distortion_ratio, 19 ) 20 from torchmetrics.functional.audio.snr import ( 21 complex_scale_invariant_signal_noise_ratio, 22 scale_invariant_signal_noise_ratio, 23 signal_noise_ratio, 24 ) File /opt/conda/lib/python3.10/site-packages/torchmetrics/functional/audio/pit.py:22 19 from torch import Tensor 20 from typing_extensions import Literal ---> 22 from torchmetrics.utilities import rank_zero_warn 23 from torchmetrics.utilities.imports import _SCIPY_AVAILABLE 25 # _ps_dict: cache of permutations 26 # it's necessary to cache it, otherwise it will consume a large amount of time File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/__init__.py:14 1 # Copyright The Lightning team. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); (...) 12 # See the License for the specific language governing permissions and 13 # limitations under the License. ---> 14 from torchmetrics.utilities.checks import check_forward_full_state_property 15 from torchmetrics.utilities.distributed import class_reduce, reduce 16 from torchmetrics.utilities.prints import rank_zero_debug, rank_zero_info, rank_zero_warn File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/checks.py:25 22 import torch 23 from torch import Tensor ---> 25 from torchmetrics.metric import Metric 26 from torchmetrics.utilities.data import select_topk, to_onehot 27 from torchmetrics.utilities.enums import DataType File /opt/conda/lib/python3.10/site-packages/torchmetrics/metric.py:30 27 from torch import Tensor 28 from torch.nn import Module ---> 30 from torchmetrics.utilities.data import ( 31 _flatten, 32 _squeeze_if_scalar, 33 dim_zero_cat, 34 dim_zero_max, 35 dim_zero_mean, 36 dim_zero_min, 37 dim_zero_sum, 38 ) 39 from torchmetrics.utilities.distributed import gather_all_tensors 40 from torchmetrics.utilities.exceptions import TorchMetricsUserError File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/data.py:22 19 from torch import Tensor 21 from torchmetrics.utilities.exceptions import TorchMetricsUserWarning ---> 22 from torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE 23 from torchmetrics.utilities.prints import rank_zero_warn 25 METRIC_EPS = 1e-6 File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py:50 48 _GAMMATONE_AVAILABEL: bool = package_available("gammatone") 49 _TORCHAUDIO_AVAILABEL: bool = package_available("torchaudio") ---> 50 _TORCHAUDIO_GREATER_EQUAL_0_10: Optional[bool] = compare_version("torchaudio", operator.ge, "0.10.0") 51 _SACREBLEU_AVAILABLE: bool = package_available("sacrebleu") 52 _REGEX_AVAILABLE: bool = package_available("regex") File /opt/conda/lib/python3.10/site-packages/lightning_utilities/core/imports.py:77, in compare_version(package, op, version, use_base_version) 68 """Compare package version with some requirements. 69 70 >>> compare_version("torch", operator.ge, "0.1") (...) 
74 75 """ 76 try: ---> 77 pkg = importlib.import_module(package) 78 except (ImportError, pkg_resources.DistributionNotFound): 79 return False File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package) 124 break 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) File /opt/conda/lib/python3.10/site-packages/torchaudio/__init__.py:1 ----> 1 from . import ( # noqa: F401 2 _extension, 3 compliance, 4 datasets, 5 functional, 6 io, 7 kaldi_io, 8 models, 9 pipelines, 10 sox_effects, 11 transforms, 12 utils, 13 ) 14 from ._backend.common import AudioMetaData # noqa 16 try: File /opt/conda/lib/python3.10/site-packages/torchaudio/_extension/__init__.py:45 43 _IS_ALIGN_AVAILABLE = False 44 if _IS_TORCHAUDIO_EXT_AVAILABLE: ---> 45 _load_lib("libtorchaudio") 47 import torchaudio.lib._torchaudio # noqa 49 _check_cuda_version() File /opt/conda/lib/python3.10/site-packages/torchaudio/_extension/utils.py:64, in _load_lib(lib) 62 if not path.exists(): 63 return False ---> 64 torch.ops.load_library(path) 65 torch.classes.load_library(path) 66 return True File /opt/conda/lib/python3.10/site-packages/torch/_ops.py:643, in _Ops.load_library(self, path) 638 path = _utils_internal.resolve_library_path(path) 639 with dl_open_guard(): 640 # Import the shared library into the process, thus running its 641 # static (global) initialization code in order to register custom 642 # operators with the JIT. --> 643 ctypes.CDLL(path) 644 self.loaded_libraries.add(path) File /opt/conda/lib/python3.10/ctypes/__init__.py:374, in CDLL.__init__(self, name, mode, handle, use_errno, use_last_error, winmode) 371 self._FuncPtr = _FuncPtr 373 if handle is None: --> 374 self._handle = _dlopen(self._name, mode) 375 else: 376 self._handle = handle OSError: /opt/conda/lib/python3.10/site-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZN3c10ltERKNS_6SymIntEi ```python INSTALLED VERSIONS ------------------ date : 2024-02-12 time : 18:42:30.368085 python : 3.10.13.final.0 OS : Linux OS-release : 5.15.133+ Version : #1 SMP Tue Dec 19 13:14:11 UTC 2023 machine : x86_64 processor : x86_64 num_cores : 4 cpu_ram_mb : 32110.140625 cuda version : None num_gpus : 0 gpu_ram_mb : [] avail_disk_size_mb : 19933 accelerate : 0.21.0 async-timeout : 4.0.3 autogluon : 1.0.0 autogluon.common : 1.0.0 autogluon.core : 1.0.0 autogluon.features : 1.0.0 autogluon.multimodal : 1.0.0 autogluon.tabular : 1.0.0 autogluon.timeseries : 1.0.0 boto3 : 1.26.100 catboost : 1.2.2 defusedxml : 0.7.1 evaluate : 0.4.1 fastai : 2.7.13 gluonts : 0.14.4 hyperopt : 0.2.7 imodels : None jinja2 : 3.1.2 joblib : 1.3.2 jsonschema : 4.17.3 lightgbm : 4.1.0 lightning : 2.0.9.post0 matplotlib : None mlforecast : 0.10.0 networkx : 3.2.1 nlpaug : 1.1.11 nltk : 3.8.1 nptyping : 2.4.1 numpy : 1.26.3 nvidia-ml-py3 : 7.352.0 omegaconf : 2.2.3 onnxruntime-gpu : None openmim : 0.3.9 orjson : 3.9.10 pandas : 2.1.4 Pillow : 10.2.0 psutil : 5.9.7 PyMuPDF : None pytesseract : 0.3.10 pytorch-lightning : 2.0.9.post0 pytorch-metric-learning: 1.7.3 ray : 2.6.3 requests : 2.31.0 scikit-image : 0.20.0 scikit-learn : 1.4.0 scikit-learn-intelex : 2024.1.0 scipy : 1.11.4 seqeval : 1.2.2 setuptools : 69.0.3 skl2onnx : None statsforecast : 1.4.0 statsmodels : 0.14.1 tabpfn : None tensorboard : 2.15.1 text-unidecode : 1.3 timm : 0.9.12 torch : 2.0.1 torchmetrics : 1.1.2 torchvision : 0.15.2 tqdm : 4.66.1 transformers : 4.31.0 utilsforecast : 0.0.10 vowpalwabbit : 9.9.0 xgboost : 2.0.3 </details>
1medium
Title: [Doc]: Multipage PDF: unclear which backend supports and which does not support attach_note() Body: ### Documentation Link https://matplotlib.org/stable/gallery/misc/multipage_pdf.html ### Problem The issue is in the first two paragraphs of the page. > This is a demo of creating a pdf file with several pages, as well as adding metadata and annotations to pdf files. > > If you want to use a multipage pdf file using LaTeX, you need to use from matplotlib.backends.backend_pgf import PdfPages. This version however does not support [attach_note](https://matplotlib.org/stable/api/backend_pdf_api.html#matplotlib.backends.backend_pdf.PdfPages.attach_note). On reading this, it is unclear whether "this" in the last sentence refers to the `pdf` backend (as suggested by the sentence being on that backend's page) or to `pgf` (as suggested by it being in the paragraph about `pgf`). Only after clicking on the hyperlinked `attach_note` do I notice that it is a documentation page for a specific backend (which isn't obvious, since I am sent to an anchor in the middle of the document with no header visible). From there, seeing that `pdf` has `attach_note()`, I can infer that `pgf` doesn't. That chain of reasoning is quite long for something that should have been clearly stated. ### Suggested improvement Changing "this" to "that" in the last quoted sentence would probably make it clearer that it refers to the backend *not* used in the example. Or perhaps a different change in wording.
0easy
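To make the distinction in the report above concrete, here is a small hedged sketch (based on the linked multipage-PDF example, not verified against every Matplotlib version): the `pdf` backend's `PdfPages` exposes `attach_note()`, while the `pgf` backend's `PdfPages`, the one needed for LaTeX text rendering, does not.

```python
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages   # has attach_note()
# from matplotlib.backends.backend_pgf import PdfPages  # LaTeX rendering, no attach_note()

with PdfPages("multipage.pdf") as pdf:
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3])
    pdf.attach_note("A note attached to this page")  # pdf backend only
    pdf.savefig(fig)
    plt.close(fig)
```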
Title: GitHub Actions on Ubuntu 18.04 fail to start because the image was removed Body: GitHub Actions jobs on Ubuntu 18.04 fail to start because the runner image was removed: https://github.blog/changelog/2022-08-09-github-actions-the-ubuntu-18-04-actions-runner-image-is-being-deprecated-and-will-be-removed-by-12-1-22/ Moving to a newer image might be a challenge because the tests fail there with SSLError: https://github.com/psf/requests/issues/5662
1medium
Title: Add EfficientLoFTR model Body: ### Model description EfficientLoFTR is an image-matching model that performs dense matching (in contrast to SuperPoint + SuperGlue). It is a variant of LoFTR that runs in real time (at most 40 ms for inference). The base model's performance is decent, and with the upcoming release of the [MatchAnything](https://github.com/zju3dv/MatchAnything) version of EfficientLoFTR, we should have an even better model. ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation https://github.com/zju3dv/EfficientLoFTR
1medium
Title: Why does SlimPruner use the WeightTrainerBasedDataCollector instead of the WeightDataCollector before compressing the model? Body:
1medium
Title: How to import evaluate_cy? Body:
```python
try:
    from torchreid.metrics.rank_cylib.rank_cy import evaluate_cy
    IS_CYTHON_AVAI = True
except ImportError:
    IS_CYTHON_AVAI = False
    warnings.warn(
        'Cython evaluation (very fast so highly recommended) is '
        'unavailable, now use python evaluation.'
    )
```
0easy
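For the torchreid question above, a hedged pointer (assuming the standard torchreid/deep-person-reid source layout, which is an assumption): `rank_cy` is a Cython extension that has to be compiled before it can be imported, typically by running `python setup.py build_ext --inplace` (or the provided Makefile, if present) inside `torchreid/metrics/rank_cylib/`. The snippet below only checks whether the extension is importable.

```python
# Quick availability check; if it fails, build the Cython extension in place
# (see the build command mentioned above) and re-run.
try:
    from torchreid.metrics.rank_cylib.rank_cy import evaluate_cy
    print("Cython evaluation is available:", evaluate_cy)
except ImportError as err:
    print("Cython extension not built yet:", err)
```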
Title: Widgets with return type annotation `List[LayerDataTuple]` don't get their layers added to the `Viewer` Body: ### 🐛 Bug Report In `0.5.0` we dropped support for python 3.8, which included some typing changes. Notably, the `list` type no longer needed importing `from typing import List`, but could be used directly. This led to a change in the types we register with `magicgui` - notably, we now use the builtin `list` type for `LayerDataTuple` [here](https://github.com/napari/napari/pull/6738/files#diff-0d09c2e8083dd5acfebfeffc8e34e4c44d781e246d736d6e7db79feae78194d6R160). Now, any magicgui widget that is return type annotated with the imported `List[LayerDataTuple]` no longer works i.e. the layers are not added to the viewer. ### 💡 Steps to Reproduce 1. Run the following script ```python import numpy as np import napari from magicgui import magic_factory from napari.types import LayerDataTuple from typing import List @magic_factory def layer_return( first_layer: 'napari.types.ImageData', # ) -> list[LayerDataTuple]: ) -> List[LayerDataTuple]: layer_tuple = (first_layer, {}, 'image') layer_tuple_list = [layer_tuple] return layer_tuple_list viewer = napari.Viewer() viewer.add_image(np.random.rand(20, 20)) viewer.window.add_dock_widget(layer_return()) napari.run() ``` 2. Click the `Run` button on the widget 3. Nothing happens. 4. Swap the uncommented return statement 5. Run the script 6. Click `Run` button on the widget 7. Layer gets added ### 💡 Expected Behavior I expected the layer to be added to the viewer regardless of whether the builtin `list` type is used or whether we import `from typing import List`. ### 🌎 Environment ``` napari: 0.5.0 Platform: macOS-10.16-x86_64-i386-64bit System: MacOS 14.5 Python: 3.10.14 (main, May 6 2024, 14:47:20) [Clang 14.0.6 ] Qt: 5.15.2 PyQt5: 5.15.10 NumPy: 1.26.4 SciPy: 1.14.0 Dask: 2024.7.1 VisPy: 0.14.3 magicgui: 0.8.3 superqt: 0.6.7 in-n-out: 0.2.1 app-model: 0.2.8 npe2: 0.7.6 OpenGL: - GL version: 2.1 INTEL-22.5.11 - MAX_TEXTURE_SIZE: 16384 - GL_MAX_3D_TEXTURE_SIZE: 2048 Screens: - screen 1: resolution 1440x900, scale 2.0 - screen 2: resolution 3840x2160, scale 1.0 Optional: - numba: 0.60.0 - triangle not installed Settings path: - /Users/ddoncilapop/Library/Application Support/napari/stardist_d7f2585946fc58f34534dbaf8ce99a60b9039489/settings.yaml Plugins: - napari: 0.5.0 (81 contributions) - napari-console: 0.0.9 (0 contributions) - napari-svg: 0.2.0 (2 contributions) - stardist-napari: 2022.12.6 (8 contributions) ``` ### 💡 Additional Context We can bandaid fix this by changing line #160 in [this file](https://github.com/napari/napari/blob/main/napari/types.py#L160) to ```python for type_ in (LayerDataTuple, list[LayerDataTuple], List[LayerDataTuple]): ``` But it's not clear that this should be the final solution - maybe we should be doing some disambiguating in magicgui? I also haven't checked whether other types are affected.
1medium
Title: bound_sympy() produces incorrect result for mod Body: ### 🐛 Describe the bug `bound_sympy(s0 - (s0 % 8))` produces an incorrect range of [-5, inf], when the correct answer is [0, inf] (s0 has a bound of [2, inf]. My guess is this happens because each term is evaluated individually, with s0 resolving to [2, inf], and -(s0 % 8) resolving to [-7, 0], combining for a range of [-5, inf]. Not sure what the efficient fix is. xref: https://fb.workplace.com/groups/pytorch.edge2.team/posts/1163036018285582/?comment_id=1163038158285368&reply_comment_id=1164412728147911 ``` from torch.utils._sympy.value_ranges import bound_sympy class Foo(torch.nn.Module): def forward(self, x): expr = x.shape[0] - (x.shape[0] % 8) # s0 - (s0 % 8) return torch.empty(expr) ep = export( Foo(), (torch.randn(13),), dynamic_shapes={"x": (Dim("dim", min=2),)}, ) val = [node for node in ep.graph.nodes][-2].meta["val"] expr = val.shape[0].node.expr var_to_ranges = val.shape[0].node.shape_env.var_to_range print(bound_sympy(val.shape[0], var_to_ranges)) # [-5, inf], should be [0, inf] ``` ### Versions . cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
1medium
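A tiny sketch of the term-wise interval arithmetic the report above suspects, in plain Python with no torch internals: bounding `s0` and `-(s0 % 8)` independently and summing the bounds reproduces the reported `-5`, because the correlation between the two terms is discarded.

```python
lo_s0, hi_s0 = 2, float("inf")      # declared range of s0
lo_neg_mod, hi_neg_mod = -7, 0      # range of -(s0 % 8) taken in isolation

# Interval addition ignores that both terms depend on the same s0:
print(lo_s0 + lo_neg_mod, hi_s0 + hi_neg_mod)   # -5 inf, matching the bug report
```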
Title: How to show all x-tick labels with seaborn.objects? Body: How do I make it so that it shows all x ticks from 0 to 9? ``` import pandas as pd import seaborn.objects as so from seaborn import axes_style diff_df = pd.DataFrame({'bin': [0,1,9,3,4,2,3,4,7,5,6,7,8,9], 'diff': [1,0,1,1,1,3,2,4,1,2,3,0,2,1]}) ( so.Plot(x='bin', y='diff', data=diff_df) .theme({**axes_style("whitegrid"), "grid.linestyle": ":"}) .add(so.Dots()) .add(so.Range(color='orange'), so.Est()) .add(so.Dot(color='orange'), so.Agg()) .add(so.Line(color='orange'), so.Agg()) .label( x="Image Similarity Bin", y="Difference", color=str.capitalize, ) ) ``` I tried to set xticks in .label, but it doesn't do anything. SO: https://stackoverflow.com/questions/77137092/how-to-show-all-x-tick-labels-with-seaborn-objects
1medium
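For the seaborn.objects question above, a hedged sketch of one way to force a tick at every integer bin (assuming seaborn >= 0.12, where `Plot.scale` and `so.Continuous().tick()` exist): tick placement is a scale property in the objects interface, not a label property, which is why `.label()` has no effect.

```python
import pandas as pd
import seaborn.objects as so

diff_df = pd.DataFrame({'bin': [0, 1, 9, 3, 4, 2, 3, 4, 7, 5, 6, 7, 8, 9],
                        'diff': [1, 0, 1, 1, 1, 3, 2, 4, 1, 2, 3, 0, 2, 1]})

(
    so.Plot(diff_df, x='bin', y='diff')
    .add(so.Dots())
    # Ask the x scale for a tick at every integer instead of the default locator.
    .scale(x=so.Continuous().tick(every=1))
    .show()
)
```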
Title: [Bug]: Error when using --precision full Body: ### Checklist - [X] The issue exists after disabling all extensions - [X] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [X] The issue exists in the current version of the webui - [X] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? A1111 report error on generation. ### Steps to reproduce the problem - Add `--precision full` to command line arg - Load a half precision checkpoint - Click generate - Observe error message ### What should have happened? Generation without error. ### What browsers do you use to access the UI ? Google Chrome ### Sysinfo [sysinfo-2024-05-16-19-47.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15340122/sysinfo-2024-05-16-19-47.json) ### Console logs ```Shell 0%| | 0/20 [00:00<?, ?it/s] *** Error completing request *** Arguments: ('task(1ztcgh7sjo0if7m)', <gradio.routes.Request object at 0x0000017608058AF0>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 
'AD', 1, ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[]), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[]), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[]), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[]), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 
'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[]), False, 1, False, False, 3, 0.1, 0, 0, '', 0, 25, False, False, False, 'BREAK', '-', 0.2, 10, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {} Traceback (most recent call last): File "D:\stable-diffusion-webui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "D:\stable-diffusion-webui\modules\call_queue.py", line 36, in f res = func(*args, **kwargs) File "D:\stable-diffusion-webui\modules\txt2img.py", line 109, in txt2img processed = processing.process_images(p) File "D:\stable-diffusion-webui\modules\processing.py", line 845, in process_images res = process_images_inner(p) File "D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs) File "D:\stable-diffusion-webui\modules\processing.py", line 981, in process_images_inner samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts) File "D:\stable-diffusion-webui\modules\processing.py", line 1328, in sample samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x)) File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in sample samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs)) File "D:\stable-diffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling return func() File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in <lambda> samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs)) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "D:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in)) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, 
**kwargs) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs) File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps return self.inner_model.apply_model(*args, **kwargs) File "D:\stable-diffusion-webui\modules\sd_models_xl.py", line 44, in apply_model return self.model(x, t, cond) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda> setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__ return self.__orig_func(*args, **kwargs) File "D:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward return self.diffusion_model( File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "D:\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward return original_forward(self, x, timesteps, context, *args, **kwargs) File "D:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 984, in forward emb = self.time_embed(t_emb) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 215, in forward input = module(input) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "D:\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 503, in network_Linear_forward return originals.Linear_forward(self, input) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half ``` ``` ### Additional information _No response_
1medium
Title: Permissions: applying IsAuthenticated directive to entire schema Body: Hi there, Instead of applying the `IsAuthenticated()` directive to individual fields, I'm looking to apply this to an entire schema. I'd wondered if this might work, but it doesn't: ```python authenticated_schema = gql.Schema( query=AuthenticatedQueries, mutation=AuthenticatedMutations, extensions=[SchemaDirectiveExtension], directives=[IsAuthenticated()] ) ``` I get a type error: `Expected type 'Iterable[StrawberryDirective]', got 'list[IsAuthenticated]' instead` And an actual error: `AttributeError: 'IsAuthenticated' object has no attribute 'arguments'`. Any thoughts?
1medium
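A heavily hedged sketch of one schema-wide alternative for the question above: instead of passing the field-level permission class into `directives=`, a custom `SchemaExtension` can reject unauthenticated operations before execution. It assumes a recent Strawberry version with generator-style lifecycle hooks and a context object that exposes the Django `request` (true for the Django integration's default context, but verify against your setup); it is not a documented strawberry-django recipe.

```python
import strawberry
from strawberry.extensions import SchemaExtension


class RequireAuthentication(SchemaExtension):
    def on_operation(self):
        ctx = self.execution_context.context
        # Context shape is an assumption: dict-like in some integrations,
        # an object with a .request attribute in the Django integration.
        request = ctx.get("request") if isinstance(ctx, dict) else getattr(ctx, "request", None)
        if request is None or not request.user.is_authenticated:
            raise Exception("Authentication required")
        yield


@strawberry.type
class Query:
    @strawberry.field
    def ping(self) -> str:
        return "pong"


authenticated_schema = strawberry.Schema(query=Query, extensions=[RequireAuthentication])
```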
Title: [Bug]: LinearSegmentedColormap returns different results for int/float when used as a function Body: ### Bug summary When invoking a `LinearSegmentedColormap`  object as a function, the output can differ based on whether you pass an integer or a float. For example, in the code snippet below, `cmap(1)` returns a completely different result than `cmap(1.0)`. While this behavior might be expected given how the colormap is implemented, it feels unintuitive. IMO the provided reprex demonstrates the issue clearly, but please let me know if more details are needed. ### Code for reproduction ```Python from matplotlib.colors import LinearSegmentedColormap cmap = LinearSegmentedColormap.from_list(name="reprex", colors=["red", "blue"]) print("cmap(0):", cmap(0)) print("cmap(1):", cmap(1)) print("cmap(1.0):", cmap(1.0)) ``` ### Actual outcome `cmap(0): (np.float64(1.0), np.float64(0.0), np.float64(0.0), np.float64(1.0))` (red) `cmap(1): (np.float64(0.996078431372549), np.float64(0.0), np.float64(0.00392156862745098), np.float64(1.0))` (red) `cmap(1.0): (np.float64(0.0), np.float64(0.0), np.float64(1.0), np.float64(1.0))` (blue) ### Expected outcome `cmap(0): (np.float64(1.0), np.float64(0.0), np.float64(0.0), np.float64(1.0))` (red) `cmap(1): (np.float64(0.0), np.float64(0.0), np.float64(1.0), np.float64(1.0))` (blue) `cmap(1.0): (np.float64(0.0), np.float64(0.0), np.float64(1.0), np.float64(1.0))` (blue) ### Additional information _No response_ ### Operating system MacOS Sonoma 14.6.1 ### Matplotlib Version 3.10.0 ### Matplotlib Backend module://positron_ipykernel.matplotlib_backend ### Python version Python 3.13.1 ### Jupyter version / ### Installation pip
1medium
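A small sketch related to the colormap report above, showing how to sidestep the ambiguity in user code: an integer argument is treated as an index into the lookup table (0..N-1, with N=256 by default), while a float is treated as a fraction of the range, so passing explicit floats in [0, 1] always gives the interpolated color you expect.

```python
import numpy as np
from matplotlib.colors import LinearSegmentedColormap

cmap = LinearSegmentedColormap.from_list(name="reprex", colors=["red", "blue"])

n = 5
for i in range(n):
    print(cmap(i / (n - 1)))          # float in [0, 1] -> interpolated color

print(cmap(np.linspace(0, 1, n)))     # equivalent vectorized form
```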
Title: [Bug]: I added a random-sampling operator and it passed my tests, but using it in a config file raises the error below. Why? Body: ### Before Reporting - [X] I have pulled the latest code of main branch to run again and the bug still existed. - [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template) ### Search before reporting - [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs. ### OS ubuntu ### Installation Method source ### Data-Juicer Version _No response_ ### Python Version 3.9 ### Describe the bug <img width="903" alt="Screenshot 2023-12-14 18:45:04" src="https://github.com/alibaba/data-juicer/assets/116297296/9158252c-308e-432d-879d-cee63deded36"> The above is the output of the test run. The config file then looks like this: <img width="416" alt="Screenshot 2023-12-14 18:45:57" src="https://github.com/alibaba/data-juicer/assets/116297296/649ec6d2-0cdb-446d-a8d6-0c715a86ee69"> An error is shown, and the output is: <img width="1022" alt="Screenshot 2023-12-14 18:46:25" src="https://github.com/alibaba/data-juicer/assets/116297296/e1657b5f-a7d2-49ec-af31-583c91a8337e"> ### To Reproduce
```python
import sys
import random  # newly added module

from jsonargparse.typing import PositiveFloat  # changed import

from data_juicer.utils.availability_utils import AvailabilityChecking
from data_juicer.utils.constant import Fields, StatsKeys
from data_juicer.utils.model_utils import get_model, prepare_model

from ..base_op import OPERATORS, Filter
from ..common import get_words_from_document


@OPERATORS.register_module('random_sample_filter')
class RandomSampleFilter(Filter):
    """Filter to randomly sample a percentage of samples."""

    def __init__(self,
                 tokenization: bool = False,
                 sample_percentage: PositiveFloat = 0.1,  # changed parameter
                 *args,
                 **kwargs):
        """
        Initialization method.

        :param sample_percentage: The percentage of samples to keep.
        :param args: extra args
        :param kwargs: extra args
        """
        super().__init__(*args, **kwargs)
        self.sample_percentage = sample_percentage
        self.model_key = None

    def compute_stats(self, sample):
        # token counts are no longer computed
        return sample

    def process(self, sample):
        # keep or drop the sample based on a random draw
        if random.uniform(0, 1) <= self.sample_percentage:
            return True
        else:
            return False
```
This is my random_sample_filter.py file. ### Configs _No response_ ### Logs _No response_ ### Screenshots _No response_ ### Additional _No response_
1medium
Title: JSONDecodeError: Expecting value: line 1 column 1 (char 0) while opening RequestQueue Body: ### Issue description Hi crawlee team. Thank you for the great work. I encounter the following error while I try to run the crawler for the second time: ``` Traceback (most recent call last): File "/home/sadaf/store_crawler/stores_crawler/d/dookcollection.py", line 401, in <module> asyncio.run(main()) File "/usr/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/home/sadaf/store_crawler/stores_crawler/d/dookcollection.py", line 377, in main request_queue = await RequestQueue.open(name="dookcollection") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storages/_request_queue.py", line 165, in open return await open_storage( ^^^^^^^^^^^^^^^^^^^ File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storages/_creation_management.py", line 170, in open_storage storage_info = await resource_collection_client.get_or_create(name=name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storage_clients/_memory/_request_queue_collection_client.py", line 35, in get_or_create resource_client = await get_or_create_inner( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storage_clients/_memory/_creation_management.py", line 143, in get_or_create_inner found = find_or_create_client_by_id_or_name_inner( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storage_clients/_memory/_creation_management.py", line 102, in find_or_create_client_by_id_or_name_inner storage_path = _determine_storage_path(resource_client_class, memory_storage_client, id, name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storage_clients/_memory/_creation_management.py", line 412, in _determine_storage_path metadata = json.load(metadata_file) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/json/__init__.py", line 293, in load return loads(fp.read(), ^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/json/__init__.py", line 346, in loads return _default_decoder.decode(s) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) ``` I removed the related directory in the storage/request_queues and re-ran it but I still have the same problem. I appreciate if you guys can help! Thanks! ### Package version crawlee==0.5.0
1medium
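A hedged diagnostic sketch for the crawlee report above (plain Python, not a crawlee API): the traceback ends in `json.load()` on a `metadata.json`, and "Expecting value: line 1 column 1 (char 0)" is the classic signature of an empty or truncated JSON file. Since deleting only the `dookcollection` directory did not help, scanning the whole storage tree for such files may locate the one that keeps re-triggering the error.

```python
import json
from pathlib import Path

# Adjust if your storage dir lives elsewhere (default is ./storage).
for meta in Path("storage").rglob("metadata.json"):
    try:
        json.loads(meta.read_text())
    except json.JSONDecodeError:
        print("corrupted or empty metadata file:", meta)
```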
Title: Is there a sample I can use to paint an image without cutting it? Body: I more or less understood the test, but is there any way to paint images (I trained a small model with references on how to do it) without having to lower the quality so much? If the image is 256, you can hardly see anything even if you raise the quality.
1medium
Title: Add type hints for mypy Body: ### Description I would like to propose adding type hints to cartopy so that mypy can be used with the project. #### Code to reproduce Using the following code (adapted from the [global map](https://scitools.org.uk/cartopy/docs/latest/gallery/lines_and_polygons/global_map.html) tutorial): ```python import matplotlib.pyplot as plt import cartopy.crs as ccrs fig = plt.figure(figsize=(10, 5)) ax = fig.add_subplot(1, 1, 1, projection=ccrs.Robinson()) ax.set_global() ax.stock_img() ax.coastlines() ax.plot(-0.08, 51.53, 'o', transform=ccrs.PlateCarree()) ax.plot([-0.08, 132], [51.53, 43.17], transform=ccrs.PlateCarree()) ax.plot([-0.08, 132], [51.53, 43.17], transform=ccrs.Geodetic()) plt.show() ``` If you run: ```console > mypy --strict plot.py ``` you'll see several errors. #### Traceback ``` test.py:2: error: Skipping analyzing "cartopy.crs": module is installed, but missing library stubs or py.typed marker [import-untyped] test.py:2: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports test.py:2: error: Skipping analyzing "cartopy": module is installed, but missing library stubs or py.typed marker [import-untyped] test.py:8: error: "Axes" has no attribute "set_global" [attr-defined] test.py:9: error: "Axes" has no attribute "stock_img" [attr-defined] test.py:10: error: "Axes" has no attribute "coastlines" [attr-defined] Found 5 errors in 1 file (checked 1 source file) ``` <details> <summary>Full environment definition</summary> <!-- fill in the following information as appropriate --> ### Operating system macOS and Linux ### Cartopy version 0.22.0 ### conda list N/A ### pip list ``` Package Version ----------------------------- ----------- absl-py 1.4.0 aenum 3.1.12 affine 2.1.0 aiohttp 3.8.4 aiosignal 1.2.0 alabaster 0.7.13 altgraph 0.17.2 antlr4-python3-runtime 4.9.3 anyio 4.0.0 appdirs 1.4.4 appnope 0.1.3 argon2-cffi 21.3.0 argon2-cffi-bindings 21.2.0 arrow 1.2.3 asttokens 2.4.0 astunparse 1.6.3 async-lru 1.0.3 async-timeout 4.0.2 attrs 23.1.0 Babel 2.12.1 backcall 0.2.0 beautifulsoup4 4.12.2 black 23.11.0 bleach 6.0.0 Bottleneck 1.3.7 build 1.0.3 cachetools 5.2.0 Cartopy 0.22.0 certifi 2023.5.7 cffi 1.15.1 cftime 1.0.3.4 charset-normalizer 3.1.0 click 8.1.3 click-plugins 1.1.1 cligj 0.7.2 cmocean 3.0.3 colorama 0.4.6 comm 0.1.3 contourpy 1.0.7 coverage 7.2.6 cycler 0.11.0 debugpy 1.6.7 decorator 5.1.1 defusedxml 0.7.1 docstring-parser 0.15 docutils 0.18.1 editables 0.3 efficientnet-pytorch 0.7.1 einops 0.7.0 et-xmlfile 1.0.1 executing 1.2.0 fastjsonschema 2.16.3 filelock 3.12.4 Fiona 1.9.4 flake8 6.1.0 fonttools 4.39.4 fqdn 1.5.1 frozenlist 1.3.1 fsspec 2023.1.0 future 0.18.2 GDAL 3.8.0 geocube 0.3.2 geopandas 0.11.1 gevent 23.7.0 google-auth 2.20.0 google-auth-oauthlib 0.5.2 greenlet 2.0.2 grpcio 1.52.0 h5py 3.8.0 hatch-jupyter-builder 0.8.3 hatchling 1.18.0 huggingface-hub 0.14.1 hydra-core 1.3.1 idna 3.4 imageio 2.30.0 imagesize 1.4.1 importlib-metadata 6.6.0 importlib-resources 5.12.0 iniconfig 2.0.0 ipykernel 6.23.1 ipython 8.14.0 ipywidgets 8.0.2 isoduration 20.11.0 isort 5.12.0 jaraco.classes 3.2.3 jedi 0.18.2 Jinja2 3.0.3 joblib 1.2.0 json5 0.9.14 jsonargparse 4.25.0 jsonpointer 2.0 jsonschema 4.17.3 jupyter_client 8.2.0 jupyter_core 5.3.0 jupyter-events 0.6.3 jupyter-lsp 2.2.0 jupyter_server 2.6.0 jupyter_server_terminals 0.4.4 jupyterlab 4.0.1 jupyterlab-pygments 0.2.2 jupyterlab_server 2.22.1 jupyterlab-widgets 3.0.3 keyring 23.13.1 kiwisolver 1.4.4 kornia 0.7.0 laspy 2.2.0 lazy_loader 0.1 lightly 1.4.18 
lightly-utils 0.0.2 lightning 2.1.2 lightning-utilities 0.8.0 macholib 1.15.2 Markdown 3.4.1 markdown-it-py 3.0.0 MarkupSafe 2.1.3 matplotlib 3.8.2 matplotlib-inline 0.1.6 mccabe 0.7.0 mdurl 0.1.2 mistune 2.0.5 more-itertools 9.1.0 mpmath 1.2.1 multidict 6.0.4 munch 2.5.0 mypy 1.7.0 mypy-extensions 1.0.0 nbclient 0.6.7 nbconvert 7.4.0 nbformat 5.8.0 nbmake 1.4.3 nbsphinx 0.8.8 nest-asyncio 1.5.6 netCDF4 1.6.2 networkx 3.1 notebook_shim 0.2.3 numexpr 2.8.4 numpy 1.26.2 oauthlib 3.2.1 odc-geo 0.1.2 omegaconf 2.3.0 openpyxl 3.1.2 overrides 7.3.1 packaging 23.1 pandas 2.1.3 pandocfilters 1.5.0 parso 0.8.3 pathspec 0.11.1 pexpect 4.8.0 pickleshare 0.7.5 Pillow 10.0.0 pip 21.2.4 pkginfo 1.9.6 planetary-computer 0.4.9 platformdirs 3.10.0 pluggy 1.0.0 pooch 1.7.0 pretrainedmodels 0.7.4 prometheus-client 0.17.0 prompt-toolkit 3.0.38 protobuf 3.20.3 psutil 5.9.5 ptyprocess 0.7.0 pure-eval 0.2.2 pyasn1 0.4.8 pyasn1-modules 0.2.8 pybind11 2.11.0 pycocotools 2.0.6 pycodestyle 2.11.0 pycparser 2.21 pydantic 1.10.9 pydocstyle 6.2.1 pyflakes 3.1.0 pygeos 0.14 Pygments 2.16.1 pyparsing 3.0.9 pyproj 3.2.1 pyproject_hooks 1.0.0 pyrsistent 0.19.3 pyshp 2.1.0 pystac 1.4.0 pystac-client 0.5.1 pytest 7.3.2 pytest-cov 4.0.0 python-dateutil 2.8.2 python-dotenv 0.19.2 python-json-logger 2.0.7 pytorch-lightning 2.0.0 pytorch-sphinx-theme 0.0.24 pytz 2023.3 pyupgrade 3.3.1 pyvista 0.42.3 PyWavelets 1.4.1 PyYAML 6.0 pyzmq 25.0.2 radiant-mlhub 0.3.1 rarfile 4.1 rasterio 1.3.8 readme-renderer 37.3 requests 2.31.0 requests-oauthlib 1.3.1 requests-toolbelt 1.0.0 rfc3339-validator 0.1.4 rfc3986 2.0.0 rfc3986-validator 0.1.1 rich 13.4.2 rioxarray 0.4.1.post0 rsa 4.9 Rtree 1.1.0 safetensors 0.3.1 scikit-image 0.20.0 scikit-learn 1.3.2 scipy 1.11.4 scooby 0.5.7 segmentation-models-pytorch 0.3.3 Send2Trash 1.8.0 setuptools 63.4.3 Shapely 1.8.1 six 1.16.0 sniffio 1.3.0 snowballstemmer 2.2.0 snuggs 1.4.1 soupsieve 2.4.1 Sphinx 5.3.0 sphinx-copybutton 0.2.12 sphinx_design 0.4.1 sphinx-rtd-theme 1.2.2 sphinxcontrib-applehelp 1.0.4 sphinxcontrib-devhelp 1.0.2 sphinxcontrib-htmlhelp 2.0.1 sphinxcontrib-jquery 4.1 sphinxcontrib-jsmath 1.0.1 sphinxcontrib-programoutput 0.15 sphinxcontrib-qthelp 1.0.3 sphinxcontrib-serializinghtml 1.1.9 stack-data 0.6.2 sympy 1.11.1 tensorboard 2.14.1 tensorboard-data-server 0.7.0 terminado 0.17.1 threadpoolctl 3.1.0 tifffile 2023.8.30 timm 0.9.2 tinycss2 1.2.1 tokenize-rt 4.2.1 torch 2.1.1 torchmetrics 1.2.0 torchvision 0.16.1 tornado 6.3.3 tqdm 4.66.1 traitlets 5.9.0 trove-classifiers 2023.8.7 twine 4.0.2 typeshed-client 2.1.0 typing_extensions 4.8.0 tzdata 2023.3 uri-template 1.2.0 urllib3 1.26.12 vermin 1.5.2 wcwidth 0.2.7 webcolors 1.11.1 webencodings 0.5.1 websocket-client 1.6.3 Werkzeug 3.0.0 wheel 0.41.2 widgetsnbextension 4.0.3 xarray 2023.7.0 yarl 1.9.2 zipfile-deflate64 0.2.0 zipp 3.17.0 zope.event 4.6 zope.interface 5.4.0 ``` </details>
2hard
Title: iterable gets refiltered by resolve_queryset but iterable might be a Promise Body: I'm trying to use a DataLoader, but I ran into a problem in DjangoConnectionField. According to the comment, does that mean I can't use a DataLoader here? My iterable here is a Promise. https://github.com/graphql-python/graphene-django/blob/0da06d4d54d3e73d43d88534259f55733ab7609b/graphene_django/fields.py#L176
1medium
Title: SyntaxWarning with Python 3.8 Body: Hello, a SyntaxWarning occurs when using SALib with Python 3.8 ``` \lib\site-packages\SALib\util\__init__.py:222: SyntaxWarning: "is" with a literal. Did you mean "=="? elif row['group'] is 'NA': \lib\site-packages\SALib\util\results.py:15: SyntaxWarning: "is not" with a literal. Did you mean "!="? return pd.DataFrame({k: v for k, v in self.items() if k is not 'names'}, ```
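A minimal sketch of the likely fix (an assumption based on the warning text: the string comparisons should use equality operators rather than identity):
```python
import pandas as pd

# The warnings flag identity comparisons against string literals; plain equality
# is what the code intends:
row = {'group': 'NA'}
if row['group'] == 'NA':          # instead of: row['group'] is 'NA'
    print("group is missing")

data = {'names': ['x1'], 'S1': [0.1]}
print(pd.DataFrame({k: v for k, v in data.items() if k != 'names'}))  # instead of: k is not 'names'
```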
0easy
Title: Docker build horovod-nvtabular fails Body: `pip` installing `cudf-cu11` results in an error: ``` #12 [ 7/37] RUN pip install --no-cache-dir cudf-cu11 dask-cudf-cu11 --extra-index-url=https://pypi.ngc.nvidia.com/ #12 1.247 Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com/ #12 2.285 Collecting cudf-cu11 #12 2.391 Downloading cudf_cu11-23.2.0.tar.gz (6.5 kB) #12 2.525 ERROR: Command errored out with exit status 1: #12 2.525 command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-lwkt7jbg/cudf-cu11/setup.py'"'"'; __file__='"'"'/tmp/pip-install-lwkt7jbg/cudf-cu11/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-lwkt7jbg/cudf-cu11/pip-egg-info #12 2.525 cwd: /tmp/pip-install-lwkt7jbg/cudf-cu11/ #12 2.525 Complete output (5 lines): #12 2.525 Traceback (most recent call last): #12 2.525 File "<string>", line 1, in <module> #12 2.525 File "/tmp/pip-install-lwkt7jbg/cudf-cu11/setup.py", line 137, in <module> #12 2.525 raise RuntimeError(open("ERROR.txt", "r").read()) #12 2.525 FileNotFoundError: [Errno 2] No such file or directory: 'ERROR.txt' #12 2.525 ---------------------------------------- ``` https://github.com/horovod/horovod/actions/runs/4192929617/jobs/7277248027
1medium
Title: timeline in interactive html view Body: Hi! I love the timeline feature; it really helps me understand the order of operations and wrap my mind around the execution flow. I get an error when using `timeline=True` from inside any of the output / open functions except for text output. I guess if this is the place to put a feature request, then this is it. I would be happy to submit a PR if the feature doesn't exist, but I'd want to know what level of effort you would estimate for this.
1medium
Title: [BUG] Passing empty dataframe from Data Transformer to Data Exporter clears/removes the columns (headers) Body: ### Mage version v0.9.74 ### Describe the bug We have a use case where in the Data Transformer, it maps an incoming list to a pandas dataframe. In some cases, the incoming list is empty resulting in an empty dataframe to be output, but we still want the `columns` part of the "dataframe object" to be part of the output. The resulting dataframe object is then passed to a Data Exporter, where we export the dataframe as a csv to S3. The issue is that sometimes, we have an empty pandas dataframe object being passed from the Transformer to the Exporter. When in the Exporter, dataframe part of the "pandas dataframe object" is empty (which is correct), but the `columns` part gets removed/cleared or replaced with an empty list (which is likely incorrect). We need the columns (headers) in the Exporter so that it can export the dataframe to a csv with just the headers (empty file with headers only). ### To reproduce 1. In a Data transformer, create an empty dataframe with columns: ``` @transformer def transform(data, *args, **kwargs): df = pd.DataFrame(columns=['A','B','C','D','E','F','G']) print(df) return df ``` Print result: ``` Empty DataFrame Columns: [A, B, C, D, E, F, G] Index: [] ``` 2. Output that dataframe from the Transformer and input that data into a Data Exporter: ``` @data_exporter def export_data_to_s3(data, **kwargs) -> None: print(data) ``` Print result: ``` Empty DataFrame Columns: [] Index: [] ``` ### Expected behavior Even if the dataframe is empty, the columns part of the dataframe object should still be passed on. In Data Exporter, when I print the incoming df, it should show this: ``` Empty DataFrame Columns: [A, B, C, D, E, F, G] Index: [] ``` ### Screenshots _No response_ ### Operating system v0.9.74 python 3.12.3 ### Additional context _No response_
1medium
Title: How to tweak query structure from relationships Body: I'm working on a simple CRUD REST API to learn GraphQL & SqlAlchemy. I have a Movie table ``` class Movie(Base, Serializer): __tablename__ = 'movie' id = Column(Integer, primary_key=True, index=True) movie = Column(String(50), nullable=False, unique=True) budget = Column(Float, nullable=False) genre_id = Column(Integer, ForeignKey('genre.id'), nullable=False) rating = Column(Float, nullable=False) studio_id = Column(Integer, ForeignKey('studio.id'), nullable=False) director_id = Column(Integer, ForeignKey('director.id'), nullable=False) director = relationship( Director, backref=backref('movies', uselist=True, cascade='delete,all') ) genre = relationship( Genre, backref=backref('movies', uselist=True, cascade='delete,all') ) studio = relationship( Studio, backref=backref('movies', uselist=True, cascade='delete,all') ) actors = relationship( Actor, secondary=movie_actor_association_table, backref='movies', uselist=True ) ``` that has its own properties (movie, budget, rating) but also 4 foreign keys (genre, studio, director, actors). my GraphQL types are simple ``` class Movie(SQLAlchemyObjectType): class Meta: model = MovieModel interfaces = (relay.Node,) class Director(SQLAlchemyObjectType): class Meta: model = DirectorModel interfaces = (relay.Node,) class Genre(SQLAlchemyObjectType): class Meta: model = GenreModel interfaces = (relay.Node,) class Studio(SQLAlchemyObjectType): class Meta: model = StudioModel interfaces = (relay.Node,) class Actor(SQLAlchemyObjectType): class Meta: model = ActorModel interfaces = (relay.Node,) ``` however, now when I query data, for the relationship tables, I have to replicate key, value pairs to get simple data ``` movies { edges { node { id movie budget genre { genre } rating studio { studio } director { director } actors { edges { node{ actor } } } } } } ``` i.e. can I avoid using genre {genre}, studio {studio}, etc. and just retrieve genre directly inside the movie? **bonus question**: adding filters to these relationships doesn't work I have a movie filter ``` class MovieFilter(FilterSet): class Meta: model = MovieModel fields = { 'id': ['eq'], 'movie': ['eq', 'ilike'], 'rating': ['eq', 'gt', 'gte'] } ``` that I can use like so ``` class Query(graphene.ObjectType): node = relay.Node.Field() movies = FilterableConnectionField(Movie.connection, filters=MovieFilter()) ``` to have filtering available for my `movie` table. However, the filters only work for the fields defined in the `movie` table itself, i.e. `movie name`, `rating`, `budget`. Does anyone know how I can use `graphene-sqlalchemy-filter` to filter for all fields (director/actor/genre/studio)? It seems to me that GraphQL doesn't handle relationships all that well.
1medium
Title: [Migrated] How to update app without downtime? Body: Originally from: https://github.com/Miserlou/Zappa/issues/2103 by [xncbf](https://github.com/xncbf) In the case of AWS Elastic Beanstalk, deployment can be done without downtime through environment replication and URL swapping. Is it possible to do something similar with Zappa?
1medium
Title: Migrate to multiple databases simultaneously Body: Greetings. I am working on a project and I need to migrate to several databases simultaneously. For example, I have bound 2 databases in SQLALCHEMY_BINDS like this: **app.config['SQLALCHEMY_BINDS'] = { 'bobkov1': 'postgresql://postgres:zabil2012@localhost:5431/bobkov1', 'bobkov' : 'postgresql://postgres:zabil2012@localhost:5431/bobkov' }** And now I want to migrate the models to both of these databases. I tried to do it like this: **class User(BaseModel, db.Model): __tablename__ = 'user' __bind_key__ = {'bobkov','bobkov1'} id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(64), index=True, unique=True) email = db.Column(db.String(120), index=True, unique=True) password_hash = db.Column(db.String(128)) posts = db.relationship('Post', backref='author', lazy='dynamic') class Post(db.Model): __tablename__ = 'post' __bind_key__ = {'bobkov','bobkov1'} id = db.Column(db.Integer, primary_key=True) body = db.Column(db.String(140)) user_id = db.Column(db.Integer, db.ForeignKey('user.id'))** When I run this code, the models migrate only to the main database, which is defined in SQLALCHEMY_DATABASE_URI. Please help me figure out how to configure this.
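For context, a minimal sketch of how bind keys are usually declared (assumptions: `__bind_key__` takes a single bind name as a string rather than a set, and Flask-Migrate's multi-database support is enabled with the `--multidb` template; the bind names reuse the ones from the snippet above):
```python
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class User(db.Model):
    __tablename__ = 'user'
    __bind_key__ = 'bobkov'          # one bind name per model, as a string
    id = db.Column(db.Integer, primary_key=True)

# Hypothetical workflow with Flask-Migrate's multidb template, which generates
# migrations for every bind listed in SQLALCHEMY_BINDS:
#   flask db init --multidb
#   flask db migrate
#   flask db upgrade
```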
1medium
Title: Image uploading and updating is not working with django-filer Body: I have integrated django-jet for all its beautiful design and customized functionality. I'm also using django-filer for file and image uploading, but I'm facing this issue when using django-filer with django-jet: I'm unable to change an image after uploading it for the first time, and the image upload popup does not open. Simply put, I can select an image for upload the first time, but after that I cannot update it. Please see the screenshot below. ![screenshot](https://cloud.githubusercontent.com/assets/6413205/26345692/4e4abc7a-3fc1-11e7-9c83-d4a9f660cbb9.png) Has anybody encountered the same problem? Help me.
1medium
Title: pymorphy2 0.9.1 is released Body: Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first. **What problem are we trying to solve?**: Current `pymorphy2` requirement [is obsolete](https://github.com/deepmipt/DeepPavlov/blob/0.12.1/requirements.txt#L11) in DeepPavlov. `pymorphy2 0.9.1` [was released](https://github.com/kmike/pymorphy2/releases/tag/0.9.1). See also: https://github.com/kmike/pymorphy2/issues/125, https://github.com/kmike/pymorphy2/issues/133. **How can we solve it?**: ``` pymorphy2==0.9.1 ````
0easy
Title: setup>functions>conditional measurement stops working all the time Body: ## Mycodo Issue Report: - Specific Mycodo Version: 6.4.5 #### Problem Description I set up a conditional measurement control to turn on a relay. The conditional control stops working after some time; it sometimes works for hours. The only way to fix it is to change some parameters in the conditional measurement control and save the changes. - What were you trying to do: using analog sensors to turn a relay on and off depending on the voltage from the sensors.
1medium
Title: Unhandled Exception (495ff691e) Body: Autosploit version: `3.0` OS information: `Linux-4.15.0-45-generic-x86_64-with-Ubuntu-18.04-bionic` Running context: `autosploit.py` Error message: `global name 'Except' is not defined` Error traceback: ``` Traceback (most recent call): File "/home/peerles/源代码/Autosploit/autosploit/main.py", line 113, in main loaded_exploits = load_exploits(EXPLOIT_FILES_PATH) File "/home/peerles/源代码/Autosploit/lib/jsonize.py", line 61, in load_exploits except Except: NameError: global name 'Except' is not defined ``` Metasploit launched: `False`
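A minimal sketch of the likely fix (assumption: the handler in lib/jsonize.py meant to catch Python's built-in Exception class, since `Except` is not defined anywhere):
```python
# Sketch of load_exploits(): 'Except' raises a NameError the moment the handler
# runs; catching the built-in Exception (or a narrower error type) avoids that.
def load_exploits(path):
    try:
        with open(path) as f:          # hypothetical body; the real function reads exploit files
            return f.read().splitlines()
    except Exception:                  # instead of: except Except:
        return []
```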
1medium
Title: PolyFit is not robust to missing data Body: ```python so.Plot([1, 2, 3, None, 4], [1, 2, 3, 4, 5]).add(so.Line(), so.PolyFit()) ``` <details><summary>Traceback</summary> ```python-traceback --------------------------------------------------------------------------- LinAlgError Traceback (most recent call last) File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/IPython/core/formatters.py:343, in BaseFormatter.__call__(self, obj) 341 method = get_real_method(obj, self.print_method) 342 if method is not None: --> 343 return method() 344 return None 345 else: File ~/code/seaborn/seaborn/_core/plot.py:265, in Plot._repr_png_(self) 263 def _repr_png_(self) -> tuple[bytes, dict[str, float]]: --> 265 return self.plot()._repr_png_() File ~/code/seaborn/seaborn/_core/plot.py:804, in Plot.plot(self, pyplot) 800 """ 801 Compile the plot spec and return the Plotter object. 802 """ 803 with theme_context(self._theme_with_defaults()): --> 804 return self._plot(pyplot) File ~/code/seaborn/seaborn/_core/plot.py:822, in Plot._plot(self, pyplot) 819 plotter._setup_scales(self, common, layers, coord_vars) 821 # Apply statistical transform(s) --> 822 plotter._compute_stats(self, layers) 824 # Process scale spec for semantic variables and coordinates computed by stat 825 plotter._setup_scales(self, common, layers) File ~/code/seaborn/seaborn/_core/plot.py:1110, in Plotter._compute_stats(self, spec, layers) 1108 grouper = grouping_vars 1109 groupby = GroupBy(grouper) -> 1110 res = stat(df, groupby, orient, scales) 1112 if pair_vars: 1113 data.frames[coord_vars] = res File ~/code/seaborn/seaborn/_stats/regression.py:41, in PolyFit.__call__(self, data, groupby, orient, scales) 39 def __call__(self, data, groupby, orient, scales): ---> 41 return groupby.apply(data, self._fit_predict) File ~/code/seaborn/seaborn/_core/groupby.py:109, in GroupBy.apply(self, data, func, *args, **kwargs) 106 grouper, groups = self._get_groups(data) 108 if not grouper: --> 109 return self._reorder_columns(func(data, *args, **kwargs), data) 111 parts = {} 112 for key, part_df in data.groupby(grouper, sort=False): File ~/code/seaborn/seaborn/_stats/regression.py:30, in PolyFit._fit_predict(self, data) 28 xx = yy = [] 29 else: ---> 30 p = np.polyfit(x, y, self.order) 31 xx = np.linspace(x.min(), x.max(), self.gridsize) 32 yy = np.polyval(p, xx) File <__array_function__ internals>:180, in polyfit(*args, **kwargs) File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/numpy/lib/polynomial.py:668, in polyfit(x, y, deg, rcond, full, w, cov) 666 scale = NX.sqrt((lhs*lhs).sum(axis=0)) 667 lhs /= scale --> 668 c, resids, rank, s = lstsq(lhs, rhs, rcond) 669 c = (c.T/scale).T # broadcast scale coefficients 671 # warn on rank reduction, which indicates an ill conditioned matrix File <__array_function__ internals>:180, in lstsq(*args, **kwargs) File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/numpy/linalg/linalg.py:2300, in lstsq(a, b, rcond) 2297 if n_rhs == 0: 2298 # lapack can't handle n_rhs = 0 - so allocate the array one larger in that axis 2299 b = zeros(b.shape[:-2] + (m, n_rhs + 1), dtype=b.dtype) -> 2300 x, resids, rank, s = gufunc(a, b, rcond, signature=signature, extobj=extobj) 2301 if m == 0: 2302 x[...] 
= 0 File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/numpy/linalg/linalg.py:101, in _raise_linalgerror_lstsq(err, flag) 100 def _raise_linalgerror_lstsq(err, flag): --> 101 raise LinAlgError("SVD did not converge in Linear Least Squares") LinAlgError: SVD did not converge in Linear Least Squares ``` </details>
1medium
Title: Is it possible to train chatterbot on memes? Body: I couldn't find anything in a quick Google search, so I thought I'd ask. I was wondering if I can train chatterbot on a CSV with a meme in the message and response fields. I'm new to this whole machine learning thing, so sorry if it's a dumb question. Thank you!
1medium
Title: LOCI fails on MacOS with Python 2.7 (caused by np.count_nonzero) Body: It is noted running **LOCI** model on **MacOS** with **Python 2.7** may fail. One potential cause is the following code, as np.count_nonzero returns **int** instead of **array**. I am currently investigating how to fix it. Please stay tuned. ``` def _get_alpha_n(self, dist_matrix, indices, r): """Computes the alpha neighbourhood points. Parameters ---------- dist_matrix : array-like, shape (n_samples, n_features) The distance matrix w.r.t. to the training samples. indices : int Subsetting index r : int Neighbourhood radius Returns ------- alpha_n : array, shape (n_alpha, ) Returns the alpha neighbourhood points. """ if type(indices) is int: alpha_n = np.count_nonzero( dist_matrix[indices, :] < (r * self._alpha)) return alpha_n else: alpha_n = np.count_nonzero( dist_matrix[indices, :] < (r * self._alpha), axis=1) return alpha_n ``` The error message looks like below: > (test27) bash-3.2$ python loci_example.py > /anaconda2/envs/test27/lib/python2.7/site-packages/pyod/models/loci.py:199: RuntimeWarning: divide by zero encountered in double_scalars > outlier_scores[p_ix] = mdef/sigma_mdef > /Users/zhaoy9/.local/lib/python2.7/site-packages/numpy/core/_methods.py:101: RuntimeWarning: invalid value encountered in subtract > x = asanyarray(arr - arrmean) > On Training Data: > Traceback (most recent call last): > File "loci_example.py", line 133, in <module> > evaluate_print(clf_name, y_train, y_train_scores) > File "/anaconda2/envs/test27/lib/python2.7/site-packages/pyod/utils/data.py", line 159, in evaluate_print > roc=np.round(roc_auc_score(y, y_pred), decimals=4), > File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/ranking.py", line 356, in roc_auc_score > sample_weight=sample_weight) > File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/base.py", line 77, in _average_binary_score > return binary_metric(y_true, y_score, sample_weight=sample_weight) > File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/ranking.py", line 328, in _binary_roc_auc_score > sample_weight=sample_weight) > File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/ranking.py", line 618, in roc_curve > y_true, y_score, pos_label=pos_label, sample_weight=sample_weight) > File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/ranking.py", line 403, in _binary_clf_curve > assert_all_finite(y_score) > File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/utils/validation.py", line 68, in assert_all_finite > _assert_all_finite(X.data if sp.issparse(X) else X, allow_nan) > File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/utils/validation.py", line 56, in _assert_all_finite > raise ValueError(msg_err.format(type_err, X.dtype)) > ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
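A small workaround sketch (assumption: on the NumPy version shipped with that Python 2.7 environment, `np.count_nonzero` does not accept an `axis` argument, so the row-wise count can be taken by summing the boolean mask instead):
```python
import numpy as np

def count_within_radius(dist_matrix, indices, r, alpha):
    # Hypothetical helper mirroring _get_alpha_n: summing a boolean mask works
    # even where np.count_nonzero has no axis= keyword.
    mask = dist_matrix[indices, :] < (r * alpha)
    return mask.sum() if np.isscalar(indices) else mask.sum(axis=1)
```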
1medium
Title: mode on `axis=1` Body: The `mode` method in a `dask` `DataFrame` does not allow for the argument `axis=1`. It would be great to have since it seems that in `pandas`, that operation is very slow and seems straightforward to parallelize. I would like to be able to do this in dask. ``` import pandas as pd import numpy as np import dask.dataframe as dd np.random.seed(0) N_ROWS = 1_000 df = pd.DataFrame({'a':np.random.randint(0, 100, N_ROWS), 'b':np.random.randint(0, 100, N_ROWS), 'c':np.random.randint(0, 100, N_ROWS)}) df['d'] = df['a'] #ensure mode is column 'a', unless b=c, then there are two modes df.mode(axis=1) ``` For reference, in pandas with `N_ROWS = 100_000`, the mode operation takes 20 seconds, and the time seems to grow linearly with number of observations.
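A possible workaround sketch while the method is missing (assumption: because `mode(axis=1)` is purely row-wise, pandas' implementation can be applied per partition with `map_partitions`, reusing the `df` built above; only the first mode per row is kept so every partition returns the same single-column shape):
```python
import dask.dataframe as dd

ddf = dd.from_pandas(df, npartitions=8)

def first_row_mode(pdf):
    # pandas' row-wise mode, keeping only the first (smallest) mode per row
    return pdf.mode(axis=1).iloc[:, 0].astype("float64").rename("mode")

row_modes = ddf.map_partitions(first_row_mode, meta=("mode", "float64"))
print(row_modes.compute().head())
```
This is only a sketch: rows with several tied modes would need extra handling if the full mode frame is required.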
1medium
Title: About RNN Body: I wrote an RNN program that doesn't use nn.Module so that I could see the structure, but I found a problem. The code and datasets are here: https://github.com/VeritasXu/RNN Can you run it and look at the structure? I think my program is correct, but the input type passed to the add_graph function results in a strange structure. Could you help me?
1medium
Title: It does not support mobile links [BUG] - Your Error Here Body: # Read Below!!! If this doesn't fix your issue delete these two lines **You may need to install chromedriver for your machine globally. Download it [here](https://sites.google.com/a/chromium.org/chromedriver/) and add it to your path.** **Describe the bug** A clear and concise description of what the bug is. **The buggy code** Please insert the code that is throwing errors or is giving you weird unexpected results. ``` # Code Goes Here ``` **Expected behavior** A clear and concise description of what you expected to happen. **Error Trace (if any)** Put the error trace below if there's any error thrown. ``` # Error Trace Here ``` **Desktop (please complete the following information):** - OS: [e.g. Windows 10] - TikTokApi Version [e.g. 3.3.1] - if out of date upgrade before posting an issue **Additional context** Add any other context about the problem here.
1medium
Title: Objects accept node parameter for choosing extra node Body: Objects accept node parameter for choosing extra node
1medium
Title: Gfpgan Not working on colab Body: ![Uploading Screenshot_20240321_084829.jpg…]() I regularly use GFPGAN on Colab to upscale my AI-generated images, but for the last two weeks I have been facing a problem: the images are not upscaled. I have tried many times to solve it but couldn't. Please check and correct this. Please help.
1medium
Title: Should the scale matrix be initialized in diagonal form? Body: Hi, Phil: I noticed the LayerScale part in `CaiT`. In the original paper the scale matrix is in diagonal form `(b,d,d)`, but in this implementation it is just initialized as a vector (maybe it can broadcast afterwards, but would it be better to initialize it as a diagonal matrix?) https://github.com/lucidrains/vit-pytorch/blob/3f754956fbfb1f97ae4f1e244a7ecb16eab79296/vit_pytorch/cait.py#L41 Best,
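For what it's worth, a small sketch showing that the per-channel vector acts like the paper's diagonal matrix (the shapes and values below are made up for illustration):
```python
import torch

b, n, d = 2, 4, 8
x = torch.randn(b, n, d)
gamma = torch.full((d,), 1e-4)                # per-channel scale vector, as in the implementation

out_vector = x * gamma                        # broadcast elementwise scaling
out_diag = x @ torch.diag(gamma)              # explicit diagonal-matrix form from the paper

print(torch.allclose(out_vector, out_diag))   # True: same result without materializing the d x d matrix
```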
1medium
Title: Add support for active session timeout in Airflow Web UI Body: ### Description Currently, Airflow only support inactive session timeout via the `session_lifetime_minutes` config option. This handles session expiration after a period of inactivity, which is great - but it doesn't cover cases where a session should expire regardless of activity (i.e, an active session timeout). This is a common requirement in environments with stricter security/compliance policies (e.g, session must expire after x hours, even if user is active) ### Use case/motivation Introduce a new configuration option (e.g, `session_max_lifetime_minutes`) that defines the maximum duration a session can remain valid from the time of login, regardless of user activity. This feature will help admins better enforce time-based access control. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
1medium
Title: Error when trying to get the evolution chain Body: So I'm trying to get the evolution chain for a Pokémon using this link: https://pokeapi.co/api/v2/pokemon-species/2 But it keeps giving me a KeyError. I'm using discord.py to make this into a command, by the way.
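For reference, a minimal sketch of how the evolution chain is typically reached from that endpoint (assumption: the KeyError comes from looking up the wrong key, since the species payload only holds a URL under `evolution_chain` that has to be fetched separately):
```python
import requests

species = requests.get("https://pokeapi.co/api/v2/pokemon-species/2").json()
chain_url = species["evolution_chain"]["url"]      # the species response only links to the chain
chain = requests.get(chain_url).json()

# Walk the chain to list the species names
node = chain["chain"]
while node:
    print(node["species"]["name"])
    node = node["evolves_to"][0] if node["evolves_to"] else None
```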
1medium
Title: Introduction of https://www.conventionalcommits.org/ for pull request titles? Body: I would consider it useful to introduce https://www.conventionalcommits.org/ at least at the pull request title level. We could merely recommend it, or check it directly with e.g. the following GitHub Action: https://github.com/marketplace/actions/conventional-commit-in-pull-requests I think the advantages are obvious: a better commit history on main would make things like release notes easier for us. A next step would probably be: * Define "commit types": do we need any beyond the predefined ones? * "Scopes": do we need any, and what could they be? What do you think about this?
1medium
Title: Feature request: Ability to import uvicorn in django to enable websocket support Body: ### Checklist - [X] There are no similar issues or pull requests for this yet. - [ ] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive. ### Is your feature related to a problem? Please describe. When we do `python manage.py runserver` we have a line in our manage.py file `import daphne.server` which enables websocket support with runserver. If we could do the same thing with uvicorn that would let us get rid of daphne entirely. ### Describe the solution you would like. websocket support for django runserver ### Describe alternatives you considered * continue using daphne for runserver (downside: extra dependency) * use uvicorn with autoreload feature. (downside: devs prefer using runserver) ### Additional context _No response_
1medium
Title: Loss is constant Body: I'm using CLIP to train on my custom dataset with the following params: Dataset size: 50k image-text pairs Batch size: 128 Image size: 224 GPUs: 1 Epochs: 500 It's been running for a while now, I'm on my 15th epoch, and the loss hasn't changed at all. It isn't a constant number, but it's constantly at 4.8xxx. Should I be concerned? I'm not sure why this is happening. ![image](https://user-images.githubusercontent.com/28048963/133154185-01bdd63f-b3bc-460b-a583-21f5f9616a02.png)
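One way to read that number (an observation, not a diagnosis; it assumes the standard contrastive cross-entropy loss over the batch): with a batch size of 128, a model that scores every pairing equally gets a loss of ln(128), so a plateau at 4.8xxx is consistent with near-uniform predictions.
```python
import math

# Cross-entropy of a uniform guess over a 128-sample contrastive batch
print(math.log(128))   # about 4.852, close to the plateau reported above
```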
1medium
Title: mAP and AP50 Body: If my dataset has only one class, shouldn't the reported mAP and AP50 be roughly the same? Why is mAP only 0.3 while AP50 is 0.7? How can I change the reported metrics if I want to output other ones, such as precision, recall, or some custom metrics?
1medium
Title: You're accessing the development server over HTTPS, but it only supports HTTP. Body: "You're accessing the development server over HTTPS, but it only supports HTTP." This error always shows up while accessing a dj-rest-auth view.
1medium
Title: Testing: add mypy and pylint to the pre-commit Body: A lot of lines to fix.
1medium
Title: Yahoo Finance Options tests raises ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' Body: Hello, some Yahoo Finance Options tests raises ``` ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ``` I can see this exception using ``` $ nosetests -s -v ====================================================================== ERROR: test_get_all_data (test_data.TestYahooOptions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates expiry_dates = self._expiry_dates AttributeError: 'Options' object has no attribute '_expiry_dates' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 358, in test_get_all_data data = self.aapl.get_all_data(put=True) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1197, in get_all_data expiry_dates = self.expiry_dates File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates expiry_dates, _ = self._get_expiry_dates_and_links() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ====================================================================== ERROR: test_get_all_data_calls_only (test_data.TestYahooOptions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates expiry_dates = self._expiry_dates AttributeError: 'Options' object has no attribute '_expiry_dates' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 372, in test_get_all_data_calls_only data = self.aapl.get_all_data(call=True, put=False) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1197, in get_all_data expiry_dates = self.expiry_dates File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates expiry_dates, _ = self._get_expiry_dates_and_links() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = 
_strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ====================================================================== ERROR: test_get_call_data (test_data.TestYahooOptions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates expiry_dates = self._expiry_dates AttributeError: 'Options' object has no attribute '_expiry_dates' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 337, in test_get_call_data calls = self.aapl.get_call_data(expiry=self.expiry) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 901, in get_call_data expiry = self._try_parse_dates(year, month, expiry) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1061, in _try_parse_dates expiry = [self._validate_expiry(expiry)] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1085, in _validate_expiry expiry_dates = self.expiry_dates File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates expiry_dates, _ = self._get_expiry_dates_and_links() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ====================================================================== ERROR: test_get_data_with_list (test_data.TestYahooOptions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates expiry_dates = self._expiry_dates AttributeError: 'Options' object has no attribute '_expiry_dates' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 365, in test_get_data_with_list data = self.aapl.get_call_data(expiry=self.aapl.expiry_dates) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates expiry_dates, _ = self._get_expiry_dates_and_links() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File 
"//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ====================================================================== ERROR: test_get_expiry_dates (test_data.TestYahooOptions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 351, in test_get_expiry_dates dates, _ = self.aapl._get_expiry_dates_and_links() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ====================================================================== ERROR: test_get_near_stock_price (test_data.TestYahooOptions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates expiry_dates = self._expiry_dates AttributeError: 'Options' object has no attribute '_expiry_dates' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 330, in test_get_near_stock_price expiry=self.expiry) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1005, in get_near_stock_price expiry = self._try_parse_dates(year, month, expiry) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1061, in _try_parse_dates expiry = [self._validate_expiry(expiry)] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1085, in _validate_expiry expiry_dates = self.expiry_dates File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates expiry_dates, _ = self._get_expiry_dates_and_links() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ====================================================================== ERROR: test_get_options_data (test_data.TestYahooOptions) 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates expiry_dates = self._expiry_dates AttributeError: 'Options' object has no attribute '_expiry_dates' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 322, in test_get_options_data options = self.aapl.get_options_data(expiry=self.expiry) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 750, in get_options_data self.get_call_data)]).sortlevel() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 749, in <listcomp> for f in (self.get_put_data, File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 964, in get_put_data expiry = self._try_parse_dates(year, month, expiry) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1061, in _try_parse_dates expiry = [self._validate_expiry(expiry)] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1085, in _validate_expiry expiry_dates = self.expiry_dates File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates expiry_dates, _ = self._get_expiry_dates_and_links() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ====================================================================== ERROR: test_get_put_data (test_data.TestYahooOptions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates expiry_dates = self._expiry_dates AttributeError: 'Options' object has no attribute '_expiry_dates' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 344, in test_get_put_data puts = self.aapl.get_put_data(expiry=self.expiry) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 964, in get_put_data expiry = self._try_parse_dates(year, month, expiry) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1061, in _try_parse_dates expiry = [self._validate_expiry(expiry)] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1085, in _validate_expiry expiry_dates = self.expiry_dates File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates expiry_dates, _ = self._get_expiry_dates_and_links() File 
"/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ====================================================================== ERROR: test_get_underlying_price (test_data.TestYahooOptions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates expiry_dates = self._expiry_dates AttributeError: 'Options' object has no attribute '_expiry_dates' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 381, in test_get_underlying_price url = options_object._yahoo_url_from_expiry(options_object.expiry_dates[0]) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates expiry_dates, _ = self._get_expiry_dates_and_links() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'September 18, 2015' does not match format '%B %d, %Y' ====================================================================== ERROR: test_month_year (test_data.TestYahooOptions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates expiry_dates = self._expiry_dates AttributeError: 'Options' object has no attribute '_expiry_dates' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 421, in test_month_year data = self.aapl.get_call_data(month=self.month, year=self.year) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 901, in get_call_data expiry = self._try_parse_dates(year, month, expiry) File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1075, in _try_parse_dates expiry = [expiry for expiry in self.expiry_dates if expiry.year == year and expiry.month == month] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates expiry_dates, _ = 
self._get_expiry_dates_and_links() File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp> expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links] File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime (data_string, format)) ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' ``` but ``` $ nosetests -s -v pandas_datareader/tests/test_data.py:TestYahooOptions.test_get_all_data ``` don't raises any error ! Any idea ?
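One possible workaround sketch (this is only a guess at the cause: `strptime` with `%B` parses month names according to the active `LC_TIME` locale, so if another test or library has switched the locale away from English, 'August 28, 2015' no longer matches; pinning the time locale before running the suite would rule that out):
```python
import locale

# Force English month names for strptime("%B %d, %Y") while the tests run.
locale.setlocale(locale.LC_TIME, "C")
```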
1medium
Title: Provide a way to configure the SA engine Body: Hi all, Unless I'm mis-reading the code, there is no way to provide engine creation options. One of them that appears with SA 1.2 is [pool_pre_ping](http://docs.sqlalchemy.org/en/latest/core/pooling.html#pool-disconnects-pessimistic). I'm not sure I can provide extra parameters via flask-sqlalchemy parameters. Should I create the engine out of band? Thanks,
1medium
Title: why 1machine (TITAN RTX ) +1 machine( RTX 3060) training time are slower any one machine Body: python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=0 --master_addr="192.168.8.131" --master_port=12581 train.py configs/ms1mv2_mbf python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr="192.168.8.131" --master_port=12581 train.py configs/ms1mv2_mbf /home/pc/anaconda3/envs/face19/lib/python3.9/site-packages/torch/distributed/launch.py:163: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead logger.warn( The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases. Please read local_rank from `os.environ('LOCAL_RANK')` instead. INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs: entrypoint : train.py min_nodes : 2 max_nodes : 2 nproc_per_node : 1 run_id : none rdzv_backend : static rdzv_endpoint : 192.168.8.131:12581 rdzv_configs : {'rank': 0, 'timeout': 900} max_restarts : 3 monitor_interval : 5 log_dir : None metrics_cfg : {} INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_4a5rychg/none__fkba0g3 INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group /home/pc/anaconda3/envs/face19/lib/python3.9/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future. warnings.warn( INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. 
Result: restart_count=0 master_addr=192.168.8.131 master_port=12581 group_rank=0 group_world_size=2 local_ranks=[0] role_ranks=[0] global_ranks=[0] role_world_sizes=[2] global_world_sizes=[2] INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_4a5rychg/none__fkba0g3/attempt_0/0/error.json 0 0 Training: 2022-09-15 11:06:52,012-rank_id: 0 Training: 2022-09-15 11:06:55,830-: margin_list [1.0, 0.5, 0.0] Training: 2022-09-15 11:06:55,830-: network mbf Training: 2022-09-15 11:06:55,834-: resume False Training: 2022-09-15 11:06:55,834-: save_all_states False Training: 2022-09-15 11:06:55,834-: output work_dirs/ms1mv2_mbf Training: 2022-09-15 11:06:55,834-: embedding_size 512 Training: 2022-09-15 11:06:55,834-: sample_rate 1.0 Training: 2022-09-15 11:06:55,834-: interclass_filtering_threshold0 Training: 2022-09-15 11:06:55,834-: fp16 True Training: 2022-09-15 11:06:55,834-: batch_size 256 Training: 2022-09-15 11:06:55,834-: optimizer sgd Training: 2022-09-15 11:06:55,834-: lr 0.1 Training: 2022-09-15 11:06:55,834-: momentum 0.9 Training: 2022-09-15 11:06:55,834-: weight_decay 0.0001 Training: 2022-09-15 11:06:55,834-: verbose 2000 Training: 2022-09-15 11:06:55,834-: frequent 10 Training: 2022-09-15 11:06:55,834-: dali False Training: 2022-09-15 11:06:55,834-: gradient_acc 1 Training: 2022-09-15 11:06:55,834-: seed 2048 Training: 2022-09-15 11:06:55,834-: num_workers 4 Training: 2022-09-15 11:06:55,834-: rec /home/pc/faces_webface_112x112 Training: 2022-09-15 11:06:55,834-: num_classes 10572 Training: 2022-09-15 11:06:55,834-: num_image 494194 Training: 2022-09-15 11:06:55,834-: num_epoch 40 Training: 2022-09-15 11:06:55,835-: warmup_epoch 0 Training: 2022-09-15 11:06:55,835-: val_targets ['lfw', 'cfp_fp', 'agedb_30'] Training: 2022-09-15 11:06:55,835-: total_batch_size 512 Training: 2022-09-15 11:06:55,835-: warmup_step 0 Training: 2022-09-15 11:06:55,835-: total_step 38600 loading bin 0 loading bin 1000 loading bin 2000 loading bin 3000 loading bin 4000 loading bin 5000 loading bin 6000 loading bin 7000 loading bin 8000 loading bin 9000 loading bin 10000 loading bin 11000 torch.Size([12000, 3, 112, 112]) loading bin 0 loading bin 1000 loading bin 2000 loading bin 3000 loading bin 4000 loading bin 5000 loading bin 6000 loading bin 7000 loading bin 8000 loading bin 9000 loading bin 10000 loading bin 11000 loading bin 12000 loading bin 13000 torch.Size([14000, 3, 112, 112]) loading bin 0 loading bin 1000 loading bin 2000 loading bin 3000 loading bin 4000 loading bin 5000 loading bin 6000 loading bin 7000 loading bin 8000 loading bin 9000 loading bin 10000 loading bin 11000 torch.Size([12000, 3, 112, 112]) /home/pc/fc/face/insightface/recognition/arcface_torch/train.py:163: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior. torch.nn.utils.clip_grad_norm_(backbone.parameters(), 5) /home/pc/anaconda3/envs/face19/lib/python3.9/site-packages/torch/optim/lr_scheduler.py:129: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. 
Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " Training: 2022-09-15 11:07:37,277-Reducer buckets have been rebuilt in this iteration. Training: 2022-09-15 11:07:55,067-Speed 518.42 samples/sec Loss 44.2595 LearningRate 0.099902 Epoch: 0 Global Step: 20 Fp16 Grad Scale: 8192 Required: 13 hours Training: 2022-09-15 11:08:04,952-Speed 517.94 samples/sec Loss 45.0456 LearningRate 0.099850 Epoch: 0 Global Step: 30 Fp16 Grad Scale: 8192 Required: 12 hours Training: 2022-09-15 11:08:14,893-Speed 515.12 samples/sec Loss 45.5388 LearningRate 0.099798 Epoch: 0 Global Step: 40 Fp16 Grad Scale: 8192 Required: 12 hours Training: 2022-09-15 11:08:24,767-Speed 518.53 samples/sec Loss 45.7875 LearningRate 0.099746 Epoch: 0 Global Step: 50 Fp16 Grad Scale: 8192 Required: 12 hours Training: 2022-09-15 11:08:34,667-Speed 517.22 samples/sec Loss 45.5845 LearningRate 0.099695 Epoch: 0 Global Step: 60 Fp16 Grad Scale: 8192 Required: 11 hours Training: 2022-09-15 11:08:44,533-Speed 518.98 samples/sec Loss 45.6968 LearningRate 0.099643 Epoch: 0 Global Step: 70 Fp16 Grad Scale: 8192 Required: 11 hours (face19) ubuntu@ubuntu-X10SRA:~/fc/face/insightface/recognition/arcface_torch$ python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr="192.168.8.131" --master_port=12581 train.py configs/ms1mv2_mbf /home/ubuntu/anaconda3/envs/face19/lib/python3.9/site-packages/torch/distributed/launch.py:163: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead logger.warn( The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases. Please read local_rank from `os.environ('LOCAL_RANK')` instead. INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs: entrypoint : train.py min_nodes : 2 max_nodes : 2 nproc_per_node : 1 run_id : none rdzv_backend : static rdzv_endpoint : 192.168.8.131:12581 rdzv_configs : {'rank': 1, 'timeout': 900} max_restarts : 3 monitor_interval : 5 log_dir : None metrics_cfg : {} INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_bcc_b24k/none_nbf6ckxx INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group /home/ubuntu/anaconda3/envs/face19/lib/python3.9/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future. warnings.warn( INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result: restart_count=0 master_addr=192.168.8.131 master_port=12581 group_rank=1 group_world_size=2 local_ranks=[0] role_ranks=[1] global_ranks=[1] role_world_sizes=[2] global_world_sizes=[2] INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_bcc_b24k/none_nbf6ckxx/attempt_0/0/error.json sgd /home/ubuntu/fc/face/insightface/recognition/arcface_torch/train.py:166: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. 
Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior. torch.nn.utils.clip_grad_norm_(backbone.parameters(), 5) /home/ubuntu/anaconda3/envs/face19/lib/python3.9/site-packages/torch/optim/lr_scheduler.py:129: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
2hard
Title: Without any API key Body: Can I use this library without API keys? Since GPT has a free version, can I use it here in the same way?
3misc
Title: Change font size of title. Body: If the title is too long, it gets cropped instead of the font size being reduced.
1medium
Title: [layoutlmv3]: Issue with label format? Inference yields bounding boxes that are too short. Body: Hi, I am working on object detection with layoutlmv3. I am using the PubLayNet fine-tuned model and have a training set with about 600 documents. The issue I am facing is that the predicted bounding boxes are only roughly correct. In most of the documents the predicted boxes are "too short", meaning that the lower y coordinate is usually too small. As an example I have attached a sample from my evaluation dataset; this happens in almost every single inference image. Thus, I am trying to get some ideas to troubleshoot. I double-checked that the bounding boxes in the inference output are drawn correctly. ![ak_state_of_alaska_2020_p144](https://user-images.githubusercontent.com/40527435/202876949-c6e41860-ee7d-4b2f-b34b-c2d7ef7cdd4a.jpeg) ![inference](https://user-images.githubusercontent.com/40527435/202876954-87cab2cc-5f26-4142-a661-d9e7abfb9931.jpeg) Any ideas would be greatly appreciated.
2hard
Title: How to change the model used for indexing? Body: GPT-3 is weak at math and code! I'm trying to use GPT-4 for indexing, but with no luck. It'd be great if there were a model parameter for the indexing commands. Currently, the model can only be chosen while querying, which is not helpful if the context was written using GPT-3. I also tried setting the `model` settings parameter to gpt-4, but it didn't seem to work.
1medium
Title: Plan to release the web demo code Body: Hi, thanks for sharing your work, this is amazing! Do you plan to release the web demo code?
3misc
Title: How to Freeze Detection Head Layers in YOLOv8m-segment and Train Only Segmentation Head? Body: ### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.

### Question

Hi all, I'm working with yolov8m-seg.pt and want to freeze the detection head layers (bounding box/class prediction) while training only the segmentation head (mask prediction). The goal is to fine-tune the segmentation capability without updating the detection part. Has anyone done this before? I'm thinking of freezing layers by setting requires_grad = False for detection-related params, but I'm unsure how to precisely identify them in the head (e.g., model.22). Here's my tentative code; can someone confirm if this approach works or suggest a better way?

### Additional

```python
from ultralytics import YOLO

# Load model
model = YOLO("yolov8m-seg.pt")

# Freeze detection head layers (guessing these are related to 'detect')
for name, param in model.model.named_parameters():
    if "detect" in name.lower():  # Is this the right way to target detection head?
        param.requires_grad = False

# Train only segmentation head
model.train(data="path/to/data.yaml", epochs=50, imgsz=640)
```

Questions:

- Does "detect" correctly target the detection head, or should I use a different identifier (e.g., specific layer indices)?
- Will this setup ensure the segmentation head (e.g., mask coefficients/Proto) still trains properly?
- Any pitfalls to watch out for?

Thanks for any insights!
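For illustration only (not part of the original question): one low-risk way to approach the "which parameters are detection vs. segmentation" problem is to print the head's parameter names first and then freeze by prefix. The head index (22) comes from the question itself; the branch prefixes in the freeze step are placeholders to replace with whatever the inspection actually prints, and it is worth re-checking `requires_grad` once `model.train()` starts, since the trainer applies its own freezing logic.

```python
from ultralytics import YOLO

model = YOLO("yolov8m-seg.pt")

# Step 1: inspect the head (module 22 per the question) to learn its branch names.
head_prefix = "model.22."
for name, _ in model.model.named_parameters():
    if name.startswith(head_prefix):
        print(name)

# Step 2: freeze only the branches identified as detection-related.
# These prefixes are placeholders - substitute the box/class branches seen in step 1.
detection_prefixes = ("model.22.cv2.", "model.22.cv3.")
for name, param in model.model.named_parameters():
    if name.startswith(detection_prefixes):
        param.requires_grad = False

model.train(data="path/to/data.yaml", epochs=50, imgsz=640)
```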
1medium
Title: 17. add `print(np.nan in set([np.nan])) # True` Body: print(np.nan == np.nan) # False
print(np.nan in set([np.nan])) # True
0easy
Title: Loading indicator needs to be shown for longer Body: I have a notebook that I've published via github pages. It's very nice, and marimo does a wonderful job. But a number of people who have visited have said that they thought it was broken because it initially showed a spinning circle saying "Initializing...", etc., but that circle disappeared, leaving just a white page. The problem is that there's a lag of up to 8 seconds between the spinning circle and the spinning hourglass (which is followed by actual content). And I guess that's just long enough for people to think the page must be broken. We all have pretty beefy machines with high-speed internet. If that spinning circle could just be kept on the screen until other elements start to load, I think it would be perfect. For reference, my published notebook is [here](https://moble.github.io/sxscatalog/), and the raw notebook itself is [here](https://github.com/moble/sxscatalog/blob/main/scripts/catalog_notebook.py). (And thanks again for the wonderful package. It's really amazing.)
1medium
Title: ModelSummary does not account for every type of precision string Body: ### Bug description

The `precision_to_bits` dictionary in https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/utilities/model_summary/model_summary.py#L219 does not account for every type of precision, e.g., `bf16-true`. Such values miss the dictionary lookup and silently fall back to the default of 32.

### What version are you seeing the problem on?

v2.5, master

### How to reproduce the bug

In `lightning/pytorch/utilities/model_summary/model_summary.py:L219`, just add the following when `self._model.trainer.precision="bf16-true"`:

```python
...
precision_to_bits = {"64": 64, "32": 32, "16": 16, "bf16": 16}
print(precision_to_bits.get(self._model.trainer.precision, 32))
raise
...
```

### Error messages and logs

```
# Error messages and logs here please
```

### Environment

<details>
<summary>Current environment</summary>

```
#- PyTorch Lightning Version (e.g., 2.5.0): master
#- PyTorch Version (e.g., 2.5): 2.5.1
#- Python version (e.g., 3.12): 3.10
#- OS (e.g., Linux): Ubuntu 22.04
#- CUDA/cuDNN version: 12.4
#- GPU models and configuration: 4xNVIDIA H100 NVL
#- How you installed Lightning(`conda`, `pip`, source): pip
```

</details>

### More info

_No response_
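One plausible direction for a fix, sketched here purely as an illustration (the exact key set and how Lightning ultimately wants to handle unknown values may differ), is to normalise the precision string before the lookup so that variants such as `bf16-true` or `16-mixed` map onto their base entry:

```python
def precision_to_bits(precision: str) -> int:
    """Map a precision string such as '32', 'bf16-true', or '16-mixed' to a bit width."""
    base_to_bits = {"64": 64, "32": 32, "16": 16, "bf16": 16}
    # 'bf16-true' -> 'bf16', '16-mixed' -> '16', '32' -> '32'
    base = str(precision).split("-")[0]
    return base_to_bits.get(base, 32)

assert precision_to_bits("bf16-true") == 16
assert precision_to_bits("16-mixed") == 16
assert precision_to_bits("32") == 32
```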
1medium
Title: Create a version of DLAI lesson "Self-Reflecting Agents with Loops" (entity extraction) using ChatGenerator Body: We need to better understand how complex and difficult to understand Haystack example code would get if we used ChatGenerator instead of the regular Generators. For that purpose, let's create a version of https://learn.deeplearning.ai/courses/building-ai-applications-with-haystack/lesson/6/self-reflecting-agents-with-loops using ChatGenerator.
2hard
Title: DOC: pandas.DataFrame.aggregate return value Body: ### Pandas version checks

- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)

### Location of the documentation

https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.aggregate.html#pandas.DataFrame.aggregate

### Documentation problem

The documentation of the pandas.DataFrame.aggregate() method says the return can be:

* scalar : when Series.agg is called with single function
* Series : when DataFrame.agg is called with a single function
* DataFrame : when DataFrame.agg is called with several functions

But `df = pd.DataFrame([[1]]); type(df.agg(lambda x: 3*x))` returns `pandas.core.frame.DataFrame`, even though .agg() was called with a single function.

### Suggested fix for documentation

I'd love to offer a fix, but the reason I was looking up the docs was that I'd like to know what .agg() does exactly...
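A small illustration of the behaviour being described (this is my reading of it, not a statement of the documented contract): the return type depends on whether the single function reduces each column to a scalar or transforms it into another Series.

```python
import pandas as pd

df = pd.DataFrame([[1]])

# A reducing function: one value per column -> a Series is returned.
print(type(df.agg("sum")))            # <class 'pandas.core.series.Series'>

# A transforming function: a Series per column -> a DataFrame is returned,
# even though only a single function was passed.
print(type(df.agg(lambda x: 3 * x)))  # <class 'pandas.core.frame.DataFrame'>
```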
1medium
Title: Enhancement: Add new apps via uploading jupyter notebooks via drag and drop in the browser Body: It would be nice, if users could create new apps via uploading their custom jupyter notebooks via drag and drop in the browser on the home screen of mercury. Due to security concerns, this feature should only be enabled in trustworthy environments, e.g. via explicitly submitting an additional command-line argument `mercury run --enable-app-upload`. I am curious about your thoughts.
1medium
Title: exclude db from .gitignore Body: Hello,

Please exclude the db directory from .gitignore, because the current setup doesn't work with CI: the image can't start.

```
Traceback (most recent call last):
  File "/app/./venv/bin/libretranslate", line 8, in <module>
Loaded support for 3 languages (4 models total)!
    sys.exit(main())
  File "/app/venv/lib/python3.10/site-packages/libretranslate/main.py", line 189, in main
    app = create_app(args)
  File "/app/venv/lib/python3.10/site-packages/libretranslate/app.py", line 220, in create_app
    os.mkdir(default_mp_dir)
FileNotFoundError: [Errno 2] No such file or directory: '/app/db/prometheus'
```
1medium
Title: Dropdown and LinePlot buggy interaction Body: ### Describe the bug Interactive dropdowns (```gr.Dropdown(options, interactive=True)```) do not work if a LinePlot (probably similar with ScatterPlot and others, but untested) is provided in the same block. This also happens if the plot is in other columns and rows. I did not check if it also happens with other components, but below you can find a very minimal reproducer, in which the dropdown is not interactible. If the plot is removed, the dropdown works (as shown in [this comment](https://github.com/gradio-app/gradio/issues/6103#issuecomment-1790205932) ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction ```python import gradio as gr my_list = ["World", "Gradio", "World2", "abc ", "You"] with gr.Blocks() as demo: drop1 = gr.Dropdown(choices=my_list, label="simple", value=my_list[0], interactive=True) plt = gr.LinePlot() # Comment this out and the dropdown can be interacted with demo.launch(share=True) ``` ### Screenshot _No response_ ### Logs _No response_ ### System Info ```shell I am using gradio 5.5.0, I'll paste the environment output: Gradio Environment Information: ------------------------------ Operating System: Linux gradio version: 5.5.0 gradio_client version: 1.4.2 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.6.2.post1 audioop-lts: 0.2.1 fastapi: 0.115.4 ffmpy: 0.4.0 gradio-client==1.4.2 is not installed. httpx: 0.27.2 huggingface-hub: 0.26.2 jinja2: 3.1.4 markupsafe: 2.1.5 numpy: 2.1.3 orjson: 3.10.11 packaging: 24.2 pandas: 2.2.3 pillow: 11.0.0 pydantic: 2.9.2 pydub: 0.25.1 python-multipart==0.0.12 is not installed. pyyaml: 6.0.2 ruff: 0.7.3 safehttpx: 0.1.1 semantic-version: 2.10.0 starlette: 0.41.2 tomlkit==0.12.0 is not installed. typer: 0.13.0 typing-extensions: 4.12.2 urllib3: 2.2.3 uvicorn: 0.32.0 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2024.10.0 httpx: 0.27.2 huggingface-hub: 0.26.2 packaging: 24.2 typing-extensions: 4.12.2 websockets: 12.0 ``` ### Severity Can work around using other components (but not with LinePlots)
1medium
Title: server.jobs.get_by_id failing inconsistently with 401002: Unauthorized Access error Body: Hello, I am writing a python script to trigger and monitor extract refreshes for a given set of datasource IDs. First, I trigger the refresh using `server.datasources.refresh(datasource)` for all the given datasource IDs using multi threading. Then, I monitor the progress of these refreshes and print out a message accordingly. Do note that my Tableau server is configured to run only 2 extract refreshes at once, all others go into a pending state. But, what I'm seeing is that every once in a while, one of the threads will throw a 401002 error when checking the status of the refresh job. Here's my code snippet: ``` def monitor_refresh_progress(self, job_id, datasource): # Get initial job status value, will be -1 if in progress with self.server.auth.sign_in(self.tableau_auth): job_status = self.server.jobs.get_by_id(job_id) # Keep polling until success or failure, added random to avoid multiple simultaneous hits while int(job_status.finish_code) not in [0,1]: time.sleep(randint(110,130)) with self.server.auth.sign_in(self.tableau_auth): job_status = self.server.jobs.get_by_id(job_id) if int(job_status.finish_code) == 0: self.logger.info("Extract Refresh successfully completed for datasource: {}".format(datasource.name)) else: slack.post_message(text=":: ERROR :: Tableau Extract Refresh failed for datasource {}.".format(datasource.name)) self.logger.error("Extract Refresh failed for datasource: {}".format(datasource.name)) raise Exception("Extract Refresh failed for datasource: {}".format(datasource.name)) ``` Right now, I have added a retry decorator for this monitor_refersh_progress() method but I'm not too sure about the efficacy since I'm using multi threading. Am I doing something wrong? Any help would be appreciated. Thanks
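As a purely illustrative sketch (the retry count, sleep time, the choice to re-raise after the last attempt, and the exception type to catch are assumptions, not TSC recommendations), the polling call could be wrapped so that an occasional 401002 does not kill the monitoring thread:

```python
import time

import tableauserverclient as TSC


def get_job_with_retry(server, tableau_auth, job_id, attempts=3, delay=5):
    """Fetch a job, re-authenticating and retrying on transient auth errors."""
    last_error = None
    for _ in range(attempts):
        try:
            with server.auth.sign_in(tableau_auth):
                return server.jobs.get_by_id(job_id)
        except TSC.ServerResponseError as error:  # e.g. the intermittent 401002
            last_error = error
            time.sleep(delay)
    raise last_error
```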
1medium
Title: Regarding zone problem Body: ### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests. ### Question Currently, learning two yolov8 model, one for person detection other for object detection : Main problem is for automated selfcheckout based on zone logic : where we check weather person holding object crossing zone from left toward right or right towards left and then prepare recipt accordingly. Need guidance in logic : in this case, should I have to combine detection for person and object or should handle logic alternately ? Below Code : #Define empty lists to keep track of labels original_labels = [] final_labels = [] person_bbox = [] p_items = [] purchased_items = set(p_items) a_items = [] added_items = set(a_items) hand_bbox = [] combined_detections = [] #Save result as det_tracking_result with sv.VideoSink("new_det_tracking_result.mp4", video_info) as sink: #Iterate through model predictions and tracking results for index, (result, result1) in enumerate(zip(model.track(source=VID_PATH, show=False, stream=True, verbose=True, persist=True), model1.track(source=VID_PATH, show=False, stream=True, verbose=True, persist=True))): #Define variables to store interactions that are refreshed per frame interactions = [] person_intersection_str = "" # Obtain predictions from model1 frame1 = result1.orig_img detections_objects1 = sv.Detections.from_ultralytics(result1) detections_objects1 = detections_objects1[detections_objects1.class_id == 0] bboxes1 = result1.boxes #print(detections_objects1) #Obtain predictions from yolov8 model frame = result.orig_img detections = sv.Detections.from_ultralytics(result) detections = detections[detections.class_id < 10] bboxes = result.boxes # Apply mask over the single Zone mask1, mask2 = zone.trigger(detections=detections_objects1), zone.trigger(detections=detections) detections_filtered1, detections_filtered2 = detections_objects1[mask1], detections[mask2] if detections_objects1 and len(detections_objects1) > 0: label1 = label_map1[detections_objects1.class_id[0]] # Get the label for the class_id combined_detections.append((detections_objects1, label1)) for detection, label in combined_detections: print("Detections:", detection) print("Label:", label) if bboxes1.id is not None: detections_objects1.tracker_id = bboxes1.id.cpu().numpy().astype(int) labels = [ f'#{tracker_id} {label_map1[class_id]} {confidence:0.2f}' for _, _, confidence, class_id, tracker_id in detections_objects1 ] #Print labels for detections from model1 for _, _, confidence, class_id, _ in detections_objects1: print(f"Label: {label_map1[class_id]} with confidence: {confidence:.2f}") print(detections) # Apply mask over the single Zone mask = zone.trigger(detections=detections) detections_filtered = detections[mask] print("mask", mask) print("Detection", detections_filtered) if detections and len(detections) > 0: label = label_map[detections.class_id[0]] # Get the label for the class_id combined_detections.append((detections, label)) if bboxes.id is not None: detections.tracker_id = bboxes.id.cpu().numpy().astype(int) labels = [ f'#{tracker_id} {label_map[class_id]} {confidence:0.2f}' for _, _, confidence, class_id, tracker_id in detections ] frame = box_annotator.annotate(scene=frame, detections=detections_filtered, labels=labels) frame = zone_annotator.annotate(scene=frame) objects = [f'#{tracker_id} {label_map[class_id]}' for _, _, confidence, class_id, tracker_id in detections] # for 
_, _, confidence, class_id, _ in detections: # print(f"Label: {label_map[class_id]} with confidence: {confidence:.2f}") # # Combine detections from both models # # combined_detections = np.concatenate((detections_objects1, detections)) # print(combined_detections) # # Extract xyxy attributes from combined detections # combined_detections_xyxy = [detection[0].xyxy for detection in combined_detections] # print(combined_detections_xyxy) # # Check if combined_detections_xyxy is not empty and contains non-empty arrays # if combined_detections_xyxy and all(arr.size > 0 for arr in combined_detections_xyxy): # # Concatenate xyxy arrays into a single array # combined_xyxy_array = np.concatenate(combined_detections_xyxy, axis=0) # else: # combined_xyxy_array = np.empty((0, 4)) # Create an empty array # # Create a Detections object with the concatenated xyxy array # combined_detections_detections = sv.Detections(xyxy=combined_xyxy_array) # # Apply mask over the combined detections # mask = zone.trigger(detections= combined_detections_detections) # # Filter combined detections based on the mask # combined_detections_filtered = [combined_detections[i] for i in range(len(combined_detections)) if mask[i]] # # Print the mask and filtered detections # #print("Combined Detections mask:", mask) # #print("Combined Detections filtered:", combined_detections_filtered) # # Iterate through combined detections to create labels # combined_labels = [] # for detection in combined_detections_filtered: # detections, label = detection # for _, _, confidence, class_id, tracker_id in detections: # combined_labels.append(f'#{tracker_id} {label_map1[class_id]} {confidence:.2f}') # # Print labels for combined detections # for label in combined_labels: # print("combined_labels", label) # frame = box_annotator.annotate(scene=frame, detections=combined_detections_filtered, labels=combined_labels) # frame = zone_annotator.annotate(scene=frame) # objects = [f'#{tracker_id} {label_map[class_id]}' for _, _, confidence, class_id, tracker_id in combined_detections_filtered] # print("Combined Objects:", objects) #If this is the first time we run the application, #store the objects' labels as they are at the beginning if index == 0: original_labels = objects original_dets = len(detections_filtered) else: #To identify if an object has been added or removed #we'll use the original labels and identify any changes final_labels = objects new_dets = len(detections_filtered) #Identify if an object has been added or removed using Counters removed_objects = Counter(original_labels) + Counter(final_labels) added_objects = Counter(final_labels) - Counter(original_labels) #Create two variables we can increment for drawing text draw_txt_ir = 1 draw_txt_ia = 1 #Check for objects being added or removed #if new_dets - original_dets != 0 and len(removed_objects) >= 1: if new_dets != original_dets or removed_objects: #An object has been removed for k,v in removed_objects.items(): #For each of the objects, check the IOU between a designated object #and a person. 
if 'person' not in k: removed_object_str = f"{v} {k} purchased" removed_action_str = intersecting_bboxes(bboxes, bboxes1, person_bbox, removed_object_str) print("Removed Action String:", removed_action_str) # Add this line if removed_action_str is not None: log.info(removed_action_str) #Add the purchased items to a "receipt" of sorts item = removed_action_str.split() if len(item) >= 3: item = f"{item [0]} {item [1]} {item [2]}" removed_label = item.split(' ')[-1] if any(removed_label in item for item in purchased_items): purchased_items = {f"{int(item.split()[0]) + 1} {' '.join(item.split()[1:])}" if removed_label in item else item for item in purchased_items} else: purchased_items.add(f"{v} {k}") p_items.append(f" - {v} {k}") print("New_Purchased_Items:", purchased_items) print("Removed_Objects:") #Draw the result on the screen draw_text(frame, text=removed_action_str, point=(50, 50 + draw_txt_ir), color=(0, 0, 255)) draw_text(frame, "Receipt: " + str(purchased_items), point=(50, 800), color=(30, 144, 255)) draw_txt_ir += 80 if len(added_objects) >= 1: #An object has been added for k,v in added_objects.items(): #For each of the objects, check the IOU between a designated object #and a person. if 'person' not in k: added_object_str = f"{v} {k} returned" added_action_str = intersecting_bboxes(bboxes, bboxes1, person_bbox, added_object_str) print("Added Action String:", added_action_str) # Add this line if added_action_str is not None: #If we have determined an interaction with a person, #log the interaction. log.info(added_action_str) item = added_object_str.split() if len(item) >= 3: item = f"{item [0]} {item [1]} {item [2]}" item = item.split(' ')[-1] if any(item in item for item in purchased_items): purchased_items = {f"{int(item.split()[0]) - 1} {' '.join(item.split()[1:])}" if item in item else item for item in purchased_items} if any(item.startswith('0 ') for item in purchased_items): purchased_items = {item for item in purchased_items if not item.startswith('0 ')} print("Updated_Purchased_Items:", purchased_items) #p_items.remove(item) added_items.add(added_object_str) a_items.append(added_object_str) print("Added_Objects:") #Draw the result on the screen draw_text(frame, text=added_action_str, point=(50, 300 + draw_txt_ia), color=(0, 128, 0)) draw_text(frame, "Receipt: " + str(purchased_items), point=(50, 800), color=(30, 144, 255)) draw_txt_ia += 80 # Clear the combined_detections list combined_detections.clear() draw_text(frame, "Receipt: " + str(purchased_items), point=(50, 800), color=(30, 144, 255)) sink.write_frame(frame) ### Additional _No response_
2hard
Title: Cannot navigate signal with 1D or 2D navigator with keyboard on macOS Body: #### Describe the bug Hi everyone, not sure what I'm doing wrong here... I cannot navigate a signal on my macOS v15.1 with HyperSpy v2.2. I've tried left/right with all modifier keys (shift, control, option, and command) and combinations thereof. Navigating with the mouse works as before. #### To Reproduce ```python import numpy as np import hyperspy.api as hs s = hs.signals.Signal2D(np.random.random((10, 10, 10, 10))) s.plot() # Try to navigate but cannot ``` #### Expected behavior To navigate the signal as usual. #### Python environment: - HyperSpy version: 2.2 - Python version: 3.12.7 #### Additional context
1medium
Title: performance issues when building custom components using dash-component-boilerplate Body: When using the [dash-component-boilerplate] to build my custom React component, the component becomes very sluggish. The component renders graphics on a canvas. In plain React it performs like this: ![lazy1](https://github.com/user-attachments/assets/46302f02-48bb-47a4-9a51-eeb76901a137) but when I use it in Dash, it performs like this: ![lazy](https://github.com/user-attachments/assets/00fedd64-8441-4cc0-91e2-c2c9b8c2ebde)
2hard
Title: Cannot load more than Body: When I try to embed pygwalker in `streamlit`, I get the following error: ``` Dataframe is too large for ipynb files. Only 14862 sample items are printed to the file. ``` Is it a known issue that pygwalker cannot handle large datasets? Thanks a lot for the work, the project looks super cool 😄 Best, Adrien
1medium
Title: selene_sdk issue Body: i got this error message and couldn't find where's the issue ``` ValueError Traceback (most recent call last) <ipython-input-3-c4059c3098d2> in <module> ----> 1 parse_configs_and_run(configs, lr=0.01) 2 print("Fin de Exécussion") ~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/selene_sdk/utils/config_utils.py in parse_configs_and_run(configs, create_subdirectory, lr) 349 "Using a random seed ensures results are reproducible.") 350 --> 351 execute(operations, configs, current_run_output_dir) ~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/selene_sdk/utils/config_utils.py in execute(operations, configs, output_dir) 190 "evaluate" in operations: 191 train_model.create_test_set() --> 192 train_model.train_and_validate() 193 194 elif op == "evaluate": ~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/selene_sdk/train_model.py in train_and_validate(self) 428 for step in range(self._start_step, self.max_steps): 429 self.step = step --> 430 self.train() 431 432 if step % self.nth_step_save_checkpoint == 0: ~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/selene_sdk/train_model.py in train(self) 461 462 predictions = self.model(inputs.transpose(1, 2)) --> 463 loss = self.criterion(predictions, targets) 464 465 self.optimizer.zero_grad() ~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 601 602 def forward(self, input: Tensor, target: Tensor) -> Tensor: --> 603 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction) 604 605 ~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction) 2906 raise ValueError( 2907 "Using a target size ({}) that is different to the input size ({}) is deprecated. " -> 2908 "Please ensure they have the same size.".format(target.size(), input.size()) 2909 ) 2910 ValueError: Using a target size (torch.Size([64, 12])) that is different to the input size (torch.Size([64, 11])) is deprecated. Please ensure they have the same size. ```
1medium
Title: Discussion: arguments `old_min` and `old_max` should be removed from `min_max_scale` Body:

```python
>>> import pandas as pd
>>> import janitor

# Use a one-column dataframe to avoid the question of scaling the entire
# dataframe vs. scaling column by column.
>>> df = pd.Series([0, 1, 2]).to_frame()

# Use the minimum and maximum value of the data
>>> df.min_max_scale()
     0
0  0.0
1  0.5
2  1.0

# Override the data's own min/max. The result looks weird to the user,
# even though it is consistent with the formula.
# Question 1: should 0 be scaled at all? 0 is outside the range [old_min 1, old_max 2].
# Question 2: I already defined new_min (0) and new_max of the values. Why is there a -1 in the output?
# Question 3: The API differs from sklearn.preprocessing.MinMaxScaler.
# Min-max normalization formula:
# X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
# X_scaled = X_std * (max - min) + min
>>> df.min_max_scale(old_min=1, old_max=2)
     0
0 -1.0
1  0.0
2  1.0
```
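For comparison, the standard formulation quoted above can be reproduced with scikit-learn or plain NumPy; this is only meant to illustrate the API the discussion is measuring `min_max_scale` against, not a proposed replacement:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[0.0], [1.0], [2.0]])

# sklearn's API: feature_range is the *target* interval; the source min/max
# are always taken from the data itself (fit), never passed in by the caller.
scaler = MinMaxScaler(feature_range=(0, 1))
print(scaler.fit_transform(X).ravel())  # [0.  0.5 1. ]

# Equivalent NumPy one-liner, matching the formula quoted in the issue.
x_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(x_std.ravel())                    # [0.  0.5 1. ]
```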
1medium
Title: No code coverage on __main__.py Body: https://github.com/alteryx/featuretools/pull/1882 Is passing for all CI checks except for code coverage, where suddenly there's no coverage of `__main__.py`. That PR's scipy update could be to blame, or it could be some untracked change (setuptools 60.8 vs setuptools 60.7). We should determine why coverage was lost--though the "coverage" was just an import--and improve our tests so that `__main__.py` is truly covered.
1medium
Title: Error in GraphQL Mutation Expected value of type ID Body: Model ```python class Series(models.Model): title = models.CharField(max_length=255, unique=True, db_index=True) desc = RichTextUploadingField(verbose_name="Description", default= "Coming Soon...", max_length=10000) series_type = models.ForeignKey(SeriesType, on_delete=models.CASCADE) SERIES_STATUS = ( (0, 'Not Yet Released'), (1, 'Done') ) user = models.ForeignKey(User, on_delete=models.CASCADE) status = models.PositiveSmallIntegerField(choices=SERIES_STATUS, default=0) ``` Schema ```python class SeriesNode(DjangoObjectType): class Meta: model = models.Series filter_fields = ['title', 'alt'] interfaces = (relay.Node, ) class SeriesMutation(DjangoModelFormMutation): series = graphene.Field(SeriesNode) class Meta: form_class = forms.AdvancedAddSeries class Mutation(graphene.ObjectType): create_series = SeriesMutation.Field() ``` Query Mutation ```gql mutation CreateSeries($input: SeriesMutationInput!){ createSeries(input:$input){ series{ title desc seriesType{ id } } errors{ field messages } } } ``` Query Variables ```json { "input": { "title": "Series1", "desc": "to be updated", "seriesType": { "id": "U2VyaWVzVHlwZU5vZGU6Mg==" }, "user": { "id": "VXNlck5vZGU6MQ==" }, "status": "A_0" } } ``` Image of Error ![Image of Error](https://i.imgur.com/X9e6jiq.png) Reply ```json { "data": { "createSeries": { "series": null, "errors": [ { "field": "series_type", "messages": [ "Select a valid choice. That choice is not one of the available choices." ] }, { "field": "status", "messages": [ "Select a valid choice. A_0 is not one of the available choices." ] }, { "field": "user", "messages": [ "Select a valid choice. That choice is not one of the available choices." ] } ] } } } ```
1medium
Title: User facing changelog for the 0.5.0 release Body: <!--To help us understand and resolve your issue, please fill out the form to the best of your ability.--> <!--You can feel free to delete the sections that do not apply.--> ### Problem We should highlight the major changes landing in `0.5.0` instead of just pointing users to the raw changelog: https://github.com/voila-dashboards/voila/blob/main/CHANGELOG.md. ### Suggested Improvement Follow JupyterLab 4 and Notebook 7 changelogs and create a "Highlights" section in the changelog for user facing changes. - Update to JupyterLab 4 - `--classic-tree` from https://github.com/voila-dashboards/voila/pull/1374 - more
1medium
Title: Branch editing error Body: **Describe the bug** Branch editing in python is not working properly. **Screenshots to reproduce** ![image](https://user-images.githubusercontent.com/20046591/212927983-d4ccdd65-1983-4cd2-9915-e789710f4ff2.png) ![image](https://user-images.githubusercontent.com/20046591/212925085-0e562660-30dc-48d0-954b-d3aa4e7ad7ac.png) ![image](https://user-images.githubusercontent.com/20046591/212925105-b46fe636-8515-48ab-ab4d-22e263b5e188.png) error: Exception: Unexpected operation (Error Code: 0) **Expected behavior** Through rest api it is working, I expect the same behaviour as I do not intent to build this workflow if the arcgis module intent to implement it. ``` with vms.get('version name', "edit") as version: #version.start_editing() update_result = version.edit(<>) # I expected this to work ``` **Platform (please complete the following information):** - OS: Win Server 2019 - Browser chrome - | Name| Version| Build| Channel| |-|-|-|-| |arcgis| 2.0.1 | py39_2825| esri| |arcgispro| 3.0 | 0 | esri| **Additional context** Add any other context about the problem here, attachments etc.
1medium
Title: Testing Result Body: How do I get the image file names used for visrank_topk during testing of the query images? I want to show the file names of the gallery images that most strongly match each query image.
1medium
Title: bidirectional buffered stream? Body: Hello, since the API rewrite, it looks like I need to use a `BufferedByteReceiveStream` to use `receive_exactly`. But that class is only for receiving, not writing. Is it intentional that I need to carry around two objects if I want both `receive_exactly` and `send`? Thanks!
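A small wrapper along these lines is one way to carry a single object around; this is only a sketch of the idea (the class and method selection here are made up for illustration), assuming a bidirectional byte stream whose receive side can be handed to `BufferedByteReceiveStream`:

```python
from anyio.streams.buffered import BufferedByteReceiveStream


class BufferedByteStream:
    """Hypothetical helper pairing a bidirectional stream with a buffered receiver."""

    def __init__(self, stream):
        self._stream = stream                            # original bidirectional byte stream
        self._buffered = BufferedByteReceiveStream(stream)

    async def receive_exactly(self, nbytes: int) -> bytes:
        return await self._buffered.receive_exactly(nbytes)

    async def receive_until(self, delimiter: bytes, max_bytes: int) -> bytes:
        return await self._buffered.receive_until(delimiter, max_bytes)

    async def send(self, data: bytes) -> None:
        await self._stream.send(data)

    async def aclose(self) -> None:
        await self._stream.aclose()
```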
1medium
Title: Fix Ivy Failing Test: paddle - elementwise.not_equal Body:
1medium
Title: Bug: `litestar run` CLI has several readability issues Body: ### Description First problem: low contrast on light theme: <img width="788" alt="Снимок экрана 2024-10-16 в 23 04 11" src="https://github.com/user-attachments/assets/7cb84b46-ea93-4b53-84d8-a7fd7f05d6f2"> I can hardly read what grey and yellow texts say. One can argue that this is a problem of my setup / theme, but I've never seen this before in other apps. Second problem: <img width="788" alt="Снимок экрана 2024-10-16 в 23 04 06" src="https://github.com/user-attachments/assets/8af38089-6d61-43c9-bc73-c1e6ad189ba1"> Option's name of `--create-self-signed-c...` (certificate?) is cut. I think that this is the most important part of the help here. And it should not cut the options' names. The same happens with `--unix-domain-so…` ### URL to code causing the issue _No response_ ### MCVE _No response_ ### Steps to reproduce ```bash 1. Run `litestar run -h` ``` ### Screenshots _No response_ ### Logs _No response_ ### Litestar Version main ### Platform - [ ] Linux - [ ] Mac - [ ] Windows - [ ] Other (Please specify in the description above)
0easy
Title: [KeyPoints] - extend `from_mediapipe` with Google MediaPipe FaceMesh Body: # Description Much like #1174 and #1232 adding pose landmark support, we'd also like to add face detection support to the `from_mediapipe` method. * Add `Skeleton.FACEMESH_TESSELETION` of size `468` to the [Skeleton](https://github.com/roboflow/supervision/blob/447ef41fc45353130ec4dccdc7eeaf68b622fb7e/supervision/keypoint/skeletons.py#L7) enum. * The nodes can be found here: https://github.com/google-ai-edge/mediapipe/blob/8cb99f934073572ce73912bb402a94f1875e420a/mediapipe/python/solutions/face_mesh_connections.py#L74 * Docs can be found here: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md * Add the code to the `from_mediapipe` function in [`KeyPoints`](https://github.com/roboflow/supervision/blob/447ef41fc45353130ec4dccdc7eeaf68b622fb7e/supervision/keypoint/core.py#L16) object that is introduced in #1232. * We'd like to support responses from both legacy and modern way to call the face mesher - see links below. ![facemesh](https://github.com/roboflow/supervision/assets/6500785/59433e66-74e0-448c-b902-4f19947d379e) # Links: - Google Mediapipe repository: https://github.com/google/mediapipe - Google Mediapipe face landmarker: https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker - Python Guide (Modern): https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker/python - Legacy: https://colab.research.google.com/github/googlesamples/mediapipe/blob/main/examples/face_landmarker/python/%5BMediaPipe_Python_Tasks%5D_Face_Landmarker.ipynb - Skeletons: https://github.com/google-ai-edge/mediapipe/blob/8cb99f934073572ce73912bb402a94f1875e420a/mediapipe/python/solutions/face_mesh_connections.py#L74 # Additional - Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻
1medium
Title: Extend label issue detection in Datalab to work even without pred_probs input Body: Goal: extend the label issue check in Datalab to work even if user only provided: `features`, `labels` to `Datalab.find_issues()`. There are multiple ways this can be achieved: Option 1 (easiest): Use sklearn `KNNclassifier` (or `LogisticRegression`) applied to `X=features, y=labels` in order to produce out-of-sample `pred_probs` and then continue as usual. Option 2: Use methods from other papers like these (requires benchmarking them first): - [SelfClean: A Self-Supervised Data Cleaning Strategy](https://arxiv.org/abs/2305.17048) - [Detecting Corrupted Labels Without Training a Model to Predict](https://arxiv.org/abs/2110.06283)
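Option 1 in sketch form; the classifier choice, `n_neighbors`, and the 5-fold split are arbitrary illustrations, and the commented `Datalab` call should be checked against the current API rather than taken literally:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier


def pred_probs_from_features(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Produce out-of-sample predicted probabilities from features + labels alone."""
    clf = KNeighborsClassifier(n_neighbors=10)
    return cross_val_predict(clf, features, labels, cv=5, method="predict_proba")


# These pred_probs could then feed the existing label-issue check, e.g.:
# lab = Datalab(data=dataset, label_name="label")
# lab.find_issues(features=features, pred_probs=pred_probs_from_features(features, labels))
```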
1medium
Title: PBS - Unable to extract Body: ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region US ### Provide a description that is worded well enough to be understood https://www.pbs.org/video/take-a-chance-wdZQCx/ [pbs] Downloading JSON metadata Extracting cookies from firefox Extracted 2912 cookies from firefox [pbs] Extracting URL: https://www.pbs.org/video/take-a-chance-wdZQCx/ [pbs] take-a-chance-wdZQCx: Downloading webpage [pbs] Downloading widget/partnerplayer page [pbs] Downloading portalplayer page ERROR: An extractor error has occurred. (caused by KeyError('title')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U File "yt_dlp\extractor\common.py", line 742, in extract File "yt_dlp\extractor\pbs.py", line 689, in _real_extract KeyError: 'title' ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [X] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', '-o', 'lidia', 'https://www.pbs.org/video/take-a-chance-wdZQCx/'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [0b6b7742c] (win_exe) [debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023) [debug] exe versions: ffmpeg 2020-11-04-git-cfdddec0c8-full_build-www.gyan.dev, ffprobe 2020-11-04-git-cfdddec0c8-full_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1837 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds) [debug] Using fake IP 6.66.219.101 (US) as X-Forwarded-For [pbs] Downloading JSON metadata [pbs] Extracting URL: https://www.pbs.org/video/take-a-chance-wdZQCx/ [pbs] take-a-chance-wdZQCx: Downloading webpage [pbs] Extracting URL: https://www.pbs.org/video/take-a-chance-wdZQCx/ [pbs] take-a-chance-wdZQCx: Downloading webpage [pbs] Downloading widget/partnerplayer page [pbs] Downloading portalplayer page ERROR: An extractor error has occurred. (caused by KeyError('title')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U File "yt_dlp\extractor\common.py", line 742, in extract File "yt_dlp\extractor\pbs.py", line 689, in _real_extract KeyError: 'title' ```
1medium
Title: Multi-label classification with two labels Body: ### Bug Description
<!--- A clear and concise description of what the bug is. -->
The ImageClassifier classification head treats multi-label classification with 2 labels as multi-class classification with one-hot encoded labels.

### Bug Reproduction
Code for reproducing the bug:

```python
import autokeras as ak
from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_samples=100, n_features=64, n_classes=2,
                                      n_labels=1, allow_unlabeled=False, random_state=1)
X = X.reshape((100, 8, 8))
clf = ak.ImageClassifier(max_trials=2, multi_label=True)
clf.fit(X, Y, epochs=3, verbose=2)
```

Data used by the code: synthetic data created with scikit-learn

### Setup Details
Include the details about the versions of:
- OS type and version:
- Python: 3.6
- autokeras: 1.0.2
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow: 2.1.0

### Additional context
<!--- If applicable, add any other context about the problem. -->
1medium
Title: Proxy attributes to stored JSON Body: This way, as the GitHub API expands, people can still do things like the following even if we don't explicitly set the attribute:

```py
pr = github3.pull_request('user', 'project', number)
pr.merged
```

It won't be documented in our docs, but they'll at least be able to use it.
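A minimal sketch of the idea (the class and attribute names below are illustrative, not the actual github3.py implementation): fall back to the stored JSON payload when normal attribute lookup fails.

```python
class GitHubObject:
    """Illustrative base class that proxies unknown attributes to the raw JSON."""

    def __init__(self, json_data: dict):
        self._json_data = json_data

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails.
        try:
            return self._json_data[name]
        except KeyError:
            raise AttributeError(name) from None


pr = GitHubObject({"merged": True, "number": 42})
print(pr.merged)   # True, even though 'merged' was never set explicitly
print(pr.number)   # 42
```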
1medium
Title: Dropdown Options Extending Beyond Container Body: For a space-limited dashboard, it's common to have dropdown options with names that are much longer than the space allocated for the dropdown button. Additionally, for my application assume that: - Each option needs to be a single line - The full option text should be visible when the dropdown is open (i.e. no ellipses) - The size of the dropdown and its container cannot be increased Dash Bootstrap's dbc.Select component handles this well by treating the dropdown as a pop-up that can extend beyond its container when open. However, dbc.Select lacks the advanced features of dcc.Dropdown and is not an option for me. Thanks! ![dropdown_example](https://user-images.githubusercontent.com/56934645/199278587-b5f1dbe1-2159-414f-9bb8-bf68dc822763.png)
1medium
Title: Less readable for panel.pane.DataFrame in Jupyter Dark Theme Body: <!-- Thanks for contacting us! Please read and follow these instructions carefully, then you can delete this introductory text. Note that the issue tracker is NOT the place for usage questions and technical assistance; post those at [Discourse](https://discourse.holoviz.org) instead. Issues without the required information below may be closed immediately. --> #### ALL software version info <details> <summary>Software Version Info</summary> ```plaintext altair 5.4.1 anyio 4.6.0 appnope 0.1.4 argon2-cffi 23.1.0 argon2-cffi-bindings 21.2.0 arrow 1.3.0 asttokens 2.4.1 astunparse 1.6.3 async-lru 2.0.4 attrs 24.2.0 babel 2.16.0 beautifulsoup4 4.12.3 black 24.8.0 bleach 6.1.0 bokeh 3.5.2 bqplot 0.12.43 certifi 2024.8.30 cffi 1.17.1 charset-normalizer 3.3.2 click 8.1.7 comm 0.2.2 contourpy 1.3.0 cycler 0.12.1 debugpy 1.8.6 decorator 5.1.1 defusedxml 0.7.1 executing 2.1.0 fastjsonschema 2.20.0 fonttools 4.54.1 fqdn 1.5.1 gast 0.4.0 h11 0.14.0 httpcore 1.0.5 httpx 0.27.2 idna 3.10 ipydatagrid 1.3.2 ipyflow 0.0.200 ipyflow-core 0.0.200 ipykernel 6.29.5 ipympl 0.9.4 ipython 8.27.0 ipython-genutils 0.2.0 ipywidgets 8.1.5 isoduration 20.11.0 itable 0.0.1 jedi 0.19.1 Jinja2 3.1.4 joblib 1.4.2 json5 0.9.25 jsonpointer 3.0.0 jsonschema 4.23.0 jsonschema-specifications 2023.12.1 jupyter 1.1.1 jupyter_client 8.6.3 jupyter-console 6.6.3 jupyter_core 5.7.2 jupyter-events 0.10.0 jupyter-lsp 2.2.5 jupyter_server 2.14.2 jupyter_server_terminals 0.5.3 jupyterlab 4.2.5 jupyterlab-lsp 5.1.0 jupyterlab_pygments 0.3.0 jupyterlab_server 2.27.3 jupyterlab_widgets 3.0.13 kiwisolver 1.4.7 linkify-it-py 2.0.3 Markdown 3.7 markdown-it-py 3.0.0 MarkupSafe 2.1.5 matplotlib 3.9.2 matplotlib-inline 0.1.7 mdit-py-plugins 0.4.2 mdurl 0.1.2 mistune 3.0.2 mypy-extensions 1.0.0 narwhals 1.8.3 nbclassic 1.1.0 nbclient 0.10.0 nbconvert 7.16.4 nbformat 5.10.4 nest-asyncio 1.6.0 notebook 7.2.2 notebook_shim 0.2.4 numpy 2.1.1 overrides 7.7.0 packaging 24.1 pandas 2.2.3 pandas-flavor 0.6.0 pandocfilters 1.5.1 panel 1.5.0 param 2.1.1 parso 0.8.4 pathspec 0.12.1 patsy 0.5.6 pexpect 4.9.0 pillow 10.4.0 pingouin 0.5.5 pip 24.2 platformdirs 4.3.6 prometheus_client 0.21.0 prompt_toolkit 3.0.48 psutil 6.0.0 ptyprocess 0.7.0 pure_eval 0.2.3 py2vega 0.6.1 pyccolo 0.0.54 pycparser 2.22 Pygments 2.18.0 pyparsing 3.1.4 python-dateutil 2.9.0.post0 python-json-logger 2.0.7 pytz 2024.2 pyviz_comms 3.0.3 PyYAML 6.0.2 pyzmq 26.2.0 referencing 0.35.1 requests 2.32.3 rfc3339-validator 0.1.4 rfc3986-validator 0.1.1 rpds-py 0.20.0 scikit-learn 1.5.2 scipy 1.14.1 seaborn 0.13.2 Send2Trash 1.8.3 setuptools 75.1.0 six 1.16.0 sniffio 1.3.1 soupsieve 2.6 stack-data 0.6.3 statsmodels 0.14.3 tabulate 0.9.0 terminado 0.18.1 threadpoolctl 3.5.0 tinycss2 1.3.0 tornado 6.4.1 tqdm 4.66.5 traitlets 5.14.3 traittypes 0.2.1 types-python-dateutil 2.9.0.20240906 typing_extensions 4.12.2 tzdata 2024.2 uc-micro-py 1.0.3 uri-template 1.3.0 urllib3 2.2.3 voila 0.5.7 wcwidth 0.2.13 webcolors 24.8.0 webencodings 0.5.1 websocket-client 1.8.0 websockets 13.1 wheel 0.44.0 widgetsnbextension 4.0.13 xarray 2024.9.0 xyzservices 2024.9.0 ``` </details> #### Description of expected behavior and the observed behavior Is there any argument to set background into defaut HTML render background (gray-darkgary) for `pd.DataFrame` in Jupyter dark theme. The defaut black-white background is less readable, especially combined with `pandas.DataFrame.style`. This is no problem with light theme. 
#### Complete, minimal, self-contained example code that reproduces the issue ```python import panel as pn import pingouin as pg import pandas as pd from pandas.io.formats.style import Styler import numpy as np pn.extension() ``` ```python data = pg.read_dataset("mixed_anova") data_style: Styler = data.style data = data_style.background_gradient(cmap='Blues', subset='Scores') data_pn = pn.pane.DataFrame(data, max_height=200, sizing_mode="stretch_both") data_pn ``` ```python data ``` #### Screenshots or screencasts of the bug in action <img width="811" alt="image" src="https://github.com/user-attachments/assets/2a6b44d7-f420-44ff-a5ce-f1797c979237"> <img width="835" alt="image" src="https://github.com/user-attachments/assets/41b7d30f-0aa7-4bde-b0ff-de820d7f935f">
1medium
Title: [Feature Request]: Add Hypernetwork Refresh API for API Mode. Body: ### Is there an existing issue for this? - [x] I have searched the existing issues and checked the recent builds/commits ### What would your feature do ? Hello, I've recently been working with Stable Diffusion and my project is deployed on a server, necessitating operation via API mode. I noticed that the API includes functions like refresh-checkpoints / reload-checkpoint. However, I've found there's no API for updating the hypernetwork list. This absence means that when new .pt files are added during service operation, they cannot be immediately read, and a complete service restart is required. As an aside, I noticed there's a refresh button in the web API, but I couldn't find a corresponding API endpoint. <img width="599" alt="image" src="https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/83401245/d3ef66d1-4a65-4259-b613-12cfaa3ad8e4"> Lastly, I apologize as my English is not very strong and my coding skills are somewhat limited. I appreciate any guidance or advice. Thank you. my stable-diffusion-webui version : 1.4.1
1medium
Title: Upstreaming OpenMP changes discussion Body: This is just an attempt to start a discussion about what it would take to upstream the changes (or perhaps some another solution) for codon.
1medium
Title: Don't npm install on every serve Body: There is no reason to install all the node modules every time you run the app; it needlessly adds a ton of time to startup. There should be a Dockerfile for the frontend app that installs them once and then just runs the serve command.
1medium
Title: [Question] How to get logs like Zappa Body: Hi there I followed this tutorial to get FastAPI up into a Lambda function: https://adem.sh/blog/tutorial-fastapi-aws-lambda-serverless It seems to be working, but when I tail the logs (`sls logs --function app --stage test`), I see my 'hello' INFO log in there, but it's enclosed in a large block of other logging. It looks like the following: ``` START 2022-02-13 13:58:36,501 Event received. 2022-02-13 13:58:36,501 Waiting for application startup. 2022-02-13 13:58:36,501 LifespanCycleState.STARTUP: 'lifespan.startup.complete' event received from application. 2022-02-13 13:58:36,501 Application startup complete. 2022-02-13 13:58:36,501 HTTP cycle starting. 2022-02-13 13:58:36,502 hello 2022-02-13 13:58:36,502 HTTPCycleState.REQUEST: 'http.response.start' event received from application. 2022-02-13 13:58:36,502 HTTPCycleState.RESPONSE: 'http.response.body' event received from application. 2022-02-13 13:58:36,503 Waiting for application shutdown. 2022-02-13 13:58:36,503 LifespanCycleState.SHUTDOWN: 'lifespan.shutdown.complete' event received from application. END Duration: 3.55 ms Billed Duration: 4 ms Memory Size: 1024 MB Max Memory Used: 79 MB ``` What would I need to do to tail nice coloured logs like I'm used to with Flask/Zappa with the option to filter them? Ideally calls to each endpoint would be logged on a single line, my own log statements would be a single lines, and the uncaught exceptions would also be visible. Basically, I'd like to tail the cloud logs so that they look as similar to the local FastAPI logs as possible.
1medium
Title: test code in rst files Body: For each transformer, and also in the quickstart, we have code in rst files. I would like to introduce tests, so that when we make changes the tests highlight if something is broken and needs fixing. At the moment we need to check manually, and this will only get worse as we add more complicated tutorials in rst files.
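One low-effort way to get there, sketched under the assumption that the rst snippets are written as doctest-style examples (>>> prompts) or can be converted to them: the standard library's doctest can execute text files directly, and pytest can collect a small parametrised test over them. File paths below are placeholders for the real rst files.

```python
# test_docs.py - illustrative sketch, not the project's actual test layout.
import doctest

import pytest

RST_FILES = [
    "docs/quickstart.rst",
    # "docs/transformers/<each transformer>.rst",
]


@pytest.mark.parametrize("path", RST_FILES)
def test_rst_examples(path):
    # Runs every >>> example in the file and fails on mismatched output.
    results = doctest.testfile(path, module_relative=False, verbose=False)
    assert results.failed == 0
```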
1medium
Title: AttributeError: '_WindowsSelectorEventLoop' object has no attribute 'acquire' Body:

```python
async with pool.acquire() as conn:
    async with conn.cursor() as cur:
        # await cur.execute("SELECT 42;")
        insert_sql = "insert into article_test(title) values('{}') ".format(title)
        await cur.execute(insert_sql)
```
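For reference, the error text suggests that `pool` is actually the event loop object rather than an aiomysql pool; that is only my reading, since the snippet does not show how `pool` was created. A self-contained sketch of the intended usage follows (connection settings are placeholders, and the query is parameterised instead of built with `.format()` to avoid SQL injection):

```python
import asyncio

import aiomysql


async def insert_title(title: str) -> None:
    # create_pool must be awaited; passing the wrong object around (or never
    # awaiting it) leaves `pool` as something that has no .acquire().
    pool = await aiomysql.create_pool(
        host="127.0.0.1", port=3306,
        user="user", password="password", db="test",  # placeholders
        autocommit=True,
    )
    try:
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                # Parameterised query instead of str.format().
                await cur.execute(
                    "insert into article_test(title) values (%s)", (title,)
                )
    finally:
        pool.close()
        await pool.wait_closed()


asyncio.run(insert_title("hello"))
```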
1medium
Title: Flagged by Imperva Body: I've been getting "Error 15" after trying to login to "https://driverpracticaltest.dvsa.gov.uk/login". This is the same issue as #690, however the suggested workaround on that thread no longer works as you don't get an instant captcha, so no cookies to grab. I'm able to get the login page fine but as soon as I click login I get flagged. Any suggestions? Here is my requirements.txt: `undetected-chromedriver==3.4.6 anticaptchaofficial==1.0.29 backcall==0.2.0 cachetools==4.2.0 certifi==2020.12.5 chardet==4.0.0 click==7.1.2 decorator==4.4.2 Flask==1.1.2 google-api-core==1.25.0 google-api-python-client==1.12.8 google-auth==1.24.0 google-auth-httplib2==0.0.4 google-auth-oauthlib==0.4.2 googleapis-common-protos==1.52.0 gunicorn==20.0.4 httplib2==0.18.1 idna==2.10 ipython==7.16.1 ipython-genutils==0.2.0 itsdangerous==1.1.0 jedi==0.18.0 Jinja2==2.11.3 MarkupSafe==1.1.1 mysql-connector==2.2.9 oauthlib==3.1.0 parso==0.8.1 pexpect==4.8.0 pickleshare==0.7.5 prompt-toolkit==3.0.13 protobuf==3.14.0 ptyprocess==0.7.0 py==1.10.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 Pygments==2.7.4 PyJWT==1.7.1 python-dotenv==0.15.0 pytz==2020.5 random-user-agent==1.0.1 requests==2.25.1 requests-oauthlib==1.3.0 rsa==4.7 selenium==3.141.0 six==1.15.0 SQLAlchemy==1.3.22 traitlets==4.3.3 twilio==6.53.0 uritemplate==3.0.1 urllib3==1.26.2 wcwidth==0.2.5 Werkzeug==1.0.1 seleniumwire==4.6.1`
1medium
Title: The reasoning result is abnormal Body: ### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report. ### YOLOv5 Component _No response_ ### Bug I'm training on a custom dataset that has only one category,The training is all normal, and the final val result visualization is also normal. ![image](https://github.com/ultralytics/yolov5/assets/72087870/1720fb0a-6ca5-48b4-9322-68d00b843fc8) But when I use the trained model for inference: > python detect.py --weights runs/train/exp/weights/last.pt --data data/bdd100k.yaml --source /root/yolov5/datasets/bdd100k/images/val log: (base) root@autodl-container-23c2469e43-849f7d78:~/yolov5# python detect.py --weights runs/train/exp/weights/last.pt --data data/bdd100k.yaml --source /root/yolov5/datasets/bdd100k/images/train --conf_thres=0.5 --iou_thres=0.1 usage: detect.py [-h] [--weights WEIGHTS [WEIGHTS ...]] [--source SOURCE] [--data DATA] [--imgsz IMGSZ [IMGSZ ...]] [--conf-thres CONF_THRES] [--iou-thres IOU_THRES] [--max-det MAX_DET] [--device DEVICE] [--view-img] [--save-txt] [--save-csv] [--save-conf] [--save-crop] [--nosave] [--classes CLASSES [CLASSES ...]] [--agnostic-nms] [--augment] [--visualize] [--update] [--project PROJECT] [--name NAME] [--exist-ok] [--line-thickness LINE_THICKNESS] [--hide-labels] [--hide-conf] [--half] [--dnn] [--vid-stride VID_STRIDE] detect.py: error: unrecognized arguments: --conf_thres=0.5 --iou_thres=0.1 (base) root@autodl-container-23c2469e43-849f7d78:~/yolov5# python detect.py --weights runs/train/exp/weights/last.pt --data data/bdd100k.yaml --source /root/yolov5/datasets/bdd100k/images/val detect: weights=['runs/train/exp/weights/last.pt'], source=/root/yolov5/datasets/bdd100k/images/val, data=data/bdd100k.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 🚀 v7.0-290-gb2ffe055 Python-3.8.10 torch-1.9.0+cu111 CUDA:0 (NVIDIA GeForce RTX 4090, 24217MiB) Fusing layers... As a result, many bboxes appeared that should not have appeared: ![image](https://github.com/ultralytics/yolov5/assets/72087870/4cb4ce5f-1cdb-4205-bc4b-bf1d43a39a96) ![image](https://github.com/ultralytics/yolov5/assets/72087870/912d5276-69fd-4741-a3ce-477d178dc147) ![image](https://github.com/ultralytics/yolov5/assets/72087870/f3ed3239-bee5-46a4-a48b-6eb44a998ee0) How should I deal with this problem? ### Environment _No response_ ### Minimal Reproducible Example _No response_ ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
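One detail visible in the pasted log, offered as an observation rather than a diagnosis of the extra boxes: the first `detect.py` invocation failed because the flags were spelled with underscores (`--conf_thres`, `--iou_thres`), so the run that produced these images used the defaults (conf 0.25, IoU 0.45). A hedged sketch of rerunning inference with the stricter thresholds from Python, assuming you are inside the YOLOv5 repository where `detect.run` accepts these keyword arguments:

```python
# Run from the yolov5 repository root.
import detect

detect.run(
    weights="runs/train/exp/weights/last.pt",
    source="/root/yolov5/datasets/bdd100k/images/val",
    data="data/bdd100k.yaml",
    conf_thres=0.5,  # CLI equivalent: --conf-thres 0.5 (hyphens, not underscores)
    iou_thres=0.1,   # CLI equivalent: --iou-thres 0.1
)
```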
1medium
Title: TypeError: cannot unpack non-iterable NoneType object Body: ## 🐛 Bug I met this bug when I try to load a fairseq translation model. I could see the same issue is still open in fairseq GitHub repo. I tried installing torch and torchvision packages as mentioned in the below link but still I am facing the same issue. [](https://github.com/facebookresearch/fairseq/issues/4214) ### To Reproduce Using your [colab tutorial](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/pytorch_fairseq_translation.ipynb) will also produce the same bug. ``` Using cache found in /root/.cache/torch/hub/pytorch_fairseq_main --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-21-61d0ed709261> in <module> 1 # Load translation model 2 # en2ru = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-ru.single_model', tokenizer='moses', bpe='fastbpe') ----> 3 ru2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.ru-en.single_model', tokenizer='moses', bpe='fastbpe') 7 frames /usr/local/lib/python3.7/dist-packages/torch/hub.py in load(repo_or_dir, model, source, trust_repo, force_reload, verbose, skip_validation, *args, **kwargs) /usr/local/lib/python3.7/dist-packages/torch/hub.py in _load_local(hubconf_dir, model, *args, **kwargs) /usr/local/lib/python3.7/dist-packages/torch/hub.py in _import_module(name, path) 87 return '[https://github.com/{}/{}/archive/{}.zip](https://github.com/%7B%7D/%7B%7D/archive/%7B%7D.zip)'.format(repo_owner, repo_name, branch) 88 ---> 89 90 def _load_attr_from_module(module, func_name): 91 # Check if callable is defined in the module /usr/lib/python3.7/importlib/_bootstrap_external.py in exec_module(self, module) /usr/lib/python3.7/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) ~/.cache/torch/hub/pytorch_fairseq_main/hubconf.py in <module> 37 38 # only do fairseq imports after checking for dependencies ---> 39 from fairseq.hub_utils import ( # noqa; noqa 40 BPEHubInterface as bpe, 41 TokenizerHubInterface as tokenizer, ~/.cache/torch/hub/pytorch_fairseq_main/fairseq/__init__.py in <module> 31 hydra_init() 32 ---> 33 import fairseq.criterions # noqa 34 import fairseq.distributed # noqa 35 import fairseq.models # noqa ~/.cache/torch/hub/pytorch_fairseq_main/fairseq/criterions/__init__.py in <module> 22 CRITERION_DATACLASS_REGISTRY, 23 ) = registry.setup_registry( ---> 24 "--criterion", base_class=FairseqCriterion, default="cross_entropy" 25 ) 26 TypeError: cannot unpack non-iterable NoneType object ``` #### Code sample Using your [colab tutorial](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/pytorch_fairseq_translation.ipynb) will also produce the same bug. ### Expected behavior successfully load the translation model ### Environment - PyTorch Version (e.g., 1.6) - Google colab
1medium
Title: Irrelevant fields appear for Rendition (they are Image-specific fields) Body: Fields like title, focal_point_x, focal_point_y, focal_point_width, focal_point_height, file_hash, collection, and tags are Image-specific, but they appear on the Rendition type and obviously cause errors. They come from `BaseImageObjectType`, whose main purpose appears to be serving as a base class for both Image and Rendition, due to this: https://github.com/torchbox/wagtail-grapple/blob/2e7cb3e23f81c3c65e1fddc811aeaed99cd7743c/grapple/types/images.py#L62 https://github.com/torchbox/wagtail-grapple/blob/2e7cb3e23f81c3c65e1fddc811aeaed99cd7743c/grapple/types/images.py#L85
1medium
Title: Set create_db callback's parameter to False by default Body: Backend - InfluxDB v1.8

When authorizing with credentials for a _non-admin_ user that has access to a single database inside the InfluxDB instance, like so:

```
from cryptofeed import FeedHandler
from cryptofeed.backends.influxdb import TradeInflux
from cryptofeed.defines import TRADES
from cryptofeed.exchanges import Coinbase


def main():
    f = FeedHandler()
    address = 'http://localhost:8086'
    db_name = 'some_db'
    username = 'some_user'
    password = 'some_pass'
    f.add_feed(Coinbase(channels=[TRADES], symbols=['BTC-USD'],
                        callbacks={TRADES: TradeInflux(address, db_name, username=username, password=password)}))
    f.run()


if __name__ == '__main__':
    main()
```

I got an error: `requests.exceptions.HTTPError: 403 Client Error: Forbidden for url`.

The problem is that this error is somewhat misleading in this context. The actual reason for it is not incorrect rights set for the user, or wrong credentials, but the default value of `create_db`, which is `True`. Since creating databases is an _admin_ privilege, a regular user gets a 403. If you disable it for such calls, like:

```
TradeInflux(address, db_name, create_db=False, username=username, password=password)
```

it works as expected. In fact this "feature request" is almost a no-op, I think. I just spent some time investigating the problem, so I'm writing it up here; maybe it will help someone.
0easy