Title: nesterov in LARS Body: https://github.com/rwightman/pytorch-image-models/blob/75144395739162a153dae5628320b85b5895634b/timm/optim/lars.py#L70-L73 Why is `nesterov` set to False when calling `__setstate__`?
1medium
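For context on the question above, the usual reason a PyTorch-style optimizer touches option defaults in `__setstate__` is backward compatibility with checkpoints pickled before the option existed. The sketch below illustrates that pattern with a minimal stand-in class; it is not the actual timm code.

```python
class Optimizer:
    """Minimal stand-in for a torch-style optimizer (not the real timm LARS)."""

    def __init__(self, nesterov=False):
        self.defaults = {"nesterov": nesterov}

    def __setstate__(self, state):
        self.__dict__.update(state)
        # Checkpoints saved before the `nesterov` option existed carry no such
        # key in `defaults`; restore a safe default instead of failing later.
        self.defaults.setdefault("nesterov", False)


# Simulate restoring a pickled state dict from an older version of the code.
old_state = {"defaults": {"momentum": 0.9}}  # no `nesterov` key
opt = Optimizer.__new__(Optimizer)
opt.__setstate__(old_state)
print(opt.defaults)  # {'momentum': 0.9, 'nesterov': False}
```

Setting it to `False` (rather than `True`) is the conservative choice, since it reproduces the behaviour of the code before the option was added.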
Title: SimpleJWT integration doesn't take into account the setting AUTH_HEADER_NAME Body: Hi, I just figured this out today because I had to use a custom header for SimpleJWT auth.

**Describe the bug**

SimpleJWT can be configured via a dict in the settings.py of a Django project:

```python
SIMPLE_JWT = {
    'AUTH_HEADER_NAME': "HTTP_X_TOKEN"  # translates to X-Token as header
}
```

But the current implementation doesn't take this setting into account:

```python
class SimpleJWTScheme(OpenApiAuthenticationExtension):
    target_class = 'rest_framework_simplejwt.authentication.JWTAuthentication'
    name = 'jwtAuth'

    def get_security_definition(self, auto_schema):
        from rest_framework_simplejwt.settings import api_settings

        if len(api_settings.AUTH_HEADER_TYPES) > 1:
            warn(
                f'OpenAPI3 can only have one "bearerFormat". JWT Settings specify '
                f'{api_settings.AUTH_HEADER_TYPES}. Using the first one.'
            )
        return {
            'type': 'http',
            'scheme': 'bearer',
            'bearerFormat': api_settings.AUTH_HEADER_TYPES[0],
        }
```

The return value should, I guess, become something like:

```python
{'type': 'apiKey', 'in': 'header', 'name': api_settings.SIMPLE_JWT['AUTH_HEADER_NAME']}
```

**To Reproduce**

Change the SimpleJWT header setting.

**Expected behavior**

The authentication scheme should follow the SimpleJWT settings. I may have time to make a PR if needed.
1medium
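The proposed fix can be sketched as a plain function, decoupled from Django so it runs standalone. The function name, the fallback values, and the WSGI-prefix handling (Django exposes the `X-Token` header as `HTTP_X_TOKEN`) are assumptions for illustration, not drf-spectacular's actual API:

```python
def jwt_security_definition(simple_jwt_settings: dict) -> dict:
    """Hypothetical sketch: emit an apiKey scheme when a custom header is
    configured, falling back to the stock bearer scheme otherwise."""
    header = simple_jwt_settings.get("AUTH_HEADER_NAME", "HTTP_AUTHORIZATION")
    if header != "HTTP_AUTHORIZATION":
        # Strip the WSGI "HTTP_" prefix and restore dashes: HTTP_X_TOKEN -> X-TOKEN
        name = header.removeprefix("HTTP_").replace("_", "-")
        return {"type": "apiKey", "in": "header", "name": name}
    types = simple_jwt_settings.get("AUTH_HEADER_TYPES", ("Bearer",))
    return {"type": "http", "scheme": "bearer", "bearerFormat": types[0]}


print(jwt_security_definition({"AUTH_HEADER_NAME": "HTTP_X_TOKEN"}))
# {'type': 'apiKey', 'in': 'header', 'name': 'X-TOKEN'}
```

HTTP header names are case-insensitive, so `X-TOKEN` vs `X-Token` should not matter to clients.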
Title: bug: Shutting down or closing Windrecorder during recording might cause the video file to be damaged Body: **Environment**: Win10 Home 22H2, AMD 5800H, recording encoder `AMD_h265`, recording bitrate `150kbps` **Bug description**: Shutting down the machine or closing Windrecorder while recording corrupts the video file; the OCR module still renames the file to `xxx-OCRED.mp4` as usual, but PotPlayer cannot open it. ![image](https://github.com/yuka-friends/Windrecorder/assets/54389220/0bc4826b-9ced-48b9-90e7-47b611183467) My guess is that recording directly into an mp4 container is what corrupts the file; you could try recording to mkv and remuxing it into mp4 before indexing (the approach OBS recommends). ![image](https://github.com/yuka-friends/Windrecorder/assets/54389220/9a51ec47-7be3-45a9-a40d-02c65cfd59e7) Edit: at the same bitrate, mkv video seems a bit blurrier than mp4.
1medium
Title: Publish arctic releases via conda-forge Body: Conda is increasingly used in the data science community, and conda-forge provides build infrastructure for many projects. The aim is to get arctic built automatically for conda: https://conda-forge.org/#about
1medium
Title: Update transcription notebook Body: Update transcription notebook to incorporate latest models available in transformers added with #362
1medium
Title: about save-txt in yolov5-seg Body: ### Search before asking

- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.

### Question

When activating "save-txt" in yolov5-seg.py, a txt with the coordinates of the predicted region is saved, but I found that the coordinates seem not to be in sequence. That is to say, when I use fillPoly in OpenCV, the coordinates seem unable to form a polygon like the one in the prediction. Is there a way to make the coordinates sequential? (In other words, the coordinates saved in the txt after enabling save-txt do not appear to be stored in order around the segmented region, so the region filled with OpenCV's fillPoly does not match the predicted one. Is there a way to put the coordinates in order?)

![QQ20240923-174819](https://github.com/user-attachments/assets/c991dea0-47bf-4dc1-93cb-d79697ce0493)

### Additional

_No response_
1medium
Title: Export to NCNN format no longer works with Ultralytics 8.3.71 and torch 2.6.0+cpu on Raspberry Pi 4 Body: ### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.

### Ultralytics YOLO Component

Export

### Bug

The latest version of `ultralytics 8.3.71` in conjunction with `torch-2.6.0+cpu` causes the `yolo export model=yolo11n.pt format=ncnn` command to no longer work. It results in the following output, and the model does not get converted to NCNN format. If you downgrade the libraries to `ultralytics 8.3.70` and `torch-2.5.1+cpu`, it does work. Can you please resolve this issue? Is there any way I can help?

```
yolo export model=my_model.pt format=ncnn
Ultralytics 8.3.71 Python-3.11.2 torch-2.6.0+cpu CPU (Cortex-A72)
YOLO11n summary (fused): 238 layers, 2,583,127 parameters, 0 gradients, 6.3 GFLOPs

PyTorch: starting from 'my_model.pt' with input shape (1, 3, 480, 480) BCHW and output shape(s) (1, 9, 4725) (5.2 MB)

TorchScript: starting export with torch 2.6.0+cpu...
Illegal instruction
```

![Image](https://github.com/user-attachments/assets/9794a459-f174-4f16-8fa8-662c3dc61909)

Thanks to the Ultralytics team for all your great work on this library!
### Environment

```
(venv) evan@raspberrypi:~/yolo/yolo3 $ yolo checks
Ultralytics 8.3.71 🚀 Python-3.11.2 torch-2.6.0+cpu CPU (Cortex-A72)
Setup complete ✅ (4 CPUs, 7.6 GB RAM, 12.0/27.8 GB disk)

OS          Linux-6.6.62+rpt-rpi-v8-aarch64-with-glibc2.36
Environment Linux
Python      3.11.2
Install     pip
RAM         7.63 GB
Disk        12.0/27.8 GB
CPU         Cortex-A72
CPU count   4
GPU         None
GPU count   None
CUDA        None

numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.6.0>=1.8.0
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```

### Minimal Reproducible Example

**Steps to reproduce**

This is all run on a Raspberry Pi 4b 8GB using 64-bit Raspberry Pi OS version 12 (Bookworm). The OS is freshly installed on an SD card using Raspberry Pi Imager. Open a terminal and run the following commands:

1. `sudo apt update && sudo apt upgrade -y`
2. `mkdir yolo && cd yolo`
3. `python3 -m venv venv`
4. `source venv/bin/activate`
5. `pip install ultralytics ncnn`
6. `yolo export model=yolo11n.pt format=ncnn`

The error will occur upon running that last command. The error no longer occurs if you downgrade using `pip install ultralytics==8.3.70 torch==2.5.1`.

### Additional

_No response_

### Are you willing to submit a PR?

- [ ] Yes I'd like to help by submitting a PR!
1medium
Title: Using custom model with different input size? (rgb: True) Body: I've successfully trained my own model, thank you so much for the guidance there, but now when I am trying to use the model I made with `rgb: true` (making it have an input of 3 channels) I get errors for size mismatches: ``` RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[1, 1, 64, 256] to have 3 channels, but got 1 channels instead ``` This may be due to the downloaded model, or something? I played with `download_enabled` and removing the downloaded model, that gives this error: ``` FileNotFoundError: Missing ./ocr_model/craft_mlt_25k.pth and downloads disabled ``` How do I use the reader **only using** my network? or am I trying to do something that doesn't make any sense? If my training is making great predictions, I'm confused on why I need another model? ```py reader = easyocr.Reader( ['en'], # here's where my custom_model.pth file is model_storage_directory="./ocr_model", # here's where my custom_model.py and custom_model.yaml live user_network_directory='./ocr_network', recog_network='custom_model' ) reader.readtext('test.jpg') ``` I believe I set `custom_model.yaml` up properly: ```yaml network_params: input_channel: 3 output_channel: 256 hidden_size: 256 imgH: 64 lang_list: - 'en' character_list: 0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz ``` ## EDIT Doing the exact same training but with `rgb: False` and everything works, woo! am I missing out on much without rgb?
1medium
Title: Enhancement of catalogs inheritance Body: The example below should work properly:

```python
"""Example of catalogs inheritance."""

import dependency_injector as di


class CatalogA(di.AbstractCatalog):
    """Example catalog A."""

    p1 = di.Provider()


class CatalogB(CatalogA):
    """Example catalog B."""

    p2 = di.Provider()


class CatalogC(CatalogB):
    """Example catalog C."""

    p3 = di.Provider()


# `di.AbstractCatalog.providers` attribute is a dict of all available providers,
# including current catalog providers and providers that are inherited
# from parent catalogs:
assert CatalogA.providers == dict(p1=CatalogA.p1)
assert CatalogB.providers == dict(p1=CatalogA.p1, p2=CatalogB.p2)
assert CatalogC.providers == dict(p1=CatalogA.p1, p2=CatalogB.p2, p3=CatalogC.p3)

# `di.AbstractCatalog.inherited_providers` attribute is a dict of all providers
# that are inherited from parent catalogs:
assert CatalogA.inherited_providers == dict()
assert CatalogB.inherited_providers == dict(p1=CatalogA.p1)
assert CatalogC.inherited_providers == dict(p1=CatalogA.p1, p2=CatalogB.p2)

# `di.AbstractCatalog.cls_providers` attribute is a dict of current catalog providers:
assert CatalogA.cls_providers == dict(p1=CatalogA.p1)
assert CatalogB.cls_providers == dict(p2=CatalogB.p2)
assert CatalogC.cls_providers == dict(p3=CatalogC.p3)
```
1medium
Title: AWX Office Hours - 11/12/24 Body: # AWX Office Hours

## Proposed agenda based on topics

- https://github.com/ansible/awx/pull/15627

## What

After a successful Contributor Summit in October 2023, one of the bits of feedback we got was to host a regular time for the Automation Controller (AWX) Team to be available to you folks in the AWX Community, so we are happy to announce a new regular video meeting. This kind of feedback loop is vital to the success of AWX, and the AWX team wants to make it as easy as possible for you - our community - to get involved.

## Where & When

Our next meeting will be held on Tuesday, November 12th, 2024 at [1500 UTC](https://dateful.com/time-zone-converter?t=15:00&tz=UTC)

* [Google Meet](https://meet.google.com/vyk-dfow-cfi)
* Via Phone PIN: 842522378 [Guide](https://support.google.com/meet/answer/9518557)

This meeting is held once a month, on the second Tuesday of the month, at [1500 UTC](https://dateful.com/time-zone-converter?t=15:00&tz=UTC)

## How

Add one topic per comment in this GitHub issue. If you don't have a GitHub account, jump on [#awx:ansible.com](https://matrix.to/#/#awx:ansible.com) on Matrix and we can add the topic for you.

## Talk with us

As well as the monthly video meeting, you can join the Community (including the development team) on Matrix Chat.

* Matrix: [#awx:ansible.com](https://matrix.to/#/#awx:ansible.com) (recommended)
* libera.chat IRC: `#ansible-awx` (if you are already set up on IRC)

The Matrix & IRC channels are bridged; you'll just have a better experience on Matrix.

## Links

[AWX YouTube Channel](https://www.youtube.com/@ansible-awx)
[Previous Meeting](#15319)
[Meeting recording]()

See you soon!
3misc
Title: [Suggestion] Docs Translation Body: I think gino is a useful project (IMO), and I have a great interest in it. If I were to translate the docs into Korean, would you accept my pull request? P.S. Happy New Year
1medium
Title: Removing related links at end of article/sidebar on news websites? Body: Over here in the Media Cloud project we're seeing poor performance on the content extraction task for a variety of pages that include links to other "related" stories at the end of article content. Our use case is trying to extract only article content as text. Do you have advice on tweaks to make to improve that performance? This might be the opposite of #518, because we do _not_ want related links as part of content. Here's sample code with real examples parsed in a way that looks very similar to our usage. The function returns true if the supplied text is included in the extracted content (the erroneous results, in our use case). Each of these incorrectly includes text that is part of a "related links" type callout that appears _after_ article content. Any advice appreciated.

```python
import requests
import trafilatura

MEDIA_CLOUD_USER_AGENT = 'Mozilla/5.0 (compatible; mediacloud academic archive; mediacloud.org)'

def is_text_in_webpage_content(txt: str, url: str) -> bool:
    req = requests.get(url, headers={'User-Agent': MEDIA_CLOUD_USER_AGENT}, timeout=30)
    parsed = trafilatura.bare_extraction(req.text, only_with_metadata=False, url=url,
                                         include_images=False, include_comments=False)
    content_text = parsed['text']
    return txt in content_text

print(is_text_in_webpage_content(
    'Thai Official',  # item on bottom of page in "Latest News" section
    'https://www.ibtimes.co.uk/falling-inflation-shifts-focus-when-ecb-could-cut-rates-1722106'))
print(is_text_in_webpage_content(
    'HIV from Terrence Higgins to Today',  # <li> under the "listen on sounds" banner after article
    'https://www.bbc.co.uk/sport/football/67640638'))
print(is_text_in_webpage_content(
    'Madhuri Dixit',  # title of an item in the featured movie area below the main content
    'https://timesofindia.indiatimes.com/videos/lifestyle/fashion/10-indian-saris-every-woman-should-have-in-her-wardrobe/videoshow/105809845.cms'))
print(is_text_in_webpage_content(
    'Immigration, Ukraine',  # title of an item in the "most popular" sidebar content
    'https://www.bfmtv.com/cote-d-azur/nice-25-personnes-expulsees-lors-d-operations-anti-squat-menees-dans-le-quartier-des-liserons_AN-202312150639.html'))
```
1medium
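Until the extractor itself handles these, one workaround is a post-processing heuristic over the extracted text. The sketch below is not a trafilatura API, just a generic suffix filter; the marker list and the "trailing third" threshold are assumptions you would tune per corpus (it may also be worth trying trafilatura's precision-oriented extraction options if your version supports them):

```python
import re

# Hypothetical post-processing heuristic: drop trailing lines that follow a
# "related links"-style section marker near the end of the extracted text.
MARKERS = re.compile(
    r"^(related (stories|news|articles)|latest news|most popular|read more)\b", re.I
)

def trim_related_links(text: str) -> str:
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if MARKERS.match(line.strip()):
            # Only cut if the marker sits in the trailing third of the document,
            # so an early legitimate mention is not mistaken for a footer block.
            if i > 2 * len(lines) // 3:
                return "\n".join(lines[:i]).rstrip()
    return text

article = "Real paragraph one.\n" * 9 + "Latest News\nThai Official visits\nOther teaser"
print(trim_related_links(article))  # the nine real paragraphs, teasers removed
```

This keeps the extractor call unchanged and only filters its output, so it composes with the `is_text_in_webpage_content` check above.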
Title: animation_options in Graph has incorrect default value ("ease" instead of "easing") Body: The defaultProps for `animation_options` are `{ frame: { redraw: false, }, transition: { duration: 750, ease: 'cubic-in-out', }, }`, but `Plotly.animate` takes an "easing" argument, not "ease". I don't see any warning in the console for this, so I don't think these arguments are being validated; I can put whatever I like in `animation_options` and never get warnings/errors. As an aside, it would be helpful if the `dash_core_components` [docs](https://dash.plotly.com/dash-core-components/graph) mentioned that `frame.duration` has to be set at least as long as `transition.duration`, or at least linked to [https://plotly.com/javascript/animations/](https://plotly.com/javascript/animations/), as it's not immediately clear that you can't just arbitrarily set `transition.duration` to higher values. In fact the default `frame.duration` is 500, so the 750 default value here is misleading (maybe just setting `frame.duration` to 750 in the default here would at least highlight to users that it needs to be set).
1medium
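Based on the report above, a corrected override for `animation_options` might look like the following (the key fix is `easing` instead of `ease`; setting `frame.duration` to match `transition.duration` follows the reporter's own suggestion and is an assumption, not a documented default):

```python
# Corrected animation_options: Plotly.animate expects "easing", not "ease".
animation_options = {
    "frame": {
        "duration": 750,  # keep frame.duration >= transition.duration
        "redraw": False,
    },
    "transition": {
        "duration": 750,
        "easing": "cubic-in-out",
    },
}
```

Passed as `dcc.Graph(..., animation_options=animation_options)`, this should avoid the silently ignored `ease` key.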
Title: When training yolov3 with coco, how to merge dog and cat into animal categories? Body: When training yolov3 with coco, how to merge dog and cat into animal categories? Thx.
1medium
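One common approach to the question above is to remap class ids in the YOLO-format label files before training, collapsing several source classes into one merged class. This is a generic sketch; the class ids below are illustrative, so check them against your dataset's names file before using:

```python
# Hypothetical sketch: remap class ids in YOLO-format label lines so that
# several source classes (e.g. cat, dog) collapse into one merged "animal" class.
MERGE = {15: 0, 16: 0}  # source id -> merged id; verify ids in your names file

def remap_label_line(line, mapping):
    parts = line.split()
    cls = int(parts[0])
    if cls not in mapping:
        return None  # drop boxes for classes you are not training on
    return " ".join([str(mapping[cls])] + parts[1:])

print(remap_label_line("16 0.5 0.5 0.2 0.3", MERGE))  # "0 0.5 0.5 0.2 0.3"
print(remap_label_line("2 0.1 0.1 0.1 0.1", MERGE))   # None
```

You would run this over every `*.txt` label file and update the class names / class count in the data config accordingly.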
Title: How to add error message when `UNIQUE constraint failed` Body: When I import twice I get a long `Traceback (most recent call last):`

```py
class Employee(models.Model):
    id_number = models.CharField(max_length=180, null=True, blank=True, unique=True)
```

I want to show an error like the admin's default error message, e.g. `Employee with this id_number already exists`.
1medium
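The general pattern behind the request is to catch the database's integrity error and replace it with a human-readable message, which is what the Django admin does via model validation. A minimal stand-alone illustration using stdlib sqlite3 (the message wording is an assumption mirroring the admin's style, not django-import-export's API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id_number TEXT UNIQUE)")
conn.execute("INSERT INTO employee VALUES ('A1')")

# Turn the raw IntegrityError into a friendly message instead of a traceback.
try:
    conn.execute("INSERT INTO employee VALUES ('A1')")  # duplicate id_number
except sqlite3.IntegrityError:
    message = "Employee with this id_number already exists."
print(message)
```

In an import pipeline, the same try/except would wrap the per-row save so a duplicate row is reported (or skipped) rather than aborting the whole import.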
Title: Can .pth change to pytorch .pt file Body: Hi, may I know what kind of models you train in torch? I would like to change the .pth to a PyTorch .pt file, since the .pth extension conflicts with Python's path-configuration files, and I cannot open it on an Android phone. Thanks.
1medium
Title: Issues while running Body: Here's the output:

```
pi@raspberrypi:~/sherlock $ python3 sherlock
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/pi/sherlock/sherlock/__main__.py", line 21, in <module>
    import sherlock
  File "/home/pi/sherlock/sherlock/sherlock.py", line 12, in <module>
    import pandas as pd
  File "/home/pi/.local/lib/python3.9/site-packages/pandas/__init__.py", line 16, in <module>
    raise ImportError(
ImportError: Unable to import required dependencies:
numpy:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy C-extensions failed. This error can happen for many reasons,
often due to issues with your setup or how NumPy was installed.

We have compiled some common reasons and troubleshooting tips at:

    https://numpy.org/devdocs/user/troubleshooting-importerror.html

Please note and check the following:

  * The Python version is: Python3.9 from "/usr/bin/python3"
  * The NumPy version is: "1.23.5"

and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.

Original error was: libcblas.so.3: cannot open shared object file: No such file or directory
```

Please let me know how to solve it.
1medium
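On Debian-based systems like Raspberry Pi OS, the missing `libcblas.so.3` usually means the system BLAS library isn't installed. A commonly suggested fix (package names are the Debian ones referenced by the numpy troubleshooting page linked in the error; verify for your release):

```shell
# Provide the BLAS shared library that numpy's C extensions link against
sudo apt-get update
sudo apt-get install -y libatlas-base-dev

# If the error persists, reinstall numpy so it picks up the system libraries
pip3 install --no-cache-dir --force-reinstall numpy
```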
Title: Why slim.flatten(net) before slim.dropout(net)? Body: When I read the model code, I found there is a `slim.flatten(net)` layer before `net = slim.dropout(net, dropout_keep_prob, is_training=is_training, scope='Dropout')`; that's different from the paper. Sorry, I'm new to this area. Thank you very much.
1medium
Title: [Request]: Can we please add the support to extend the pytest-html to pytest-bdd framework? Body: This plugin already works for the pytest-bdd framework; I see the passed/failed/skipped results etc. However, I am wondering if we can extend support of this plugin for the pytest-bdd framework, so that the report also shows the Feature and its Gherkin steps? Any suggestions/thoughts are highly appreciated.
1medium
Title: The second-order analytical gradients are not all zero as described in the article. Body: I printed the value of the second derivative and found that it is not zero. I understand that theoretically the second derivative of trilinear interpolation should be 0, but why are the results of the code implementation inconsistent?

```python
gradient = torch.autograd.grad(sdf.sum(), x, create_graph=True)[0]
hessian = torch.autograd.grad(gradient.sum(), x, create_graph=True)[0]
```
1medium
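A likely part of the answer: piecewise-(tri)linear interpolation has zero second derivative only *strictly inside* a grid cell; at the cell boundaries the first derivative jumps, so the second derivative is a spike there, and floating-point evaluation near (or at) those knots yields nonzero values. The 1D sketch below (pure Python, no autograd) demonstrates both regimes via finite differences; the knot values are arbitrary illustrative numbers:

```python
# Piecewise-linear interpolation on knots x = [0, 1, 2].
knots_x = [0.0, 1.0, 2.0]
knots_y = [0.0, 3.0, 1.0]

def lerp(x):
    i = 0 if x < 1.0 else 1
    t = (x - knots_x[i]) / (knots_x[i + 1] - knots_x[i])
    return knots_y[i] * (1 - t) + knots_y[i + 1] * t

def second_diff(f, x, h=1e-4):
    # Central finite-difference estimate of the second derivative.
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

print(second_diff(lerp, 0.5))  # ~0 strictly inside a cell
print(second_diff(lerp, 1.0))  # large spike: the stencil straddles the knot
```

The same reasoning applies cell-wise in 3D, so an autograd Hessian of trilinear interpolation is zero almost everywhere but not identically zero when query points fall on (or numerically close to) voxel boundaries.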
Title: Python package Body: Would you be opposed to turning this into a python package? Something like this: https://github.com/monkeyhippies/xlnet/commit/f0471a242ed5dad5c4be7602b17dd0a96ad6b671 Or is it meant to just be a repo of helpful scripts?
1medium
Title: TimeSeriesForecaster: Predict function returns ValueError Body: Hi, I've been using AutoKeras for time series forecasting, but once the model has been trained and I apply the test data, it raises: "ValueError: The prediction data requires the original training data to make predictions on subsequent data points", yet I can't pass the test data and the training data as separate arguments. Do I need to append the test data to the training data for the model to predict, or is there something I'm missing?

Setup Details
- OS type and version: Windows 10
- Python: 3.9
- autokeras: 1.0.16
- keras-tuner: 1.0.4
- scikit-learn: 1.0
- numpy: 1.19.5
- pandas: 1.3.4
- tensorflow: 2.5.0 (GPU)
1medium
Title: [BUG] Can NOT run deeplake python library Body: ### Severity

None

### Current Behavior

Cannot run the deeplake python library. I already tried fresh deployments on different versions of python, langchain and deeplake.

### Steps to Reproduce

- Use conda to create and activate a new virtual environment
- Install all python libraries:
  > `pip install langchain==0.0.208 deeplake openai tiktoken`
  > or `pip install langchain deeplake openai tiktoken`
  > or `pip install langchain==0.0.208 deeplake==3.8.2 openai tiktoken`
- Try to import deeplake; all attempts failed with the following traceback:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/jamesou/Documents/Projects/langchain/Learning/deeplake.py", line 28, in <module>
    db = DeepLake.from_documents(docs, dataset_path=dataset_path, embedding=OpenAIEmbeddings())
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jamesou/miniforge3/envs/deep/lib/python3.11/site-packages/langchain/vectorstores/base.py", line 317, in from_documents
    return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jamesou/miniforge3/envs/deep/lib/python3.11/site-packages/langchain/vectorstores/deeplake.py", line 736, in from_texts
    deeplake_dataset = cls(
                       ^^^^
  File "/Users/jamesou/miniforge3/envs/deep/lib/python3.11/site-packages/langchain/vectorstores/deeplake.py", line 123, in __init__
    raise ValueError(
ValueError: Could not import deeplake python package. Please install it with `pip install deeplake`.
```

### Expected/Desired Behavior

It should be able to run successfully.

### Python Version

3.11.1 or 3.11.6

### OS

OSX 13.6

### IDE

VS Code

### Packages

_No response_

### Additional Context

_No response_

### Possible Solution

_No response_

### Are you willing to submit a PR?

- [ ] I'm willing to submit a PR (Thank you!)
1medium
Title: Looking for a quick answer? Not sure if you have found something new ? Check our Discord Body: [https://discord.gg/n3g5puF](https://discord.gg/n3g5puF) :)
3misc
Title: Dashboard loading stuck in docker Body: I'm using explainerdashboard to visualize models made by different AutoML solutions. When I run my application locally, everything works as intended. But when it runs in a docker environment, the dashboard gets stuck while loading a previously generated page, at the "Generating layout..." step: ![image](https://github.com/oegedijk/explainerdashboard/assets/79153884/be8eba33-e6bf-4f65-a723-c52b9bb2b415) This does not happen with all AutoML models, and when I restart the docker container manually, the dashboard finally boots properly. I'm currently not sure where to look for this; maybe you have some ideas?
1medium
Title: Partial result returned in range.api.special_cells() Body: #### OS (e.g. Windows 10 or macOS Sierra)

MacOS 12.5.1

#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)

Python 3.9.5, xlwings 0.27.10, Microsoft Excel for Mac v16.62

#### Describe your issue (incl. Traceback!)

I'm having trouble using `range.api.special_cells()`; I'm not sure if it is a bug in appscript. In order to make a comparison, I double-checked in both VBA and xlwings. The used range is "$A5:$IT4165", and I filtered data manually, which should leave 3789 rows visible. Here is the code in Excel VBA:

```
MsgBox Sheet1.UsedRange.SpecialCells(xlCellTypeVisible).Areas.Count
MsgBox Sheet1.UsedRange.SpecialCells(xlCellTypeVisible).Areas(35).Address
```

It shows the visible range has been split into 35 areas, and the last one points to the last block "$A4152:$IT$4158". Here is the code in xlwings:

```
sh1.api.used_range.special_cells(type=12).areas()
sh1.api.used_range.special_cells(type=12).areas.count()
sh1.api.used_range.special_cells(type=12).get_address()
```

The first line of code prints a list of returned areas, but the result was truncated: the list length is 17. I failed to execute the second line of code; it raised a CommandError exception with the message "Parameter error". I tried `areas.count.get()`, and it prompts "no attribute 'get'". With the third line, I got a list of addresses, the same result as with the first line of code. Did I miss something in xlwings?
2hard
Title: Dateutil version bump Body: `python-dateutil` is pinned at 2.6.0, but it's starting to [conflict with some things](https://github.com/spulec/freezegun/issues/333) like `freezegun` that would like a newer version. It seems like dependabot has been fighting for this for a while too: https://github.com/Miserlou/Zappa/pulls?q=is%3Apr+dateutil+is%3Aclosed I couldn't locate much rationale for this being pinned; could it be bumped to `>=2.7.0`?
1medium
Title: I tried to update, but every time I try to extract vocals I can't because of this error Body: ![imagen_2023-06-23_104951010](https://github.com/Anjok07/ultimatevocalremovergui/assets/122219111/94744594-31cd-4bc9-a49e-2c6692c8f525) What can I do to stop getting this error? I know nothing about programming, btw. Log:

```
Last Error Received:

Process: Ensemble Mode

If this error persists, please contact the developers with the error details.

Raw Error Details:
PermissionError: "[WinError 5] Acceso denegado: 'D:\\STEMS\\Ensembled_Outputs_1687524557'"
Traceback Error: "
  File "UVR.py", line 4640, in process_start
  File "UVR.py", line 533, in __init__
"
Error Time Stamp [2023-06-23 08:49:16]

Full Application Settings:

vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: 4
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: False
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
```
1medium
Title: Support arrays of timestamp / timestamptz / date / time in SQLite Body: We don't currently support things like this in SQLite:

```python
class MyTable(Table):
    times = Array(Time())
```

SQLite doesn't have native support for array columns. The way we support them in Piccolo is by serialising the array into a string before storing it in the database, and deserialising it again when querying the row. We use JSON to do the serialisation / deserialisation, which doesn't support `datetime` / `date` / `time` out of the box. To support this, we'll need to create new column types for SQLite - like `ARRAY_TIME` / `ARRAY_TIMESTAMP` etc. When we read data from a row with one of these column types, we know we need to deserialise the values back into a list of Python objects. One of the reasons we need this functionality is because we're doing a lot of improvements to arrays in Piccolo Admin, and we often test on SQLite.
1medium
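The serialisation round-trip described above can be sketched with ISO 8601 strings as the JSON-safe representation (the function names here are illustrative, not Piccolo's actual API):

```python
import json
from datetime import time

# Sketch of the Array(Time()) round-trip for SQLite storage, assuming ISO
# strings as the JSON-safe representation.
def dump_time_array(values):
    return json.dumps([v.isoformat() for v in values])

def load_time_array(raw):
    return [time.fromisoformat(s) for s in json.loads(raw)]

stored = dump_time_array([time(9, 30), time(17, 0)])
print(stored)                    # '["09:30:00", "17:00:00"]'
print(load_time_array(stored))   # [datetime.time(9, 30), datetime.time(17, 0)]
```

The same shape works for `date` / `datetime` with `date.fromisoformat` / `datetime.fromisoformat`, which is presumably what the `ARRAY_DATE` / `ARRAY_TIMESTAMP` column types would dispatch to.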
Title: Can part-of-speech tagging be done without word segmentation? Body: Is it possible to perform POS tagging directly, without segmenting first? For example, I already have a lexicon and need to POS-tag its entries.
1medium
Title: If you have a GPU, you can speed up video composition using ffmpeg's hardware encoder with codec='h264_nvenc' Body: In video.py:

```
final_clip.write_videofile(combined_video_path, codec='h264_nvenc', threads=threads,
result.write_videofile(temp_output_file, codec='h264_nvenc', threads=params.n_threads or 10,
video_clip.write_videofile(output_file, codec='h264_nvenc', audio_codec="aac", threads=params.n_threads or 10,
```
1medium
Title: training tricks of HIST Body: Hi there, thanks for making the code of HIST public. I'd like to discuss two topics.

1. `iter_daily_shuffle` or `iter_daily`: Regarding the order of training samples fed to the model, I've noticed that the days are shuffled by default (`iter_daily_shuffle`). When I changed the setting to train on samples day by day (`iter_daily`), I saw a huge performance decrease. I'm a little confused about that. Is the `iter_daily_shuffle` method suspected of some information leakage, since the model sees later samples first? Have you met the same situation when training the model?
2. Split of train/valid/test sets: Splitting the train/valid/test sets over different time windows introduces great randomness in model performance. I'm not sure if there is some workaround to overcome this kind of randomness and make the model more robust.

Any idea or help would be appreciated!
3misc
Title: support more than 2 percentiles to be passed for `predict_interval` Body: I'm working on creating a back-adapter for `skforecast` models in `sktime`, starting with `ForecasterAutoreg`. The goal is to provide an option to use `skforecast` as a backend alternative to the already existing `make_reduction`. During this, I noticed that `interval` is currently enforced to be a two-element list of values in `[0, 100)`, even though the values don't have to sum to 100. While that makes sense for a single confidence interval, enforcing the length does not seem necessary, since you are calculating the quantiles from bootstrapped predictions. https://github.com/JoaquinAmatRodrigo/skforecast/blob/db04f762d878c096b87b97de1a20f35025bbc437/skforecast/ForecasterAutoreg/ForecasterAutoreg.py#L990 If you remove the fixed-length requirement, it will be easier to integrate with `sktime`'s `predict_quantiles` methods; otherwise it needs an otherwise unnecessary for loop, or calling `predict_bootstrapping` directly and calculating the quantiles in the back-adapter itself. Can this be considered as a feature request? If you have any other suggestion to address this without this feature request, that will be much appreciated as well.
1medium
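The core of the request is that once bootstrapped predictions exist, any number of percentiles can be read off them in a single pass, so the two-element restriction is not fundamental. A self-contained sketch using a simple nearest-rank estimate (a real implementation would interpolate, e.g. via `numpy.percentile`):

```python
# Sketch: extract arbitrarily many percentiles from one set of bootstrap samples.
def percentiles(samples, qs):
    ordered = sorted(samples)
    out = []
    for q in qs:
        # nearest-rank estimate; numpy.percentile would interpolate instead
        idx = min(int(q / 100 * len(ordered)), len(ordered) - 1)
        out.append(ordered[idx])
    return out

boot = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4]  # bootstrapped predictions for one step
print(percentiles(boot, [5, 50, 95]))  # three percentiles from one bootstrap pass
```

This is exactly the per-step computation a `predict_quantiles` back-adapter would need, applied column-wise over the bootstrap matrix.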
Title: Add notebook for AWS CLI setup and common commands Body:
1medium
Title: triton implementation Body: Here's an implementation using triton. I think we can provide faster speeds. https://github.com/qwopqwop200/AutoGPTQ-triton
2hard
Title: Dalle3 calls started returning 403 errors today Body: **Describe the bug** It still worked during the day today, but tonight it suddenly stopped working. There is still plenty of API balance left.

```
{'error': {'message': '<!DOCTYPE html>\n<!--[if lt IE 7]> <html class="no-js ie6 oldie" lang="en-US"> <![endif]-->\n<!--[if IE 7]> <html class="no-js ie7 oldie" lang="en-US"> <![endif]-->\n<!--[if IE 8]> <html class="no-js ie8 oldie" lang="en-US"> <![endif]-->\n<!--[if gt IE 8]><!--> <html class="no-js" lang="en-US"> <!--<![endif]-->\n<head>\n<title>Attention Required! | Cloudflare</title>\n<meta charset="UTF-8" />\n<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />\n<meta http-equiv="X-UA-Compatible" content="IE=Edge" />\n<meta name="robots" content="noindex, nofollow" />\n<meta name="viewport" content="width=device-width,initial-scale=1" />\n<link rel="stylesheet" id="cf_styles-css" href="/cdn-cgi/styles/cf.errors.css" />\n<!--[if lt IE 9]><link rel="stylesheet" id=\'cf_styles-ie-css\' href="/cdn-cgi/styles/cf.errors.ie.css" /><![endif]-->\n<style>body{margin:0;padding:0}</style>\n\n\n<!--[if gte IE 10]><!-->\n<script>\n  if (!navigator.cookieEnabled) {\n    window.addEventListener(\'DOMContentLoaded\', function () {\n      var cookieEl = document.getElementById(\'cookie-alert\');\n      cookieEl.style.display = \'block\';\n    })\n  }\n</script>\n<!--<![endif]-->\n\n\n</head>\n<body>\n  <div id="cf-wrapper">\n    <div class="cf-alert cf-alert-error cf-cookie-error" id="cookie-alert" data-translate="enable_cookies">Please enable cookies.</div>\n    <div id="cf-error-details" class="cf-error-details-wrapper">\n      <div class="cf-wrapper cf-header cf-error-overview">\n        <h1 data-translate="block_headline">Sorry, you have been blocked</h1>\n        <h2 class="cf-subheadline"><span data-translate="unable_to_access">You are unable to access</span> api.openai.com</h2>\n      </div><!-- /.header -->\n\n      <div class="cf-section cf-highlight">\n        <div class="cf-wrapper">\n          <div class="cf-screenshot-container cf-screenshot-full">\n            \n            <span class="cf-no-screenshot error"></span>\n            \n          </div>\n        </div>\n      
</div><!-- /.captcha-container -->\n\n <div class="cf-section cf-wrapper">\n <div class="cf-columns two">\n <div class="cf-column">\n <h2 data-translate="blocked_why_headline">Why have I been blocked?</h2>\n\n <p data-translate="blocked_why_detail">This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.</p >\n </div>\n\n <div class="cf-column">\n <h2 data-translate="blocked_resolve_headline">What can I do to resolve this?</h2>\n\n <p data-translate="blocked_resolve_detail">You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.</p >\n </div>\n </div>\n </div><!-- /.section -->\n\n <div class="cf-error-footer cf-wrapper w-240 lg:w-full py-10 sm:py-4 sm:px-8 mx-auto text-center sm:text-left border-solid border-0 border-t border-gray-300">\n <p class="text-13">\n <span class="cf-footer-item sm:block sm:mb-1">Cloudflare Ray ID: <strong class="font-semibold">8c4202eb19ad15bc</strong></span>\n <span class="cf-footer-separator sm:hidden">&bull;</span>\n <span id="cf-footer-item-ip" class="cf-footer-item hidden sm:block sm:mb-1">\n Your IP:\n <button type="button" id="cf-footer-ip-reveal" class="cf-footer-ip-reveal-btn">Click to reveal</button>\n <span class="hidden" id="cf-footer-ip">43.153.99.59</span>\n <span class="cf-footer-separator sm:hidden">&bull;</span>\n </span>\n <span class="cf-footer-item sm:block sm:mb-1"><span>Performance &amp; security by</span> <a rel="noopener noreferrer" href=" " id="brand_link" target="_blank">Cloudflare</a ></span>\n \n </p >\n <script>(function(){function d(){var b=a.getElementById("cf-footer-item-ip"),c=a.getElementById("cf-footer-ip-reveal");b&&"classList"in 
b&&(b.classList.remove("hidden"),c.addEventListener("click",function(){c.classList.add("hidden");a.getElementById("cf-footer-ip").classList.remove("hidden")}))}var a=document;document.addEventListener&&a.addEventListener("DOMContentLoaded",d)})();</script>\n</div><!-- /.error-footer -->\n\n\n </div><!-- /#cf-error-details -->\n </div><!-- /#cf-wrapper -->\n\n <script>\n window._cf_translation = {};\n \n \n</script>\n\n</body>\n</html>\n', 'type': 'chatanywhere_error', 'param': None, 'code': '403 FORBIDDEN'}} ``` **To Reproduce 复现方法** 调用dalle3 **Screenshots 截图** If applicable, add screenshots to help explain your problem. **Tools or Programming Language 使用的工具或编程语言** Python openai库 **Additional context 其他内容** Add any other context about the problem here.
1medium
Title: Bug: RegressionRandomIndexComponent not robust Body: I have just managed to kill the whole dashboard because of an error with RegressionRandomIndexComponent, i.e. everything works fine without this component, but enabling it yields to the dashboard only displaying "Error loading layout.". The traceback is below. I fixed it by appending ".astype('float')" to the data which goes into RegressionExplainer. ``` Exception on /_dash-layout [GET] Traceback (most recent call last): File "c:\...\lib\site-packages\flask\app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "c:\...\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "c:\...\lib\site-packages\flask\app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "c:\...\lib\site-packages\flask\_compat.py", line 39, in reraise raise value File "c:\...\lib\site-packages\flask\app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "c:\...\lib\site-packages\flask\app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "c:\...\lib\site-packages\dash\dash.py", line 531, in serve_layout json.dumps(layout, cls=plotly.utils.PlotlyJSONEncoder), File "C:\lib\json\__init__.py", line 238, in dumps **kw).encode(obj) File "c:\...\lib\site-packages\_plotly_utils\utils.py", line 45, in encode encoded_o = super(PlotlyJSONEncoder, self).encode(o) File "C:\lib\json\encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "C:\lib\json\encoder.py", line 257, in iterencode return _iterencode(o, 0) TypeError: keys must be str, int, float, bool or None, not numpy.int64 ``` (this shows multiple times)
1medium
Title: Nice to have: label property for dccRadioItems Body: Creating a radio button with: ``` radioButton <- dccRadioItems( id = "radiobutton-selector", options = list( list("label" = "ggplotly", "value" = "ggplotly"), list("label" = "plotly", "value" = "plotly") ), value = "ggplotly") ``` results in: <img width="92" alt="Screen Shot 2019-09-30 at 3 05 47 PM" src="https://user-images.githubusercontent.com/20918264/65908630-d2a1e100-e394-11e9-9bac-89e8ac0bc369.png"> However, having a `label` or `title` property could save user from creating an additional component and aligning it with the radioselector. ``` radioButton <- dccRadioItems( id = "radiobutton-selector", *label = "Plot Type"* options = list( list("label" = "ggplotly", "value" = "ggplotly"), list("label" = "plotly", "value" = "plotly") ), value = "ggplotly") ``` desired result: <img width="97" alt="Screen Shot 2019-09-30 at 3 08 00 PM" src="https://user-images.githubusercontent.com/20918264/65909006-90c56a80-e395-11e9-804b-277f083fb16d.png">
1medium
Title: uvicorn adding its own server header to response Body: ### Discussed in https://github.com/encode/uvicorn/discussions/1435 <div type='discussions-op-text'> <sup>Originally posted by **udit-pandey** April 1, 2022</sup> I have added few headers using [secure package](https://pypi.org/project/secure/) in my fastapi application. I wanted to overwrite the server header(having default value "uvicorn") to something else. All the added headers using secure package are replicated in the responses except the server header which is coming twice(once with value given by me and other with uvicorn) as shown the postman api response below: ![image](https://user-images.githubusercontent.com/18224381/161194376-1ff92c6c-ed3e-41c6-9578-39993ea17179.png) I run my application using: `gunicorn -k uvicorn.workers.UvicornWorker ${APP_MODULE} --bind 0.0.0.0:80` Why is this header being added again by **uvicorn** even though it already exists?</div>
1medium
Title: AUTHENTICATION_WHITELIST not working Body: ** Description ** We are trying to override our DEFAULT_AUTHENTICATION_CLASSES that Swagger UI will use. We have SessionAuthentication and TokenAuthentication set in our Django settings. In our SPECTAULAR_SETTINGS we only want to use the TokenAuthentication, so we add it there as a single item list AUTHENTICATION_WHITELIST: ['rest_framework.authentication.TokenAuthentication']. However swagger when loaded still shows both authentication methods. **To Reproduce** 0.17.3 ![image](https://user-images.githubusercontent.com/10231323/131222228-b8f45a9c-fcc4-40e2-844b-c47f6bed492a.png) ![image](https://user-images.githubusercontent.com/10231323/131222239-f7361c7d-869a-4699-8475-4ce60790d16e.png) **Expected behavior** As per the description in the settings docs, I expected only Token Authentication to appear? ![image](https://user-images.githubusercontent.com/10231323/131222249-37f9665b-fb36-4cf5-91d3-d6ab50d25b61.png)
1medium
Title: Android里使用的get_text()获取button上的文字,在ios里有类似get_text这种方法吗? Body: **描述问题bug** Android里使用的get_text()获取button上的文字,在ios里有类似get_text这种方法吗? 这种方式,ios,不执行这行。求大神指点 poco("Window").offspring("Table").offspring("已关注").get_text() == "已关注": 元素信息如下: type : Button name : 已关注 visible : True isEnabled : b'1' label : b'\xe5\xb7\xb2\xe5\x85\xb3\xe6\xb3\xa8' identifier : b'' size : [0.16533333333333333, 0.035982008995502246] pos : [0.8773333333333333, 0.13343328335832083] zOrders : {'local': 0, 'global': 0} anchorPoint : [0.5, 0.5] **复现步骤** 1. get_text() 获取button上的文字name 2. 不执行该语句 **预期效果** 可以获取button的文字,对比name是否一致 **python 版本:** `python3.5` **airtest 版本:** `1.2.2` **设备:** - 型号: iphone7 - 系统: 12.3.1
1medium
Title: Issue with Incorrect Inference Results in INT8 Model After Converting Custom YOLOv11.pt to TensorRT Body: ### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question After converting a custom-trained YOLOv11.pt model to TensorRT in FP16, the inference results are correct. However, when converting the same model to TensorRT in INT8, the inference results are incorrect, with most of the scores being zero. When using the official YOLOv11s.pt model, both INT8 and FP16 TensorRT conversions produce correct detection results. The environment and calibration dataset for all conversions are the same. What could be causing this issue? ### Additional _No response_
2hard
Title: Timeout when resolving challenge Body: ### Have you checked our README? - [X] I have checked the README ### Have you followed our Troubleshooting? - [X] I have followed your Troubleshooting ### Is there already an issue for your problem? - [X] I have checked older issues, open and closed ### Have you checked the discussions? - [X] I have read the Discussions ### Environment ```markdown - FlareSolverr version: 3.2.2 - Last working FlareSolverr version: 3.2.2 - Operating system: Fedora Server - Are you using Docker: yes - FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 - Are you using a VPN: no - Are you using a Proxy: no - Are you using Captcha Solver: no - If using captcha solver, which one: - URL to test this issue: private, no acces outside lan ``` ### Description This morning when I checked radarr and sonarr I found an error on jackett about YGG (protected by cloudflare). When I go to the log jackett ask me to check flaresolver logs and I see that `The Cloudflare 'Verify you are human' button not found on the page` and a 500 Internal Error in logs ### Logged Error Messages ```text 2023-07-31 09:25:22 INFO ReqId 140094368220928 Incoming request => POST /v1 body: {'maxTimeout': 55000, 'cmd': 'request.get', 'url': 'https://www3.yggtorrent.wtf/engine/search?do=search&order=desc&sort=publish_date&category=all'} 2023-07-31 09:25:22 DEBUG ReqId 140094368220928 Launching web browser... 2023-07-31 09:25:23 DEBUG ReqId 140094368220928 Started executable: `/app/chromedriver` in a child process with pid: 7322 2023-07-31 09:25:23 DEBUG ReqId 140094368220928 New instance of webdriver has been created to perform the request 2023-07-31 09:25:23 DEBUG ReqId 140094341994240 Navigating to... 
https://www3.yggtorrent.wtf/engine/search?do=search&order=desc&sort=publish_date&category=all 2023-07-31 09:25:24 DEBUG ReqId 140094341994240 Response HTML: **** 2023-07-31 09:25:24 INFO ReqId 140094341994240 Challenge detected. Title found: Just a moment... 2023-07-31 09:25:24 DEBUG ReqId 140094341994240 Waiting for title (attempt 1): Just a moment... 2023-07-31 09:25:34 DEBUG ReqId 140094341994240 Timeout waiting for selector 2023-07-31 09:25:34 DEBUG ReqId 140094341994240 Try to find the Cloudflare verify checkbox 2023-07-31 09:25:35 DEBUG ReqId 140094341994240 Cloudflare verify checkbox found and clicked 2023-07-31 09:25:35 DEBUG ReqId 140094341994240 Try to find the Cloudflare 'Verify you are human' button 2023-07-31 09:25:35 DEBUG ReqId 140094341994240 The Cloudflare 'Verify you are human' button not found on the page 2023-07-31 09:25:37 DEBUG ReqId 140094341994240 Waiting for title (attempt 2): Just a moment... 2023-07-31 09:25:47 DEBUG ReqId 140094341994240 Timeout waiting for selector 2023-07-31 09:25:47 DEBUG ReqId 140094341994240 Try to find the Cloudflare verify checkbox 2023-07-31 09:25:47 DEBUG ReqId 140094341994240 Cloudflare verify checkbox found and clicked 2023-07-31 09:25:47 DEBUG ReqId 140094341994240 Try to find the Cloudflare 'Verify you are human' button 2023-07-31 09:25:48 DEBUG ReqId 140094341994240 The Cloudflare 'Verify you are human' button not found on the page 2023-07-31 09:25:50 DEBUG ReqId 140094341994240 Waiting for title (attempt 3): Just a moment... 
2023-07-31 09:26:00 DEBUG ReqId 140094341994240 Timeout waiting for selector 2023-07-31 09:26:00 DEBUG ReqId 140094341994240 Try to find the Cloudflare verify checkbox 2023-07-31 09:26:01 DEBUG ReqId 140094341994240 Cloudflare verify checkbox found and clicked 2023-07-31 09:26:01 DEBUG ReqId 140094341994240 Try to find the Cloudflare 'Verify you are human' button 2023-07-31 09:26:01 DEBUG ReqId 140094341994240 The Cloudflare 'Verify you are human' button not found on the page 2023-07-31 09:26:03 DEBUG ReqId 140094341994240 Waiting for title (attempt 4): Just a moment... 2023-07-31 09:26:13 DEBUG ReqId 140094341994240 Timeout waiting for selector 2023-07-31 09:26:13 DEBUG ReqId 140094341994240 Try to find the Cloudflare verify checkbox 2023-07-31 09:26:14 DEBUG ReqId 140094341994240 Cloudflare verify checkbox found and clicked 2023-07-31 09:26:14 DEBUG ReqId 140094341994240 Try to find the Cloudflare 'Verify you are human' button 2023-07-31 09:26:14 DEBUG ReqId 140094341994240 The Cloudflare 'Verify you are human' button not found on the page 2023-07-31 09:26:16 DEBUG ReqId 140094341994240 Waiting for title (attempt 5): Just a moment... 2023-07-31 09:26:19 DEBUG ReqId 140094368220928 A used instance of webdriver has been destroyed 2023-07-31 09:26:19 ERROR ReqId 140094368220928 Error: Error solving the challenge. Timeout after 55.0 seconds. 2023-07-31 09:26:19 DEBUG ReqId 140094368220928 Response => POST /v1 body: {'status': 'error', 'message': 'Error: Error solving the challenge. Timeout after 55.0 seconds.', 'startTimestamp': 1690788322982, 'endTimestamp': 1690788379023, 'version': '3.2.2'} 2023-07-31 09:26:19 INFO ReqId 140094368220928 Response in 56.041 s 2023-07-31 09:26:19 INFO ReqId 140094368220928 172.18.0.3 POST http://flaresolverr:8191/v1 500 Internal Server Error ``` ### Screenshots _No response_
1medium
Title: [DOC] Time Series Segmentation with sktime and ClaSP Notebook Example Contains Bug Body: #### Describe the issue linked to the documentation The ClaSP notebook example contains a bug: https://www.sktime.net/en/stable/examples/annotation/segmentation_with_clasp.html The `fmt` parameter is no longer present in the API. It appears to have been replaced with `dense_to_sparse` and `sparse_to_dense` methods. This is a minor issue, but it's the only annotation example, so I thought I would fix it. (immediate pull request to follow). <!-- Tell us about the confusion introduced in the documentation. --> #### Suggest a potential alternative/fix The fix is to remove the 'fmt' attribute from the `ClaSPSegmentation` call and then change the Output Format section. <!-- Tell us how we could improve the documentation in this regard. -->
0easy
Title: torch_dtype is actually used now? Body: ### System Info different transformers versions. see description ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Previously (v4.46.3, didn't check all versions), `torch_dtype` in the config was ignored, meaning that model weights would get loaded in fp32 by default (correct behavior for training). On latest transformers version (v4.49.0), it seems it is now used, and so the weights get loaded with whatever is in the checkpoint. Was this change intentional? I previously recall seeing somewhere in the code that you weren't going to make the change to actually use torch_dtype until v5, and I didn't see anything in release notes at a glance, although maybe I missed it. ``` In [1]: import transformers In [2]: llama1bcfg = transformers.AutoConfig.from_pretrained('meta-llama/Llama-3.2-1B-Instruct') In [3]: llama1b = transformers.AutoModelForCausalLM.from_config(llama1bcfg) In [4]: next(llama1b.parameters()).dtype Out[4]: torch.bfloat16 ``` ### Expected behavior Not actually sure, would like to confirm what you expect now.
1medium
Title: log.txt文件中的filepath是写死的绝对路径。能否改成(主程序)的相对路径 Body: log.txt文件中的filepath是写死的绝对路径。 通过log.txt生成的report.html 所读取的文件资源都是通过这些绝对路径获取。 一旦我需要把报告,打包给别人或更换位置(比如放nginx服务器上)那么都会读取不到文件资源而打开报告失败。 以下是 log.tx日志。 当我打包报告给别人是,打开报告还是会去以下filepath路径获取资源。 {"data": {"call_args": {"screen": "array([[[202, 167, 141],\n [200, 164, 140],\n [202, 160, 141],\n ...,\n [207, 167, 139],\n [208, 168, 140],\n [209, 169, 141]],\n\n [[203, 166, 140],\n [200, 162, 138],\n [201, 159, 140],\n ...,\n [209, 169, 141],\n [209, 169, 141],\n [209, 169, 141]],\n\n [[209, 169, 144],\n [207, 165, 142],\n [203, 160, 141],\n ...,\n [212, 172, 144],\n [210, 170, 142],\n [208, 168, 140]],\n\n ...,\n\n [[ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0],\n ...,\n [ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0]],\n\n [[ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0],\n ...,\n [ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0]],\n\n [[ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0],\n ...,\n [ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0]]], dtype=uint8)", "self": {"filename": "/home/gitlab-runner/builds/5VAdgMWr/0/FollowmeTest/appuitest/item/picture/android/CN/Home/pendingAccProcess.jpg", "_filepath": "/home/gitlab-runner/builds/5VAdgMWr/0/FollowmeTest/appuitest/item/picture/android/CN/Home/pendingAccProcess.jpg", "threshold": 0.7, "target_pos": 5, "record_pos": null, "resolution": [], "rgb": false, "__class__": "Template"}}, "start_time": 1561023054.4419475, "end_time": 1561023055.4388847, "name": "_cv_match", "ret": null}, "depth": 3, "tag": "function", "time": "2019-06-20 17:30:55"}
1medium
Title: How to describe the label Body: I want to add a description to the label
3misc
Title: Athena read_sql_query with pyarrow backend trims time in timestamp Body: ### Describe the bug Running this query: ``` wr.athena.read_sql_query("SELECT TIMESTAMP '2024-06-24 9:30:51'", dtype_backend='pyarrow') ``` yields `2024-06-24` instead of `2024-06-24 09:30:51`. It seems like `timestamp` from Athena is mapped to `date64[pyarrow]` instead of `timestamp[ns][pyarrow]` ### How to Reproduce ``` wr.athena.read_sql_query("SELECT TIMESTAMP '2024-06-24 9:30:51'", dtype_backend='pyarrow') ``` ### Expected behavior The result should be similar to running with numpy backend: ``` wr.athena.read_sql_query("SELECT TIMESTAMP '2024-06-24 9:30:51'") ``` which correctly gives back `2024-06-24 09:30:51` ### Your project _No response_ ### Screenshots _No response_ ### OS Linux ### Python version 3.12 ### AWS SDK for pandas version 3.8.0 ### Additional context _No response_
1medium
Title: auto-generated fastapi 0.89.0 error Body: Heads up: It looks like the [fastapi 0.89.0 release](https://github.com/tiangolo/fastapi/releases/tag/0.89.0) breaks the asgi code generated by piccolo ( `piccolo asgi new` ) ``` (venv) $ python main.py INFO: Will watch for changes in these directories: ['/...'] INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [47337] using StatReload ... File "/.../app.py", line 21, in <module> create_admin( File "/.../venv/lib/python3.9/site-packages/piccolo_admin/endpoints.py", line 1085, in create_admin return AdminRouter( File "/.../venv/lib/python3.9/site-packages/piccolo_admin/endpoints.py", line 523, in __init__ private_app.add_api_route( File "/.../venv/lib/python3.9/site-packages/fastapi/applications.py", line 304, in add_api_route self.router.add_api_route( File "/.../venv/lib/python3.9/site-packages/fastapi/routing.py", line 572, in add_api_route route = route_class( File "/.../venv/lib/python3.9/site-packages/fastapi/routing.py", line 400, in __init__ self.response_field = create_response_field( File "/.../venv/lib/python3.9/site-packages/fastapi/utils.py", line 90, in create_response_field raise fastapi.exceptions.FastAPIError( fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that <class 'starlette.responses.JSONResponse'> is a valid pydantic field type ``` Down-grading to `0.88.0` fixed the error.
1medium
Title: Click choropleth to reveal Altair chart Body: Hello folks, I see that I can click a marker to get a chart to pop up; however, is there a way to click within a choropleth area to show a chart as a pop up? Any nudge would be much appreciated!
1medium
Title: Expose typing hints as they are part of the API Body: Just recently upgraded to 2.2.1 from 1.11 and pyright gave me lots of type hint errors. ``` /home/builder/archivist/confirmer.py /home/builder/archivist/confirmer.py:84:15 - error: Argument of type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to parameter "on_giveup" of type "_Handler | Iterable[_Handler]" in function "on_predicate" Type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to type "_Handler | Iterable[_Handler]" Type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to type "_Handler" Parameter 1: type "Details" cannot be assigned to type "dict[str, Any]" "Details" is incompatible with "dict[str, Any]" "function" is incompatible with protocol "Iterable[_Handler]" "_iter_" is not present (reportGeneralTypeIssues) /home/builder/archivist/confirmer.py:122:15 - error: Argument of type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to parameter "on_giveup" of type "_Handler | Iterable[_Handler]" in function "on_predicate" Type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to type "_Handler | Iterable[_Handler]" Type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to type "_Handler" Parameter 1: type "Details" cannot be assigned to type "dict[str, Any]" "Details" is incompatible with "dict[str, Any]" "function" is incompatible with protocol "Iterable[_Handler]" "_iter_" is not present (reportGeneralTypeIssues) ``` In order to fix this I had to import a private directory viz: from backoff._typing import Details Surely type hints are part of the API and should not be private ?
1medium
Title: Static dimensions per track over an entire video Body: ### Actions before raising this issue - [X] I searched the existing issues and did not find anything similar. - [X] I read/searched [the docs](https://docs.cvat.ai/docs/) ### Is your feature request related to a problem? Please describe. Hi, We are using CVAT a lot for labeling, one of your key USPs for us is the good interpolation of bounding boxes and support for rotated bounding boxes. In our data domain, we label ships, bridges, buoys, etc... in topview videos, where the size of these objects is static. Sometimes these objects aren't clearly visible in early parts of the video, and we would like to change the dimensions of a track. Currently, we have to step through all the keyframes per object and manually adapt the size of the bounding box. Mostly, we just discard the bounding boxes and start over because that is faster. <img width="453" alt="Screenshot 2024-08-30 at 07 58 25" src="https://github.com/user-attachments/assets/a11b4147-c0af-4ba3-aabd-5ec02375e688"> ### Describe the solution you'd like It would save us A LOT of time, if we had the option to change the dimensions of an object over the entire track in a video at once. For example, be able to right-click a bounding box and select "propagate dimensions to track" or similar. Thank you for considering this request and keep up the good work here!
1medium
Title: Bug Report:cannot import name 'Buffer' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py) Body: ### Current Behaviour I get this error: "cannot import name 'Buffer' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)" when I try importing from ydata_profiling import ProfileReport on google colab. I wondered if you could help me. ### Expected Behaviour install nomally ### Data Description - ### Code that reproduces the bug ```Python from ydata_profiling import ProfileReport ``` ### pandas-profiling version v4.6.3 ### Dependencies ```Text python==3.10.12 pandas==1.5.3 numpy==1.23.5 ``` ### OS google colab ### Checklist - [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues) - [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report. - [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
1medium
Title: Hovering issue Body: Hello, thanks for the work. Short bug report. In safari, when you are hovering over the visualisation in google colab, the window in of the visualisation is scrolling down automatically, making it impossible to work with model_view and neuron_view. Works in chrome. Thanks again
1medium
Title: Background task considerations. Body: There are likely some considerations around backlogs and process shutdown to think about here. In Starlette, background tasks run within the request/response task. The response has been sent, and the server can continue to service other requests on the same connection, but it is also implicitly able to determine how many background tasks are running (response sent, but ASGI call not yet complete), and is able to effect graceful shutdown (for standard process restarts. Close connections, wait until background tasks are complete with a suitable timeout ceiling, then shutdown) A global queue attached to the app is valid too, but ideally you’d want to 1. Ensure that the ASGI lifecycle shutdown waits for the queue to empty. 2. Consider how you want to deal with task backlogs. Actions for now could be: 1. Attach background tasks to responses instead, ala Starlette. Everything’ll be handled for you then. 2. Don’t sweat the finer details for now, since it’s all alpha. We can come back to it.
1medium
Title: how to use .range() using supabase-py Body: please tell me how to use .range() using supabase-py
0easy
Title: [Question] Is there any way to prevent pdoc from documenting method objects? Body: #### Problem Description A clear and concise description of what the bug is. In my application, I use `typer` as a CLI framework (https://typer.tiangolo.com/). Part of the framework uses decorators with annotations to manage flags, help text, etc. In the example below you can see that the `@callback` decorator uses a class `CommonOptions` which provide a reusable set of options. When I run `pdoc` the `typer.Option()` method (`typer.models.OptionInfo`) is found to be an in-memory object, and as a result the `id` value changes on every run of `pdoc`. This is problematic as the documentation related to the files which use this decorator are never "correct" (the `id` values are always different). I would like to use `pdoc` in a pre-commit hook, but this behavior makes that infeasible. My question is: Is there any way to prevent this from happening? There are a few possible "solutions" that would be acceptable if possible, for example: 1. Is my code written "incorrectly" with relation to how `pdoc` works that isn't obvious to me? 2. Is there a flag that I can add to `pdoc` to write a static reference to the in-memory object? (I don't believe so) 3. Is there a decorator that will skip "following" the object, and just leave a static reference to the method? I have tried `@private` in the docstring, however I would really prefer to have the methods documented. Also, since `typer` uses the docstring for help text, this will show up in the help text and cause confusion. 4. Is there a way to tell `pdoc` to ignore / exclude an entire directory tree from documentation? So I don't have to add `@private` to all of the docstrings individually. 5. Is there some change that could implement one or more of the above solutions within `pdoc`? Including example code below in case there's something I can do to fix my code. I can try to produce a simpler example for reproducibility, if requested. 
code: ```python @callback(app, CommonOptions) def callback(ctx: typer.Context): """ Callback help text """ if ctx.invoked_subcommand is None: typer.help(ctx) ``` documentation: ```python @callback(app, CommonOptions) def callback( ctx: typer.models.Context, *, log_level: Annotated[Optional[str], <typer.models.OptionInfo object at 0x7ff4362c79b0>] = None, version: Annotated[Optional[bool], <typer.models.OptionInfo object at 0x7ff4362c4a10>] = False ): ``` #### Steps to reproduce the behavior: 1. See example code below. #### System Information Paste the output of "pdoc --version" here. ```shell ❯ pdoc --version pdoc: 14.3.0 Python: 3.12.1 Platform: Linux-6.2.0-1017-lowlatency-x86_64-with-glibc2.35 ``` #### Example source `CommonOptions` source: ```python """ source: https://github.com/tiangolo/typer/issues/153#issuecomment-1771421969 """ from dataclasses import dataclass, field from typing import Annotated, Optional import click import typer from lbctl.utils.helpers.version import Version, VersionCheckErrors @dataclass class CommonOptions: """ Dataclass defining CLI options used by all commands. 
@private - hide from pdoc output due to some dynamic objects """ instance = None ATTRNAME: str = field(default="common_params", metadata={"ignore": True}) def __post_init__(self): CommonOptions.instance = self @classmethod def from_context(cls, ctx: typer.Context) -> "CommonOptions": if (common_params_dict := getattr(ctx, "common_params", None)) is None: raise ValueError("Context missing common_params") return cls(**common_params_dict) def callback_log_level(cls, ctx: typer.Context, value: str): """Callback for log level.""" if value: from lbctl.utils.config import config config.configure_logger(console_log_level=value) def callback_version(cls, ctx: typer.Context, value: bool): """Callback for version.""" if value: try: ver = Version() ver.version(show_check=True, suggest_update=True) except (KeyboardInterrupt, click.exceptions.Abort): raise VersionCheckErrors.Aborted raise VersionCheckErrors.Checked log_level: Annotated[ Optional[str], typer.Option( "--log-level", "-L", help="Set log level for current command", callback=callback_log_level, ), ] = None version: Annotated[ Optional[bool], typer.Option("--version", "-V", help="Show version and exit", callback=callback_version), ] = False ``` decorator source ```python """ source: https://github.com/tiangolo/typer/issues/153#issuecomment-1771421969 """ from dataclasses import fields from functools import wraps from inspect import Parameter, signature from typing import TypeVar import typer from lbctl.common.options import CommonOptions OptionsType = TypeVar("OptionsType", bound="CommonOptions") def callback(typer_app: typer.Typer, options_type: OptionsType, *args, **kwargs): def decorator(__f): @wraps(__f) def wrapper(*__args, **__kwargs): if len(__args) > 0: raise RuntimeError("Positional arguments are not supported") __kwargs = _patch_wrapper_kwargs(options_type, **__kwargs) return __f(*__args, **__kwargs) _patch_command_sig(wrapper, options_type) return typer_app.callback(*args, **kwargs)(wrapper) return decorator 
def command(typer_app, options_type, *args, **kwargs): def decorator(__f): @wraps(__f) def wrapper(*__args, **__kwargs): if len(__args) > 0: raise RuntimeError("Positional arguments are not supported") __kwargs = _patch_wrapper_kwargs(options_type, **__kwargs) return __f(*__args, **__kwargs) _patch_command_sig(wrapper, options_type) return typer_app.command(*args, **kwargs)(wrapper) return decorator def _patch_wrapper_kwargs(options_type, **kwargs): if (ctx := kwargs.get("ctx")) is None: raise RuntimeError("Context should be provided") common_opts_params: dict = {} if options_type.instance is not None: common_opts_params.update(options_type.instance.__dict__) for field in fields(options_type): if field.metadata.get("ignore", False): continue value = kwargs.pop(field.name) if value == field.default: continue common_opts_params[field.name] = value options_type(**common_opts_params) setattr(ctx, options_type.ATTRNAME, common_opts_params) return {"ctx": ctx, **kwargs} def _patch_command_sig(__w, options_type) -> None: sig = signature(__w) new_parameters = sig.parameters.copy() options_type_fields = fields(options_type) for field in options_type_fields: if field.metadata.get("ignore", False): continue new_parameters[field.name] = Parameter( name=field.name, kind=Parameter.KEYWORD_ONLY, default=field.default, annotation=field.type, ) for kwarg in sig.parameters.values(): if kwarg.kind == Parameter.KEYWORD_ONLY and kwarg.name != "ctx": if kwarg.name not in new_parameters: new_parameters[kwarg.name] = kwarg.replace(default=kwarg.default) new_sig = sig.replace(parameters=tuple(new_parameters.values())) setattr(__w, "__signature__", new_sig) ```
1medium
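The record above hinges on `_patch_command_sig` rewriting a wrapper's `__signature__` so that Typer discovers the shared options. The core trick can be reproduced with only the standard library; the `greet` function and `log_level` option name below are illustrative stand-ins, not part of the original code.

```python
from inspect import Parameter, signature


def add_keyword_option(func, name, default):
    """Append a keyword-only parameter to func's *reported* signature.

    This mirrors what _patch_command_sig does for each CommonOptions field:
    frameworks that introspect signatures (like Typer) will now "see" the option.
    """
    sig = signature(func)
    params = list(sig.parameters.values())
    params.append(Parameter(name=name, kind=Parameter.KEYWORD_ONLY, default=default))
    func.__signature__ = sig.replace(parameters=params)
    return func


def greet(ctx):
    return ctx


add_keyword_option(greet, "log_level", None)
print(list(signature(greet).parameters))  # ['ctx', 'log_level']
```

`inspect.signature` honors a `__signature__` attribute, which is why patching it is enough to make the extra options appear on every command.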
Title: Docker containers on spaCy website are not working Body: <!-- NOTE: For questions or install related issues, please open a Discussion instead. --> ## How to reproduce the behaviour 1. Go to "https://spacy.io/usage/rule-based-matching" 2. Go to "Editable Code" block 3. Select "Run" button Returns: "Connecting failed. Please reload and try again." <!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. --> ## Your Environment <!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.--> * Operating System: * Python Version Used: * spaCy Version Used: * Environment Information:
1medium
Title: Getting `ValueError: Hyperopt Section not present in config` while loading hyperopt from YAML config Body: **Description**

The code in https://github.com/ludwig-ai/ludwig/blob/7e34450188f1265e6cd9cbda600dc0a605627099/ludwig/hyperopt/run.py#L208 seems to have a bug where it checks whether the HYPEROPT key is contained in the variable `config`, which is actually a string containing the path to the config file. I believe the original intention was to compare it with `config_dict`.

**To Reproduce**

```python
# hyperopt.py
from ludwig.hyperopt.run import hyperopt
import pandas

df = pandas.read_csv('./rotten_tomatoes.csv')
results = hyperopt(config='./rotten_tomatoes.yaml', dataset=df)
```

Running the above results in the following error. The rotten_tomatoes.csv and rotten_tomatoes.yaml files are as per the tutorial here https://ludwig.ai/latest/getting_started/hyperopt/

```
$ python hyperopt.py
Traceback (most recent call last):
  File "/Users/mohan.krishnan/Workspace/autotrain/hyperopt.py", line 12, in <module>
    results = hyperopt(config='./rotten_tomatoes.yaml', dataset=df)
  File "/Users/mohan.krishnan/Workspace/autotrain/env/lib/python3.10/site-packages/ludwig/hyperopt/run.py", line 209, in hyperopt
    raise ValueError("Hyperopt Section not present in config")
ValueError: Hyperopt Section not present in config
```

**Expected behavior** The config is correctly parsed without the exception being thrown

**Environment:**
- OS: Mac OS - Version 13.6.3
- Python version: 3.10
1medium
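The class of bug described in the record above is easy to demonstrate without Ludwig: a membership test against the *path string* can never find the `hyperopt` key, while the same test against the *parsed* config succeeds. The dict below is a stand-in for the loaded YAML, not Ludwig's real schema.

```python
HYPEROPT = "hyperopt"

config = "./rotten_tomatoes.yaml"  # what the user passed: a *path*, i.e. a str
config_dict = {                    # stand-in for yaml.safe_load(open(config))
    "hyperopt": {"parameters": {}},
    "input_features": [],
}

# The buggy check: for a str, `in` is a substring test, and "hyperopt" is
# not a substring of the filename -- so the ValueError is always raised.
assert HYPEROPT not in config

# The intended check, against the parsed config dict:
assert HYPEROPT in config_dict
print("membership must be tested on config_dict, not on the path string")
```

The fix suggested by the reporter (compare against `config_dict`) follows directly from this distinction.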
Title: MegaUpload model link not working Body: The MegaUpload link to the models is no longer working. maybe replace it with a new [mega.nz](url) link?
0easy
Title: Using a different speaker encoder Body: Hello, I really appreciate the work on display here. I was just wondering if I could use a different speaker encoder. If someone used a different encoder, could you explain the difficulties of replacing the encoder and how the results were different from the speaker encoder already in use?
1medium
Title: Custom axes titles for learning curve subplots Body: Please add options for custom x and y axis labels when plotting subplots for learning curves. At present it is not possible to add them. The same goes for xticks and yticks. Xticks: as you can see in my implementation, there are no fixed xticks for the subplots. Please make an option to add custom xticks. ![image](https://user-images.githubusercontent.com/23068132/81775905-54e9aa80-950b-11ea-87f0-8eef81084147.png)
1medium
Title: pydantic error on importing supabase Body: **Describe the bug** If I import supabase as `from supabase import create_client`, it leads to an import error for `field_validator` from pydantic.

**To Reproduce** Steps to reproduce the behavior: 1. Install supabase using conda. 2. Import supabase.

**Expected behavior** Import with no errors.

**Screenshots** If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):** - OS: linux - Version 1.0.3
1medium
Title: On installation, "segmentation fault (core dumped)" Body: Hi, thanks for the great work! I am trying to git clone the project and run inference using the source code. However, when I try the command "python basicsr/setup.py develop" it keeps failing with a "segmentation fault (core dumped)" error. My environment is Ubuntu 18.04.6. When I tried the process in Colab, it did work. Can you check whether everything is correct in your installation process or its description? Were there any changes in the "basicsr/setup.py" file? Thank you
1medium
Title: Tokenizing Dataset Fails with newline or index error Body: When trying to tokenize a dataset, it fails with either the error `Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?` or one about list index out of range. Running the newest version of the Colab notebook and this happens with both GPT-2 and GPT-Neo. Please let me know what info is needed or what I can try to fix this. Thanks!
1medium
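The "new-line character seen in unquoted field" message in the record above is the `csv` module's hint that the file contains bare carriage returns and was not opened in universal-newline mode. A minimal sketch of the recommended fix, per the `csv` docs, is to pass `newline=''` to `open()` (the file contents below are illustrative):

```python
import csv
import os
import tempfile

# A file whose rows are terminated by bare \r -- a common trigger for
# "new-line character seen in unquoted field".
path = os.path.join(tempfile.mkdtemp(), "train.csv")
with open(path, "wb") as f:
    f.write(b'text\r"first sample"\r"second sample"\r')

# newline='' disables newline translation and lets the csv module handle
# \r / \r\n itself, which is what "universal-newline mode" refers to.
with open(path, newline="", encoding="utf-8") as f:
    rows = [row for row in csv.reader(f) if row]

print(rows)  # [['text'], ['first sample'], ['second sample']]
```

If the error persists after this, re-saving the dataset with consistent `\n` line endings is another commonly suggested workaround.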
Title: Visual ATM Model Report Body: **Describe the solution you'd like** The [Auto Tune Models (ATM)](https://github.com/HDI-Project/ATM) Python library is an easy-to-use classification model solver that searches for the best model given a CSV dataset containing features and a target. During its run it creates a SQLite database that stores results from its auto-tuning and can be accessed using a results object with summary data, scores, and the best model found. ATM also has a CLI and a REST API. To take advantage of ATM, a Yellowbrick `contrib` module (e.g. `yellowbrick.contrib.atm`) should implement visual diagnostics functionality for the ATM results object, allowing the user to explore classification visualizers for the best classifier, or compare classification visualizers across multiple models. Note that ATM may be an excellent start to getting multi-model report functionality from Yellowbrick, since ATM wraps a suite of trained and cross-validated models. Open questions include: - Should Yellowbrick directly access ATM's database? - What data does ATM provide that could enable ATM-specific visualizations? - Can Yellowbrick be used with the REST API? A successful conclusion to this issue is the creation of an `yellowbrick.contrib.atm` package with the following functionality: - [ ] A wrapper for `atm.Model` and/or `atm.Datarun` that enables Yellowbrick classifier visualizers - [ ] Documentation/blog post about how to integrate Yellowbrick and ATM - [ ] Follow on issues for ATM-specific visualizers and functionality **Is your feature request related to a problem? Please describe.** This issue is related to #397 that described using Yellowbrick with other ML libraries. 
Since this discussion, Yellowbrick has incorporated contrib support for 3rd party libraries using wrappers and other methods (see #1103) and has been used successfully with [other projects like Keras](https://towardsdatascience.com/evaluating-keras-neural-network-performance-using-yellowbrick-visualizations-ad65543f3174). The ATM library, however, will not work with the wrapper since it uses sklearn under the hood and has extended functionality that Yellowbrick could take advantage of, such as multi-model comparisons. Because of this, an ATM contrib model would be well suited to Yellowbrick's out of core approach. **Examples** N/A @mitevpi any thoughts on this?
1medium
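A minimal sketch of the wrapper idea proposed above: hold a fitted estimator and delegate everything a visualizer needs (`predict`, `score`, `classes_`, …) to it. The `DummyModel` stands in for an `atm.Model`; it is not the real ATM API, and a real `yellowbrick.contrib.atm` wrapper would need to map ATM's actual attributes.

```python
class ContribWrapper:
    """Delegating wrapper in the style of Yellowbrick's contrib wrappers."""

    def __init__(self, estimator):
        self._wrapped = estimator

    def __getattr__(self, attr):
        # __getattr__ fires only when normal lookup fails,
        # so everything not defined here falls through to the estimator.
        return getattr(self._wrapped, attr)


class DummyModel:
    """Stand-in for an ATM best-model object (hypothetical interface)."""

    classes_ = ["spam", "ham"]

    def predict(self, X):
        return ["spam" for _ in X]


wrapped = ContribWrapper(DummyModel())
print(wrapped.predict([[0, 1], [1, 0]]))  # ['spam', 'spam']
print(wrapped.classes_)                   # ['spam', 'ham']
```

Because ATM already holds a suite of cross-validated models, one wrapper instance per candidate model would be a natural starting point for the multi-model comparison described above.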
Title: Form -> FlaskForm rename breaks custom metaclasses Body: When using a custom metaclass you usually subclass `FormMeta`. This fails thanks to the different metaclass used to show the deprecation warning (#249): > TypeError: Error when calling the metaclass bases > metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases It's an easy fix (not using the deprecated `Form` class) but still, I think this should at least e mentioned in the docs. cc @davidism
1medium
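The `TypeError` in the record above is ordinary Python metaclass arithmetic, which can be sketched without Flask-WTF: when two base classes have unrelated metaclasses, Python cannot pick a metaclass for the derived class. The class names below are illustrative stand-ins for `FormMeta` and the deprecation-shim metaclass.

```python
class FormMeta(type):
    pass


class DeprecationShimMeta(type):
    pass


class FlaskFormLike(metaclass=FormMeta):          # stands in for flask_wtf.FlaskForm
    pass


class ShimmedForm(metaclass=DeprecationShimMeta):  # stands in for the deprecated Form
    pass


try:
    class Broken(FlaskFormLike, ShimmedForm):
        pass
except TypeError as exc:
    print("metaclass conflict reproduced:", exc)
```

This is why the fix mentioned above works: subclassing `FlaskForm` (whose metaclass is a `FormMeta` subclass) keeps all bases under one metaclass hierarchy, while the deprecated `Form` shim introduces an incompatible one.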
Title: System.ExecutionEngineException creating Microsoft.ML.OnnxRuntime.SessionOptions Body: This issue is reported by https://developercommunity.visualstudio.com/user/25004 and moved from https://developercommunity.visualstudio.com/t/SystemExecutionEngineException-creating/10794175 Hi devs! I have a C#.NET project that uses the Microsoft.ML.OnnxRuntime and Microsoft.ML.OnnxRuntime.Managed packages. System.ExecutionEngineException error began while creating Microsoft.ML.OnnxRuntime.SessionOptions when the Microsoft.ML.OnnxRuntime and Microsoft.ML.OnnxRuntime.Managed Version=1.19.2 packages were updated to Version=1.20.0. Do the 1.20.0 packages include all required dependencies? Previous version packages seem to work without issue. Thanks! -Denny
1medium
Title: OKEx error: Unexpected keyword argument 'status' Body: **Describe the bug** I am using the sample script for liquidations with OKEx. After a couple of seconds, it ends with this error message: ``` File "[...]/lib/python3.9/site-packages/cryptofeed/feed.py", line 296, in callback await cb(**kwargs) TypeError: __call__() got an unexpected keyword argument 'status' ``` **To Reproduce** ``` f.add_feed(OKEx(channels=[LIQUIDATIONS], symbols=['BTC-USD-SWAP'], callbacks={LIQUIDATIONS: LiquidationCallback(liquidations)}), timeout=-1) ```
1medium
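A hedged sketch of the failure mode, without the cryptofeed API itself: a callback written with a fixed parameter list breaks the moment the feed starts passing a new field such as `status`, while a callback that accepts `**kwargs` absorbs it. The parameter names below are illustrative.

```python
import asyncio


async def strict_cb(feed, symbol):
    # Raises TypeError as soon as the caller adds status=...
    return feed, symbol


async def tolerant_cb(feed, symbol, **kwargs):
    # Unknown fields land in kwargs instead of raising.
    return feed, symbol, kwargs.get("status")


async def main():
    try:
        await strict_cb(feed="OKEX", symbol="BTC-USD-SWAP", status="filled")
    except TypeError as exc:
        print("strict callback failed:", exc)
    result = await tolerant_cb(feed="OKEX", symbol="BTC-USD-SWAP", status="filled")
    print(result)  # ('OKEX', 'BTC-USD-SWAP', 'filled')


asyncio.run(main())
```

So one likely workaround here is to add `**kwargs` to the liquidation callback's signature so newer fields from the exchange payload do not break it.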
Title: default_factory doesn't work Body: Hi! I use default_factory to initialize my variable, but the variable always returns the same result. It seems like default_factory doesn't work and always returns the same result of the function. Here is an example to reproduce: https://play.strawberry.rocks/?gist=a7a5e62ffe4e68696b44456398d11104
1medium
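The expected semantics are easy to show with the standard library: a `default_factory` should be called once per instantiation, whereas a plain default is evaluated once at definition time. If a framework resolves the factory a single time while building its schema, every object sees the same value, which is the symptom reported above (this sketch uses `dataclasses`, not strawberry itself).

```python
import itertools
from dataclasses import dataclass, field

counter = itertools.count()


def next_id():
    return next(counter)


@dataclass
class PerInstance:
    # default_factory is called on every instantiation...
    id: int = field(default_factory=next_id)


evaluated_once = next_id()  # ...a plain default is evaluated only once


@dataclass
class Shared:
    id: int = evaluated_once


a, b = PerInstance(), PerInstance()
c, d = Shared(), Shared()
print(a.id, b.id)  # distinct values
print(c.id, d.id)  # identical every time -- the reported behaviour
```

Until the library behaviour is clarified, a resolver (a function computing the value per request) is a common workaround for per-call defaults in GraphQL schemas.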
Title: Add Support For OT Lora, Loha and Dora for HunYuan Video in ComfyUI Body: ### Feature Idea Please add support in ComfyUI for loading of OneTrainer Lora, LoHa and Dora files. Attached are the key names for an OT Lora, LoHa, Dora with full layers, TE1 and TE2 trained and a bundled embedding (essentially every option possible) [LoHaFullTETI_keys.txt](https://github.com/user-attachments/files/18631714/LoHaFullTETI_keys.txt) [LoRaFullTETI_keys.txt](https://github.com/user-attachments/files/18631715/LoRaFullTETI_keys.txt) [DoRaFullTETI_keys.txt](https://github.com/user-attachments/files/18631713/DoRaFullTETI_keys.txt) Safetensors if needed can be found here: [SafeTensor Files](https://huggingface.co/datasets/Calamdor/OT_Files/tree/main) ### Existing Solutions https://github.com/comfyanonymous/ComfyUI/issues/6531#issuecomment-2617789374 is a workaround for an OT Lora, but not for Dora and likely not for a Lora with TE. ### Other _No response_
1medium
Title: Python does not seem to correctly report fitted values in statsmodels ARIMA when differencing is involved Body: When fitting an ARIMA model using the statsmodels Python implementation, I see the following behaviour: Python does not seem to correctly provide the values for the differenced lags. I am comparing the results with the ones obtained using the R ARIMA implementation.

```python
import pandas as pd

df = pd.read_csv('ffp2\datasets\euretail.csv')
df.index = pd.date_range(start=df['index'].str[:4].min(), freq='1Q', periods=df.shape[0])
df = df['value']

from statsmodels.tsa.arima.model import ARIMA

model = ARIMA(df, order=(0,1,1), seasonal_order=(0,1,1,4)).fit()
pd.concat((df, model.fittedvalues, model.resid), axis=1).rename(columns={0: 'py_fit', 1: 'py_resid'}).head(10)
```

<details>

Scenario:
- Seasonal non-stationary data which requires
  - Single differencing
  - Seasonal first differencing of period 4
- We end up with an ARIMA(0,1,1)(0,1,1,4) model

Please note the issue occurs when using the Python ARIMA implementation imported via `from statsmodels.tsa.arima.model import ARIMA`. I've seen web tutorials where it seems to function correctly, but they seem to be using the previous implementation `from statsmodels.tsa.arima_model import ARIMA`; that implementation currently seems to be deprecated https://github.com/statsmodels/statsmodels/issues/3884

Python fitted model below (for reference)

```
                                     SARIMAX Results
=======================================================================================
Dep. Variable:                           value   No. Observations:                   64
Model:             ARIMA(0, 1, 1)x(0, 1, 1, 4)   Log Likelihood                 -34.642
Date:                         Thu, 01 Dec 2022   AIC                             75.285
Time:                                 14:09:56   BIC                             81.517
Sample:                             03-31-1996   HQIC                            77.718
                                  - 12-31-2011
Covariance Type:                           opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ma.L1          0.2903      0.155      1.872      0.061      -0.014       0.594
ma.S.L4       -0.6912      0.132     -5.250      0.000      -0.949      -0.433
sigma2         0.1810      0.034      5.316      0.000       0.114       0.248
===================================================================================
Ljung-Box (L1) (Q):                   0.25   Jarque-Bera (JB):                 1.91
Prob(Q):                              0.62   Prob(JB):                         0.38
Heteroskedasticity (H):               0.76   Skew:                            -0.22
Prob(H) (two-sided):                  0.54   Kurtosis:                         3.77
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
```

R model code for reference below

```r
chooseCRANmirror(graphics=FALSE, ind=70)

# # Install if needed
# install.packages("fpp2")
# install.packages("readr")
# install.packages("tsibble")

library("fpp2")
library("readr")
library("tsibble")

fit <- Arima(euretail, order=c(0,1,1), seasonal=c(0,1,1))
fitted(fit)
resid(fit)
```

</details>

#### Expected Output

I would expect the R and Python implementations to provide much closer results; it seems as if the Python implementation might have a bug.

```
             value      py_fit    py_resid      r_fit   r_resid
1996-03-31   89.13    0.000000   89.130000  89.078541  0.051459  <---- (Issue Here)
1996-06-30   89.52   89.130003    0.389997  89.496685  0.023315
1996-09-30   89.88   89.520000    0.360000  89.864432  0.015568
1996-12-31   90.12   89.879998    0.240002  90.143352 -0.023352
1997-03-31   89.19  134.685000  -45.495000  89.380354 -0.190354  <---- (Issue Here)
1997-06-30   89.78   89.579993    0.200007  89.621984  0.158016
1997-09-30   90.03   90.193547   -0.163547  90.164354 -0.134354
1997-12-31   90.38   90.222837    0.157163  90.251028  0.128972
1998-03-31   90.27   89.463010    0.806990  89.602346  0.667654
1998-06-30   90.77   90.951737   -0.181737  90.937687 -0.167687
1998-09-30   91.85   91.019609    0.830391  91.077857  0.772143
1998-12-31   92.51   92.389250    0.120750  92.397782  0.112218
1999-03-31   92.21   92.044928    0.165072  92.056141  0.153859
1999-06-30   92.52   92.752252   -0.232252  92.744481 -0.224481
1999-09-30   93.62   93.066856    0.553144  93.083940  0.536060
```

Dataset [euretail.csv](https://github.com/statsmodels/statsmodels/files/10134041/euretail.csv)

#### Output of ``import statsmodels.api as sm; sm.show_versions()``

<details>

INSTALLED VERSIONS
------------------
Python: 3.9.6.final.0

statsmodels
===========
Installed: 0.13.5 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\statsmodels)

Required Dependencies
=====================
cython: Not installed
numpy: 1.21.4 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\numpy)
scipy: 1.7.3 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\scipy)
pandas: 1.3.5 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\pandas)
dateutil: 2.8.2 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\dateutil)
patsy: 0.5.2 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\patsy)

Optional Dependencies
=====================
matplotlib: 3.5.1 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\matplotlib)
    backend: module://matplotlib_inline.backend_inline
cvxopt: Not installed
joblib: 1.1.0 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\joblib)

Developer Tools
================
IPython: 7.30.1 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\IPython)
jinja2: Not installed
sphinx: Not installed
pygments: 2.10.0 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\pygments)
pytest: Not installed
virtualenv: Not installed

</details>
2hard
Title: [BUG] Pokemon weight is inaccurate Body: When requesting a pokemon from the API, the weight is off by 10x. For example, when requesting `https://pokeapi.co/api/v2/pokemon/snorlax`, `weight = 4600` when it should be `weight = 460kg`. This is true for Ditto and Meowth as well. I have not checked other pokemon, but assume it is the same. Edit: If this falls on me to fix, it will be delayed.
0easy
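For context on the report above: PokeAPI documents `weight` in hectograms (and `height` in decimetres), so the value is 10x the kilogram figure by design rather than wrong data. Converting on the client side is a one-liner:

```python
def hectograms_to_kg(weight):
    """Convert a PokeAPI `weight` value (hectograms) to kilograms."""
    return weight / 10


print(hectograms_to_kg(4600))  # 460.0 -- Snorlax's weight in kg
```

The same pattern applies to `height`: divide the decimetre value by 10 to get metres.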
Title: DRF - Adding User from Panel vs adding with code error? Body:

```python
class CustomerRegister(APIView):
    permission_classes = (permissions.AllowAny,)

    def post(self, request):
        data = request.data
        data['is_active'] = True
        serializer = UserSerializer(data=data)
        if serializer.is_valid():
            user = User.objects.create_user(**data)
            user.save()
            customer = Customer.objects.create(user=User.objects.get(username=data['username']))
            url, headers, body, status_code = self.create_token_response(request)
            return Response(json.loads(body), status=status_code)
        return Response(data=serializer.errors, status=status.HTTP_400_BAD_REQUEST)
```

When I add the user from the admin panel everything works and I get my tokens. When I use this code I get

```
{ "error": "invalid_grant", "error_description": "Invalid credentials given." }
```

My request for the token is absolutely the same. Does anyone have any idea why?
1medium
Title: [BUG] Batch inference DDP + zero stage 3 = inference code hangs Body: https://github.com/deepspeedai/DeepSpeed/issues/7128

I ran the batch inference code with deepspeed generation, not the vllm one. The code hangs while I set zero stage = 3. I created a minimal code snippet for you to debug the error.

```python
import os
import torch
import torch.distributed as dist
from transformers import AutoModelForCausalLM, AutoTokenizer
import deepspeed


# Initialize distributed environment
def setup_distributed():
    dist.init_process_group(backend="nccl", init_method="env://")
    local_rank = int(os.getenv("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    return local_rank


def load_model(model_name="facebook/opt-1.3b", local_rank=0):
    # Ensure distributed environment is set up
    if not dist.is_initialized():
        dist.init_process_group(backend="nccl", init_method="env://")
    world_size = dist.get_world_size()  # Number of GPUs available
    torch.cuda.set_device(local_rank)  # Assign each process to a GPU

    print(
        f"Loading model {model_name} on rank {local_rank}, using {world_size} GPUs for model parallelism"
    )

    # Load model and tokenizer
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # ✅ DeepSpeed Inference config for Model Parallelism
    ds_config = {
        # "replace_with_kernel_inject": False,  # Enables optimized inference kernels
        "tensor_parallel": {"tp_size": 1},  # Enables Model Parallelism
        "dtype": "bf16" if torch.cuda.is_bf16_supported() else "fp16",  # Automatic dtype selection
    }

    # ✅ Initialize DeepSpeed for Model Parallel Inference
    model = deepspeed.init_inference(model, config=ds_config)
    return model, tokenizer


# Perform inference with data parallelism
def batch_inference(model, tokenizer, prompts, local_rank):
    inputs = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True).to(
        f"cuda:{local_rank}"
    )
    with torch.no_grad():
        outputs = model.generate(**inputs, max_length=150, synced_gpus=True)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)


def main():
    local_rank = setup_distributed()
    model, tokenizer = load_model(local_rank=local_rank)

    # Each GPU gets a different batch
    global_batch = [
        [
            "What is AI?",
            "Explain deep learning.",
        ],  # Batch for GPU 0
        [
            "Tell me a joke.",
            "What is reinforcement learning? Tell me all the details",
        ],  # Batch for GPU 1
    ]
    prompts = global_batch[local_rank] if local_rank < len(global_batch) else []
    print(f"GPU {local_rank} prompts:", prompts)

    # Perform batch inference
    results = batch_inference(model, tokenizer, prompts, local_rank)
    print(f"GPU {local_rank} results:", results)

    dist.barrier()  # Ensure all GPUs finish


if __name__ == "__main__":
    main()
```

Run the code with

```bash
NCCL_DEBUG=INFO NCCL_BLOCKING_WAIT=1 NCCL_ASYNC_ERROR_HANDLING=1 deepspeed --num_gpus 2 test_deepspeed.py
```

The code should run without error because it's DDP. Now, if we change `"tensor_parallel": {"tp_size": 1}` -> `"tensor_parallel": {"tp_size": 2}` and rerun the code, it hangs forever. Note that the bug happens when DDP + TP are enabled.
2hard
Title: Accuracies totally different Body: Hi. I am converting the tree model to C using m2cgen. Although the inference latencies are much lower, the accuracies are way off. Here's how I am converting and reading the .so files:

```
from xgboost import XGBRFRegressor

num_est = 100
model = XGBRFRegressor(n_estimators=num_est, max_depth=8)
model.fit(X_train, y_train)

code = m2c.export_to_c(model)
len(code)

with open('model.c', 'w') as f:
    f.write(code)

!gcc -Ofast -shared -o lgb_score.so -fPIC model.c
!ls -l lgb_score.so

lib = ctypes.CDLL('./lgb_score.so')
score = lib.score
# Define the types of the output and arguments of this function.
score.restype = ctypes.c_double
score.argtypes = [ndpointer(ctypes.c_double)]
```

Why is this happening and how can I fix it?
1medium
Title: using tweepy to extract data but getting error Body: i am getting this coroutine error for my code. i need some tweet data for some sentiment analysis and I am unable to get it due to this error.

```python
from twikit import Client, TooManyRequests
import time
from datetime import datetime
import csv
from configparser import ConfigParser
from random import randint

MINIMUM_TWEETS = 20
QUERY = 'stock'

config = ConfigParser()
config.read("config.ini")
username = config['X']["username"]
email = config['X']["email"]
password = config['X']["password"]

# 1. use the login credentials
client = Client(language="en-US")
# client.login(auth_info_1=username, auth_info_2=email, password=password)
# client.save_cookies("cookies.json")
client.load_cookies('cookies.json')

# get tweets
tweets = client.search_tweet(QUERY, product='Top')

for tweet in tweets:
    print(vars(tweets))
    break
```

```
File "d:\VS CODE\Sentiment Analysis using Tweepy\main.py", line 26, in <module>
    for tweet in tweets:
TypeError: 'coroutine' object is not iterable
sys:1: RuntimeWarning: coroutine 'Client.search_tweet' was never awaited
```

- Dask version:
- Python version: 3.12
- Operating System: windows 10
- Install method (conda, pip, source): pip
1medium
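The error in the record above is Python's standard complaint about iterating an un-awaited coroutine: twikit's `Client.search_tweet` is a coroutine function, so its result must be awaited inside an event loop first. A stand-in coroutine (not twikit itself) reproduces both the failure and the fix:

```python
import asyncio


async def search_tweet(query, product="Top"):
    # Stand-in for twikit's Client.search_tweet
    return [f"{query}-tweet-{i}" for i in range(3)]


# The reported failure: calling the coroutine function returns a coroutine
# object, which is not iterable.
coro = search_tweet("stock")
try:
    iter(coro)
except TypeError as exc:
    print("reproduced:", exc)
finally:
    coro.close()  # avoid the "never awaited" RuntimeWarning


async def main():
    tweets = await search_tweet("stock")  # await first, then iterate
    for tweet in tweets:
        print(tweet)
    return tweets


asyncio.run(main())
```

Applied to the script above, wrapping the twikit calls in an `async def main()` and running it with `asyncio.run(main())`, with `await client.search_tweet(...)`, should resolve the TypeError.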
Title: All static files return 403 Body: Not sure what I'm doing wrong but all my static files are issuing `403` errors and `permission denied` errors. Logging into the docker environment and changing permissions to 777 fixes the errors but doesn't fix the underlying issue.
1medium
Title: Following picture can not be restored, seems can not recognize the face Body: Following picture can not be restored, seems can not recognize the face ![real06b](https://user-images.githubusercontent.com/5558722/218959050-cb9c953f-e9d4-4fcd-8687-8cf8f911af22.jpg) It would be an issue of RealESRGAN, I'm not sure, but I believe you can figure it out much quickly than I do.
1medium
Title: dvc exp show: External s3 address not properly shown Body: # Bug Report <!-- ## dvc exp show: External s3 address not properly shown --> ## Description Hello, I extended the example from https://github.com/iterative/dvc/issues/9713. Thank you so much for addressing that so quickly! This is much appreciated! When now using an external s3 address `s3://<BUCKET>/<FILE_NAME>` (e.g., `s3://my_bucket/model.pkl`) as an output location in DVC 3.7, `workspace` and `master` branch in `dvc exp show` use two different names to refer to the s3 location, neither of which seems correct: `master` uses `<REPO_PATH>/<BUCKET>/<FILE_NAME>`, `workspace` uses `<BUCKET>/<FILENAME>`. Both are missing the prefix `s3://` ### Reproduce For reproducing, please specify the respective `<BUCKET>` and `<FILE_NAME>` in the following: ``` git init -q dvc_issue cd dvc_issue dvc init -q cat <<EOT >> .dvc/config [cache] type = symlink EOT cat <<EOT >> dvc.yaml vars: - uri_model: s3://<BUCKET>/<FILE_NAME> stages: train: cmd: python train.py deps: - train.py outs: - \${uri_model}: cache: false evaluate: cmd: python evaluate.py deps: - evaluate.py - \${uri_model} metrics: - metrics.json: cache: false EOT cat <<EOT >> train.py import boto3 def main(): bucket_name = <BUCKET> file_name = <FILE_NAME> data = b"weights: 1, 2, 3" s3 = boto3.resource('s3') object = s3.Object(bucket_name, file_name) object.put(Body=data) print("Finished train.") if __name__ == "__main__": main() EOT cat <<EOT >> evaluate.py import json def main(): metrics_filename = "metrics.json" data = {"auc": 0.29} with open(metrics_filename, 'w') as f: json.dump(data, f) print("Finished evaluate.") if __name__ == "__main__": main() EOT dvc repro -q git add . git commit -q -m "initial" dvc exp show -v ``` ### Expected A single column with the entry `s3://<BUCKET>/<FILENAME>`. 
### Environment information **Output of `dvc doctor`:** ```console $ dvc doctor VC version: 3.7.0 (pip) ------------------------ Platform: Python 3.10.8 on Linux-3.10.0-1127.8.2.el7.x86_64-x86_64-with-glibc2.17 Subprojects: dvc_data = 2.6.0 dvc_objects = 0.23.1 dvc_render = 0.5.3 dvc_task = 0.3.0 scmrepo = 1.0.4 Supports: http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3), https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3), s3 (s3fs = 2023.6.0, boto3 = 1.26.0) Config: Global: /home/kpetersen/.config/dvc System: /etc/xdg/dvc ``` -->
1medium
Title: AttributeError: module 'google.colab.output' has no attribute 'serve_kernel_port_as_iframe' in latest version Body: I experienced the following issue described here: https://stackoverflow.com/questions/68729989/jupyterdash-in-jupyterlabs-fails-after-using-plotly-express-in-a-prior-cell/68737108#68737108 I was only able to solve my issue by downgrading to version 0.2.1. Maybe there can be a way for users to turn off the connectivity to Google Colab when it causes problems?
1medium
Title: Model.predict giving same predictions for every examples Body: I have a 110 layer resnet trained and validated with 4 classes to classify. Training examples are in decent proportion (30%, 20%, 25%, 25%). It has validation accuracy of around 90%. When testing it on new examples it always gives the same class as output. I am giving a list of arrays as input to model.predict. I have attached the code below.

```python
from __future__ import division, print_function, absolute_import

import numpy as np
#import pandas as pd
import tflearn
import os
from glob import glob
import cv2
import csv
import pickle
from tflearn.data_utils import shuffle, to_categorical
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.layers.normalization import local_response_normalization
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
import h5py

train_data = h5py.File('train_dataset_her2.h5', 'r')
X = train_data['X']
Y = train_data['Y']

test_data = h5py.File('test_dataset_her2.h5', 'r')
testX = test_data['X']
testY = test_data['Y']

# Real-time data preprocessing
img_prep = tflearn.ImagePreprocessing()
img_prep.add_featurewise_zero_center()

# Real-time data augmentation
img_aug = tflearn.ImageAugmentation()
img_aug.add_random_flip_leftright()

# network
n = 18
network = input_data(shape=[None, 224, 224, 3])  # data_preprocessing=img_prep, data_augmentation=img_aug)
network = conv_2d(network, 108, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 108, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 108, 3, activation='relu')
network = dropout(network, 0.8)
network = tflearn.conv_2d(network, 16, 3, regularizer='L2', weight_decay=0.0001)
network = tflearn.residual_block(network, n, 16)
network = tflearn.residual_block(network, 1, 32, downsample=True)
network = tflearn.residual_block(network, n-1, 32)
network = tflearn.residual_block(network, 1, 64, downsample=True)
network = tflearn.residual_block(network, n-1, 64)
network = tflearn.batch_normalization(network)
network = tflearn.activation(network, 'relu')
network = tflearn.global_avg_pool(network)
network = tflearn.fully_connected(network, 4, activation='softmax')
adam = tflearn.optimizers.Adam(learning_rate=0.002)
network = tflearn.regression(network, optimizer=adam, loss='categorical_crossentropy')

model = tflearn.DNN(network, tensorboard_verbose=0)
model.load("dataset_adam_resnet.tfl.ckpt-4000")
print("Done loading model")

############################################################################
### Prediction
sub_folder = [temp[0] for temp in os.walk('Dataset_Test_Data')]
sub_folder = sub_folder[1:]

######################################################################
### predict without pickle
for f1 in range(1):  # (len(sub_folder)):
    list_images = sorted(glob(sub_folder[f1] + '/*.jpg'))
    predictions = []
    temp_m = sub_folder[f1].split("/")
    print("Operating '%s' folder" % temp_m[1])
    for item in list_images:
        print("predicting %s" % item)
        predictions.append(model.predict_label((cv2.imread(item)).astype(float).reshape(1, 224, 224, 3)))
    writer = csv.writer(open('./HER2_Test_Data/' + temp_m[1] + '/Prediction_cnn_without_pickle' + temp_m[1] + '.csv', "w"))
    writer.writerows(predictions)
```
1medium
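One plausible cause worth checking in the record above (an assumption, not a confirmed diagnosis): the script builds an `ImagePreprocessing` with featurewise zero-centering but the `data_preprocessing=img_prep` argument is commented out of `input_data`, and raw 0-255 pixel values are fed at predict time. Inputs shifted far from the training distribution can saturate activations so every image lands in the same class. The mean value below is illustrative:

```python
# Hypothetical featurewise mean, standing in for the value
# add_featurewise_zero_center() would compute from the training set.
train_mean = 117.0


def preprocess(pixels, mean):
    """Apply the same zero-centering at inference that was used in training."""
    return [p - mean for p in pixels]


raw_test_image = [255.0, 128.0, 0.0]

centered = preprocess(raw_test_image, train_mean)
uncentered = raw_test_image  # what model.predict_label() received above

print(centered)    # [138.0, 11.0, -117.0] -- in the range the network trained on
print(uncentered)  # shifted by ~117 units per pixel
```

Verifying that the exact train-time preprocessing (centering, scaling, channel order) is replayed on the `cv2.imread` output would rule this cause in or out.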
Title: Celery is NOT running Body: There is a problem. Celery is not running. You can start celery with this command:

```
flaskbb --config None celery worker
```

When doing so, the output is:

```
 -------------- celery@ip-172-31-16-221 v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Linux-4.4.44-39.55.amzn1.x86_64-x86_64-with-glibc2.2.5 2017-06-08 02:42:07
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         flaskbb:0x7******27410
- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost:6379/
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

[2017-06-08 02:42:08,577: CRITICAL/MainProcess] Unrecoverable error: TypeError("can_read() got an unexpected keyword argument 'timeout'",)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/celery/worker/worker.py", line 203, in start
    self.blueprint.start(self)
  File "/usr/local/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/usr/local/lib/python2.7/site-packages/celery/bootsteps.py", line 370, in start
    return self.obj.start()
  File "/usr/local/lib/python2.7/site-packages/celery/worker/consumer/consumer.py", line 318, in start
    blueprint.start(self)
  File "/usr/local/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/usr/local/lib/python2.7/site-packages/celery/worker/consumer/consumer.py", line 594, in start
    c.loop(*c.loop_args())
  File "/usr/local/lib/python2.7/site-packages/celery/worker/loops.py", line 88, in asynloop
    next(loop)
  File "/usr/local/lib/python2.7/site-packages/kombu/async/hub.py", line 345, in create_loop
    cb(*cbargs)
  File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 1039, in on_readable
    self.cycle.on_readable(fileno)
  File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 337, in on_readable
    chan.handlers[type]()
  File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 671, in _receive
    while c.connection.can_read(timeout=0):
TypeError: can_read() got an unexpected keyword argument 'timeout'
```
1medium
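The crash in the record above is a signature mismatch between kombu's redis transport and an older redis-py client: kombu 4.x calls `connection.can_read(timeout=0)`, and if I recall correctly the `timeout` parameter only exists in redis-py 2.10.5 and later, so upgrading redis-py (`pip install -U redis`) is the usual fix. The classes below only model the two signatures; they are not the real libraries.

```python
class OldRedisConnection:
    """Models redis-py < 2.10.5: can_read takes no timeout."""

    def can_read(self):
        return False


class NewRedisConnection:
    """Models redis-py >= 2.10.5."""

    def can_read(self, timeout=0):
        return False


def kombu_receive(connection):
    # Models the call kombu's redis transport makes in _receive()
    while connection.can_read(timeout=0):
        pass
    return "ok"


try:
    kombu_receive(OldRedisConnection())
except TypeError as exc:
    print("reproduced:", exc)

print(kombu_receive(NewRedisConnection()))  # ok
```

Checking `pip show redis` against kombu's declared minimum version is a quick way to confirm this is the cause before upgrading.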
Title: When installed via pip, gir1.2-appindicator3-0.1 required Body:

## Classification: UI/Usability

## Reproducibility: Always

## Summary
Attempting to install via pip3 on recent debian has no problem until you attempt to launch the gui. It depends on the package

## Steps to Reproduce
- pip3 install autokey
- for good measure install libappindicator-3 (and -dev)
- fix any more missing pip3 dependencies
- pip3 install autokey
- autokey-gtk

## Expected Results
- The gui should start

## Actual Results

```
$ autokey-gtk                                (minikube/kube-system)
Traceback (most recent call last):
  File "/home/ranyardm/.local/bin/autokey-gtk", line 7, in <module>
    from autokey.gtkui.__main__ import main
  File "/home/ranyardm/.local/lib/python3.5/site-packages/autokey/gtkui/__main__.py", line 4, in <module>
    from autokey.gtkapp import Application
  File "/home/ranyardm/.local/lib/python3.5/site-packages/autokey/gtkapp.py", line 31, in <module>
    from autokey.gtkui.notifier import get_notifier
  File "/home/ranyardm/.local/lib/python3.5/site-packages/autokey/gtkui/notifier.py", line 22, in <module>
    gi.require_version('AppIndicator3', '0.1')
  File "/usr/lib/python3/dist-packages/gi/__init__.py", line 118, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace AppIndicator3 not available
```

If helpful, submit screenshots of the issue to help debug. `autokey-gtk --verbose` output is also useful.

## Version
0.93.10

If the problem is known to be present in more than one version, please list all of those.

Installed via: pip3 install autokey
Distro: latest debian (9.4)

Describe any debugging steps you've taken yourself: lots of googling, eventually found a solution.

If you've found a workaround, provide it here.
1medium
Title: Task Execution API: Implement default version handling for no header Body: Task Execution API versioning was added in https://github.com/apache/airflow/pull/47951 via [Cadwyn](https://github.com/zmievsa/cadwyn). As a follow-up, we should implement default version handling for requests with no header, since that isn't available out of the box: if a version isn't provided, we default it to the latest.
1medium
Title: Tabula read_pdf cannot read all pages Body: <!--- Provide a general summary of your changes in the Title above -->
# Summary of your issue
tabula.read_pdf cannot read all pages.
# What did you do when you faced the problem?
I could not do anything.
## Code:
```
import tabula
fp = r"https://www.apple.com/supplier-responsibility/pdf/Apple-Supplier-List.pdf"
df = tabula.read_pdf(fp, pages = "all", pandas_options = {'header': None})
```
## Expected behavior:
<!--- Write your expected results/outputs -->
A list of 33 elements. Each element is a dataframe that contains the content of one page of the pdf report.
## Actual behavior:
<!--- Put the actual results/outputs -->
The code only returns a list of one element, a dataframe:
```
    0                    1
0   Kyocera Corporation  1166-1 Hebimizo-cho, Higashiomi, Shiga, Japan
1   Kyocera Corporation  1-1 Kokubuyamashita-cho, Kirishima-shi, Kagoshima, Japan
2   Kyocera Corporation  1 Mikata, Ayabe, Kyoto, Japan
3   Laird PLC            Building 1, Dejin Industrial Park, Fuyuanyi Road, Heping Community, Fuyong Town, Bao'an
4                        District, Shenzhen, Guangdong, China
5   Laird PLC            Building No. 1-7 & 9, No. 8 Pengfeng Road, Dakun Industry Park, Songjiang District, Shanghai,
6                        China
7   Laird PLC            3rd Building, No. 398, Yuandian Road, Minhang District, Shanghai, China
8   Laird PLC            28 Huanghe South Road, Kunshan Economic & Tech Development Zone, Kunshan, Jiangsu,
9                        China
10  Largan Precision     No. 18, Tutong First Industrial District, Tutong, Changping, Dongguan, Guangdong, China
11  Company Limited
12  Largan Precision     No. 11, Jingke Road, Nantun District, Taichung, Taiwan
13  Company Limited
```
## Related Issues:
1medium
Title: Asyncio + sqlalchemy ORM Body: Hi. How can I work with SQLAlchemy objects? I mean, right now it's possible to execute SQL queries via conn.execute() and get scalars back from there. Is it possible to get objects instead?
1medium
Title: Enabling private sixel color registers should not be necessary Body: This is not a big deal, but these private sixel color functions don't actually do anything: https://github.com/joouha/euporie/blob/be40f57aefad591f8880ea400f64ea7a9ecee924/euporie/core/io.py#L187-L193 Mode 1070 is a private mode, so those sequences should be `\x1b[?1070h` and `\x1b[?1070l`. That said, it's highly unlikely you need to request private color registers anyway. That's only necessary if you're using an image without defining a color palette, and you want it to use the default palette. But it's not a standard mode, and it won't work consistently even on terminals that do support it, so there's very little justification for using it.
1medium
Title: Chrome blocking data uri Body: I'm running Chrome 60 and cannot click to open any of the Links (Browser Log, Server Log, etc.) in my self-contained pytest-html report. This seems to be due to Chrome blocking top frame navigation to data urls. See [here](https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/GbVcuwg_QjM) for their developer discussion on this. The console error is ```Not allowed to navigate top frame to data URL:``` Workaround is to use firefox, or right click to open the URL in a new tab.
1medium
Title: Not expected behavior with drag and drop of subplots Body: https://github.com/user-attachments/assets/a017d3f5-a794-492b-9985-d395b8615825 I experience weird behavior when manually interacting with the subplots generated by the example code (see the recorded screen). Configuration: MacOS 13.7.1 Python 3.10 Pylustrator 1.3.0
1medium
Title: dcc.dropdown options order not consistent during search Body:
dash 2.18.0
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
Chrome, Windows

**Describe the bug**
For this dcc.Dropdown:
`dcc.Dropdown(options=['1', '11 Text', '12', '110', '111', '112'])`
The desired order is as set in options. When nothing is typed in search I see, as expected: ![image](https://github.com/user-attachments/assets/3870eced-4d7d-4197-b8f9-01c4f13aa0f7) Once I start searching for 11, the order changes and no longer matches the desired order: ![image](https://github.com/user-attachments/assets/47804504-50fa-4710-9470-4353aea84ea8)
1medium
Title: Update tests to support Pandas 1.0 Body: Pandas 1.0 introduces some deprecations that break our tests; we need to update our tests in order to fully support newer versions of Pandas.
1medium
Title: requests-html, pyppeteer and proxies Body: Hi All I'm attempting to use proxy servers with pyppeteer (via requests-html). Are you able to confirm that pyppeteer is able to use proxies such as `user:[email protected]:12345`? I attempt to use them as per the Chromium docs with `--proxy-server="user:[email protected]:12345"` but get various `PageErrors` relating to the proxy etc. Are you able to confirm that pyppeteer is able to accept the proxy command-line switches for Chromium? Thanks!
1medium
Title: Update documentation to match changes with v8.x.x Body: ## Describe the bug Some of the documentation has mismatched method names. For example, the documentation under the `Usage -> Routes` page still references an `after_register` handler, but that is now the `on_after_register` handler within a `UserManager`. ## To Reproduce Go to the [Usage > Routes Page](https://fastapi-users.github.io/fastapi-users/usage/routes/#post-register) to see some of the mismatched method names. ## Fixes - I opened #768 to address this. - The examples at [https://fastapi-users.github.io/fastapi-users/configuration/full-example/](https://fastapi-users.github.io/fastapi-users/configuration/full-example/) also have invalid `UserManager` classes and method names, but those examples are not in this repo for me to fix.
0easy
Title: Can I use NetAdapt with YOLOv5? Body:
**Describe the issue**:
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!-- Where can you find the log files: LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout -->
**How to reproduce it?**:
3misc
Title: Downloading videos gets redirected to douyin.wtf Body: After deploying the project to my own server with Docker, using the download feature redirects to douyin.wtf instead of the deployment server itself. If douyin.wtf is manually replaced with the deployment server's address, the download works normally.

***On which platform did the error occur?***
TikTok
e.g. Douyin/TikTok

***On which endpoint did the error occur?***
Web APP
e.g. API-V1/API-V2/Web APP

***What input value was submitted?***
https://www.douyin.com/discover?modal_id=7069543727328398622
e.g. a short video link

***Did you try again?***
Yes
e.g. Yes, the error still exists X amount of time after it occurred.

***Have you read this project's README or API documentation?***
Yes, and I'm sure this problem is caused by the program.
e.g. Yes, and I'm sure this problem is caused by the program.
1medium
Title: Problems using `.raw()` instead of `.filter()` Body: **Describe the bug** To get consistency between Tortoise-based and non-Tortoise based queries, I started replacing the usual `<ModelClassName>.filter(...)` fetches with `<ModelClassName>.raw(<PlainSQLQueryString>)`. It seemed to work, but I later found that saves were often not actually committing to the database. This caused some serious data consistency and workflow problems in several of my networks, which only came right after changing back to `.filter(...)`. **Expected behavior** When using `<ModelClassName>.raw(...)` I was expecting the returned Model row objects to behave in the same way as the ones returned from awaiting a `<ModelClassName>.filter(...)` query. **Additional context** Is there a "safe" and recommended way to get `Model` row objects from raw SQL queries, which behave identically to those retrieved from `.filter(...)` ? All help appreciated.
2hard
Title: dkim missing via auth-smtp (submission), but added via local shell mail only Body:
# Impacted versions
* OS Type: Debian
* OS Version: Debian 10 (buster)
* Database Type: PostgreSQL
* Database version: 11.19
* Modoboa: 2.0.5
* installer used: Yes

# Steps to reproduce
Standard configuration via Installer. /etc/postfix/main.cf contains:
```
smtpd_milters = inet:127.0.0.1:12345
non_smtpd_milters = inet:127.0.0.1:12345
milter_default_action = accept
milter_content_timeout = 30s
```
opendkim.service is configured with `Socket inet:12345@localhost` and is running. Domain has a dkim key and has a green dkim icon.

# Current behavior
When sending an email with the correct From header from a mail client, no dkim header is added. The mail.log has no dkim header:
```
Mar 6 11:45:45 modoboa1 postfix/submission/smtpd[29468]: connect from gw.apg2.net[81.3.13.187]
Mar 6 11:45:45 modoboa1 postfix/submission/smtpd[29468]: Anonymous TLS connection established from gw.apg2.net[81.3.13.187]: TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)
Mar 6 11:45:45 modoboa1 postfix/submission/smtpd[29468]: NOQUEUE: client=gw.apg2.net[81.3.13.187], sasl_method=PLAIN, [email protected]
Mar 6 11:45:47 modoboa1 postfix/smtpd[29508]: connect from localhost[127.0.0.1]
Mar 6 11:45:47 modoboa1 postfix/smtpd[29508]: 7E631BFBB1: client=localhost[127.0.0.1], orig_client=gw.apg2.net[81.3.13.187]
Mar 6 11:45:47 modoboa1 postfix/cleanup[29509]: 7E631BFBB1: message-id=<[email protected]>
Mar 6 11:45:47 modoboa1 postfix/smtpd[29508]: disconnect from localhost[127.0.0.1] ehlo=1 xforward=1 mail=1 rcpt=1 data=1 quit=1 commands=6
Mar 6 11:45:47 modoboa1 postfix/qmgr[25853]: 7E631BFBB1: from=<[email protected]>, size=4655, nrcpt=1 (queue active)
Mar 6 11:45:47 modoboa1 amavis[18307]: (18307-14) Passed CLEAN {RelayedOutbound}, ORIGINATING LOCAL [81.3.13.187]:37543 [81.3.13.187] <[email protected]> -> <[email protected]>, Message-ID: <[email protected]>, mail_id: MhInoSFVu_r4, Hits: -2.899, size: 4179, queued_as: 7E631BFBB1, 1564 ms
Mar 6 11:45:47 modoboa1 postfix/submission/smtpd[29468]: proxy-accept: END-OF-MESSAGE: 250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as 7E631BFBB1; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<[10.5.248.3]> sasl_username=<[email protected]>
Mar 6 11:45:49 modoboa1 postfix/smtp[29511]: Trusted TLS connection established to mta6.am0.yahoodns.net[67.195.228.110]:25: TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256
Mar 6 11:45:51 modoboa1 postfix/smtp[29511]: 7E631BFBB1: to=<[email protected]>, relay=mta6.am0.yahoodns.net[67.195.228.110]:25, delay=3.6, delays=0.04/0.03/2.1/1.4, dsn=2.0.0, status=sent (250 ok dirdel)
Mar 6 11:45:51 modoboa1 postfix/qmgr[25853]: 7E631BFBB1: removed
```
An email sent locally via a linux shell mail tool gets the dkim header though:
```
Mar 6 12:27:42 modoboa1 postfix/pickup[25852]: 4A49BBFD04: uid=1000 from=<user>
Mar 6 12:27:42 modoboa1 postfix/cleanup[17156]: 4A49BBFD04: message-id=<[email protected]>
Mar 6 12:27:42 modoboa1 opendkim[1502]: 4A49BBFD04: DKIM-Signature field added (s=modoboa, d=apg2.de)
Mar 6 12:27:42 modoboa1 postfix/qmgr[25853]: 4A49BBFD04: from=<[email protected]>, size=464, nrcpt=1 (queue active)
Mar 6 12:27:43 modoboa1 postfix/smtp[17163]: Trusted TLS connection established to mta6.am0.yahoodns.net[67.195.204.77]:25: TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256
Mar 6 12:27:44 modoboa1 postfix/smtp[17163]: 4A49BBFD04: to=<[email protected]>, relay=mta6.am0.yahoodns.net[67.195.204.77]:25, delay=2.2, delays=0.06/0.01/1.3/0.82, dsn=2.0.0, status=sent (250 ok dirdel)
Mar 6 12:27:44 modoboa1 postfix/qmgr[25853]: 4A49BBFD04: removed
```
# Expected behavior
Mails sent via authenticated-smtp from an external mail client should get the dkim signature header.
2hard
Title: Cannot build a new model Body: When I want to contribute a new model, it gives me this error:
```
return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'qlib.contrib.model.pytorch_newmodel_ts'
```
How can I solve this? Also, is there a document for development?
1medium
Title: [Bug] Random Talk Body:
### Describe the bug
When I try TTS in Hindi with Coqui TTS, it speaks the given sentence but with some random talk (not understandable).
### To Reproduce
Generate हिंदी भाषा in "hi". It will speak Hindi but with random talk.
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
xttsv2_2.0.3
python 3.11
pytorch 2.2.1
os windows
cuda rtx 3050 4gb 75watts
conda
```
### Additional context
_No response_
1medium