Title: Creating Pydantic objects in Rust and passing to the interpreter. Body: What's the best way to do this? I'd like to avoid passing JSON via Pyo3 to python and THEN creating the model. Use case: I am moving bounding box processing logic in my library [Docprompt](https://github.com/docprompt/Docprompt) into Rust. Documents can have tens of thousands of bounding boxes, so small overhead becomes an issue. Thank you for the help!
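A hedged sketch of the Python side under these assumptions: `BoundingBox` is illustrative (not Docprompt's actual model) and the hypothetical Rust extension hands back plain tuples. Pydantic v2's `model_construct` skips validation, which is where much of the per-object overhead lives when the Rust side already guarantees well-formed data:

```python
# Sketch only: assumes a hypothetical Rust extension that returns plain
# (x0, y0, x1, y1) tuples; BoundingBox is illustrative, not Docprompt's model.
from pydantic import BaseModel

class BoundingBox(BaseModel):
    x0: float
    y0: float
    x1: float
    y1: float

def boxes_from_rust(raw_boxes):
    # model_construct() bypasses validation, avoiding per-object overhead
    # for tens of thousands of boxes the Rust side has already checked.
    return [
        BoundingBox.model_construct(x0=x0, y0=y0, x1=x1, y1=y1)
        for (x0, y0, x1, y1) in raw_boxes
    ]
```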
1medium
Title: Unable to get predictions for multiple sentences using TARS Zero Shot Classifier Body: Here is the example code to use TARS Zero Shot Classifier ``` from flair.models import TARSClassifier from flair.data import Sentence # 1. Load our pre-trained TARS model for English tars = TARSClassifier.load('tars-base') # 2. Prepare a test sentence sentence = Sentence("I am so glad you liked it!") # 3. Define some classes that you want to predict using descriptive names classes = ["happy", "sad"] #4. Predict for these classes tars.predict_zero_shot(sentence, classes) # Print sentence with predicted labels print(sentence) print(sentence.labels[0].value) print(round(sentence.labels[0].score,2)) ``` Now this code is wrapped into the following function so that I can use it to get predictions for multiple sentences in a dataset. ``` def tars_zero(example): sentence = Sentence(example) tars.predict_zero_shot(sentence,classes) print(sentence) inputs = ["I am so glad you liked it!", "I hate it"] for input in inputs: tars_zero(input) #output: Sentence: "I am so glad you liked it !" → happy (0.8667) Sentence: "I hate it" ``` Here I'm able to get the prediction only for the first sentence. Can someone tell me how to use TARS Zero Shot Classifier to get predictions for multiple sentences in a dataset? @alanakbik @lukasgarbas @kishaloyhalder @dobbersc @tadejmagajna
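A hedged sketch of a batch approach, assuming flair's predict methods accept a list of sentences here (as they generally do); the label access is guarded, since a sentence scoring below threshold may legitimately receive no label:

```python
from flair.models import TARSClassifier
from flair.data import Sentence

tars = TARSClassifier.load('tars-base')
classes = ["happy", "sad"]

# Build all Sentence objects first, then predict in a single call.
sentences = [Sentence(t) for t in ["I am so glad you liked it!", "I hate it"]]
tars.predict_zero_shot(sentences, classes)

for sentence in sentences:
    if sentence.labels:
        print(sentence.text, sentence.labels[0].value, round(sentence.labels[0].score, 2))
    else:
        # No class scored above the threshold for this sentence.
        print(sentence.text, "-> no label predicted")
```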
1medium
Title: Statistics log division by zero errors Body: ### The problem The statistics sensor produces division by zero errors in the log. This seems to be caused by having values that have identical change timestamps. It might be that this was caused by sensors that were updated several times in a very short time interval and the precision of the timestamps is too low to distinguish the two change timestamps (that is just a guess though). It could also be that this is something triggered by the startup phase. I also saw that there was a recent change in the code where timestamps were replaced with floats, which might have reduced the precision of the timestamp delta calculation. I can easily reproduce the problem, so it is not a once in a lifetime exceptional case. ### What version of Home Assistant Core has the issue? core-2025.3.4 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant OS ### Integration causing the issue statistics ### Link to integration documentation on our website https://www.home-assistant.io/integrations/statistics/ ### Diagnostics information _No response_ ### Example YAML snippet ```yaml ``` ### Anything in the logs that might be useful for us? ```txt `2025-03-22 09:57:43.540 ERROR (MainThread) [homeassistant.helpers.event] Error while dispatching event for sensor.inverter_production to <Job track state_changed event ['sensor.inverter_production'] HassJobType.Callback <bound method StatisticsSensor._async_stats_sensor_state_change_listener of <entity sensor.inverter_production_avg_15s=0.0>>> Traceback (most recent call last): File "/usr/src/homeassistant/homeassistant/helpers/event.py", line 355, in _async_dispatch_entity_id_event hass.async_run_hass_job(job, event) ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^ File "/usr/src/homeassistant/homeassistant/core.py", line 940, in async_run_hass_job hassjob.target(*args) ~~~~~~~~~~~~~~^^^^^^^ File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 748, in _async_stats_sensor_state_change_listener self._async_handle_new_state(event.data["new_state"]) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 734, in _async_handle_new_state self._async_purge_update_and_schedule() ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^ File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 986, in _async_purge_update_and_schedule self._update_value() ~~~~~~~~~~~~~~~~~~^^ File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 1097, in _update_value value = self._state_characteristic_fn(self.states, self.ages, self._percentile) File "/usr/src/homeassistant/homeassistant/components/statistics/sensor.py", line 142, in _stat_average_step return area / age_range_seconds ~~~~~^~~~~~~~~~~~~~~~~~~ ZeroDivisionError: float division by zero` ``` ### Additional information _No response_
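For illustration only, a minimal guard of the kind that would avoid the crash; this is a sketch, not the actual Home Assistant code:

```python
def stat_average_step(states, ages):
    """Sketch of a guarded time-weighted average (illustrative only)."""
    if len(states) < 2:
        return None
    age_range_seconds = ages[-1] - ages[0]
    if age_range_seconds <= 0:
        # Identical (or out-of-order) change timestamps leave no time span
        # to average over, so report no value instead of dividing by zero.
        return None
    area = sum(
        states[i] * (ages[i + 1] - ages[i]) for i in range(len(states) - 1)
    )
    return area / age_range_seconds
```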
1medium
Title: ValueError: The checkpoint you are trying to load has model type `gemma3` but Transformers does not recognize this architecture. Body: ### System Info enviroment from pyproject.toml: ``` [tool.poetry] name = "rl-finetunning" package-mode = false version = "0.1.0" description = "" readme = "README.md" [tool.poetry.dependencies] python = "^3.12" torch = {version = "2.5.1+cu121", source = "torch-repo"} torchaudio = {version = "2.5.1+cu121", source = "torch-repo"} langchain = {extras = ["all"], version = "^0.3.14"} numpy = "<2" ujson = "^5.10.0" tqdm = "^4.67.1" ipykernel = "^6.29.5" faiss-cpu = "^1.9.0.post1" wandb = "^0.19.4" rouge-score = "^0.1.2" accelerate = "0.34.2" datasets = "^3.2.0" evaluate = "^0.4.3" bitsandbytes = "^0.45.1" peft = "^0.14.0" deepspeed = "0.15.4" trl = "^0.15.2" transformers = "^4.49.0" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" [[tool.poetry.source]] name = "torch-repo" url = "https://download.pytorch.org/whl/cu121" priority = "explicit" ``` ### Who can help? @ArthurZucker @gante ### Reproduction Code to reproduce: https://pastebin.com/vGXdw5e7 Model weights was downloaded in current directory. Full Traceback: ``` Traceback (most recent call last): File "/home/calibri/.cache/pypoetry/virtualenvs/rl-finetunning-LD6GBRk7-py3.12/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1092, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/calibri/.cache/pypoetry/virtualenvs/rl-finetunning-LD6GBRk7-py3.12/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 794, in __getitem__ raise KeyError(key) KeyError: 'gemma3' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/calibri/experiments/rl_finetunning/sft.py", line 118, in <module> model = AutoModelForCausalLM.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/calibri/.cache/pypoetry/virtualenvs/rl-finetunning-LD6GBRk7-py3.12/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py", line 526, in from_pretrained config, kwargs = AutoConfig.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/calibri/.cache/pypoetry/virtualenvs/rl-finetunning-LD6GBRk7-py3.12/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1094, in from_pretrained raise ValueError( ValueError: The checkpoint you are trying to load has model type `gemma3` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date. You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git` ``` ### Expected behavior I downloaded latest transformers version so I expected the code to run without errors.
1medium
Title: How can this be used on an embedded device? Thanks! Body: How can this be used on an embedded device? Processor: ARM. Programming environment: C. Operating system: Linux or RT-Thread (cross-compiled). Thank you!
1medium
Title: transpose ignored when using raw_value to set column data Body: #### OS (e.g. Windows 10 or macOS Sierra) macOS Big Sur #### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7) Python 3.8.7, xlwings 23.0 #### Describe your issue (incl. Traceback!) The second line below fills the column with 0 to 9, but the first fills every cell with 0. I was expecting the transpose option to work with raw_value as well. Maybe this is the intended behavior. #### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!) ```python # Your code here thisbk = xw.Book.caller() thisbk.sheets.active.range((1,1),(10,1)).options(transpose=True).raw_value = [0,1,2,3,4,5,6,7,8,9] thisbk.sheets.active.range((1,1),(10,1)).options(transpose=True).value = [0,1,2,3,4,5,6,7,8,9] ```
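A possible workaround, assuming raw_value bypasses the options/converter layer where transpose is applied: shape the data yourself as one inner list per row, so no transposition is needed:

```python
import xlwings as xw

thisbk = xw.Book.caller()
rng = thisbk.sheets.active.range((1, 1), (10, 1))

# raw_value appears to skip the converter pipeline, so transpose=True has
# no effect; hand it column-shaped data instead (one inner list per row).
rng.raw_value = [[v] for v in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
```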
1medium
Title: KeyError: all_points_y Body: I'm a beginner in deep learning. Recently I have been trying to use Mask R-CNN for a project on rust detection. I first watched the video on how to use this code, then modified the 'balloon' sample and tried to train on my own data, but I hit the error KeyError: 'all_points_y'. My dataset consists of binary images containing the rust areas (white) and the background (black). To match the format the code expects, I used edge detection to find the outlines of the rust areas, then saved the class name (only a 'rust' class) and the coordinates of the points on the outlines as a .json file in the required format (I first store the data in a dict, then use json.dump to write it out): #{ 'filename': '28503151_5b5b7ec140_b.jpg', # 'regions': { # '0': { # 'region_attributes': {}, # 'shape_attributes': { # 'all_points_x': [...], # 'all_points_y': [...], # 'name': 'polygon'}}, # ... more regions ... # }, # 'size': 100202 # } During training I saw the KeyError: 'all_points_y' message repeatedly, about 20 times, with 120 images in my training set and 30 in the validation set. So clearly not every image triggers the error, but I don't know how it happens. Searching only turned up one kind of answer: the annotated shapes should be polygons rather than circles or rectangles, but that obviously does not fit my case. Furthermore, when I load the json file back, the order of the keys usually changes. For example, when I save the data the format is: 'shape_attributes': { 'all_points_x': [...], 'all_points_y': [...], 'name': 'rust'} but when I load it back it becomes: 'all_points_y': [...], 'name': 'rust', 'all_points_x': [...]} Does this affect the training process? PS: 1. I'm also a beginner in English. 2. Any help from you scholars (dalao) is appreciated, thanks!
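Two notes. The changed key order on reload is harmless: JSON objects are unordered, and the loading code accesses fields by name. The KeyError suggests some regions lack polygon coordinates; below is a hedged sketch of a defensive load step, modeled on balloon.py's style rather than copied from the repo:

```python
import json

# Sketch: skip any region that is not a complete polygon instead of
# crashing on a missing 'all_points_y' key (illustrative file name).
annotations = json.load(open("via_region_data.json"))
for a in annotations.values():
    regions = a["regions"].values() if isinstance(a["regions"], dict) else a["regions"]
    polygons = [
        r["shape_attributes"]
        for r in regions
        if r["shape_attributes"].get("name") == "polygon"
        and "all_points_x" in r["shape_attributes"]
        and "all_points_y" in r["shape_attributes"]
    ]
```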
1medium
Title: ploomber interact should also display the custom CLI from pipeline parameters Body:
1medium
Title: Scene not rotating if cursor below the menu Body: If the cursor is anywhere below the controls menu, rotating via click and drag does not work. The issue seems to be that the dg div sets its height to the entire window rather than just the height of the menu.
1medium
Title: How to pass hyper-parameters to the model? Body:
1medium
Title: ENH: Session id is prefixed with the K8s namespace when in a K8s environment Body: Note that the issue tracker is NOT the place for general support. For discussions about development, questions about usage, or any general questions, contact us on https://discuss.xorbits.io/.
1medium
Title: OpenAPI v3 Handling Issue Body: Hello, We have had a BlackSheep app running for over a year. When we attempted to upgrade to 2.0.7, we ran into this error. It seems to be an error in the fundamental OpenAPI class. Since this was working fine until today, I think that it must be a bug in 1.0.9. Here is the error message: ```Traceback (most recent call last): File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 307, in _get_array_outer_type return field_info.outer_type_ AttributeError: 'FieldInfo' object has no attribute 'outer_type_' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/application.py", line 726, in _handle_lifespan await self.start() File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/application.py", line 715, in start await self.after_start.fire() File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/application.py", line 126, in fire await handler(self.context, *args, **kwargs) File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/common.py", line 404, in build_docs docs = self.generate_documentation(app) File "/home/mistral/llm_server_env/datascience-llm-server/app/docs/handler.py", line 34, in generate_documentation paths=self.get_paths(app), File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 449, in get_paths own_paths = self.get_routes_docs(app.router, path_prefix) File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 1146, in get_routes_docs request_body=self.get_request_body(handler), File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 847, in get_request_body content=self._get_body_binder_content_type(body_binder, body_examples), File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 821, in _get_body_binder_content_type return { File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 823, in <dictcomp> schema=self.get_schema_by_type(body_binder.expected_type), File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 642, in get_schema_by_type schema = self._get_schema_by_type(child_type, type_args) File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 663, in _get_schema_by_type return self._get_schema_for_class(object_type) File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 567, in _get_schema_for_class for field in self.get_fields(object_type): File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 739, in get_fields return handler.get_type_fields( File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 342, in get_type_fields return [ File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 345, in <listcomp> self._open_api_v2_field_schema_to_type( File "/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 297, in _open_api_v2_field_schema_to_type return self._get_array_outer_type(field_info) File 
"/home/mistral/llm_server_env/lib/python3.10/site-packages/blacksheep/server/openapi/v3.py", line 311, in _get_array_outer_type return List[field_info.annotation.__args__[0]] AttributeError: type object 'list' has no attribute '__args__'. Did you mean: '__add__'?```
2hard
Title: setup.py does not recognize Anaconda's OpenCV Body: When running the setup, an error occurs. OpenCV is installed via Anaconda. Is it possible to install imgaug on Anaconda? ... Processing ./dist/imgaug-0.2.0.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-K5MRPU-build/setup.py", line 6, in <module> raise Exception("Could not find package 'cv2' (OpenCV). It cannot be automatically installed, so you will have to manually install it.") Exception: Could not find package 'cv2' (OpenCV). It cannot be automatically installed, so you will have to manually install it.
1medium
Title: 504 Gateway Time-out Body: ### Describe the bug I am getting a 504 Gateway Time-out today. It worked fine when I used it two days ago. My versions are gradio 5.15.0 and gradio-client 1.7.0. ### Have you searched existing issues? 🔎 - [x] I have searched and found no existing issues ### Reproduction ```python import gradio as gr ``` ### Screenshot _No response_ ### Logs ```shell ``` ### System Info ```shell Gradio Environment Information: ------------------------------ Operating System: Linux gradio version: 5.15.0 gradio_client version: 1.7.0 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.8.0 audioop-lts is not installed. fastapi: 0.115.8 ffmpy: 0.5.0 gradio-client==1.7.0 is not installed. httpx: 0.28.1 huggingface-hub: 0.28.1 jinja2: 3.1.4 markupsafe: 2.1.5 numpy: 1.26.4 orjson: 3.10.15 packaging: 24.2 pandas: 2.2.3 pillow: 10.4.0 pydantic: 2.10.6 pydub: 0.25.1 python-multipart: 0.0.20 pyyaml: 6.0.2 ruff: 0.9.5 safehttpx: 0.1.6 semantic-version: 2.10.0 starlette: 0.45.3 tomlkit: 0.12.0 typer: 0.15.1 typing-extensions: 4.12.2 urllib3: 2.3.0 uvicorn: 0.34.0 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2024.6.1 httpx: 0.28.1 huggingface-hub: 0.28.1 packaging: 24.2 typing-extensions: 4.12.2 websockets: 11.0.3 ``` ### Severity I can work around it
1medium
Title: Fix Ivy Failing Test: paddle - creation.arange Body: To-Do List: https://github.com/unifyai/ivy/issues/27501
1medium
Title: Speeding up Loading Time of encoder in ``encoder/inference.py`` Body: Original comment from @CorentinJ: TODO: I think the slow loading of the encoder might have something to do with the device it was saved on. Worth investigating. This refers to the ``load_model`` function in the named module.
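A hedged sketch of the usual remedy when a checkpoint was saved from a GPU: map storages to the target device at load time, so deserialization does not route through CUDA first (the file name and key handling here are illustrative, not necessarily the repo's):

```python
import torch

# Sketch: map storages straight to the target device at load time; a
# checkpoint saved from CUDA otherwise deserializes onto the GPU first.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
checkpoint = torch.load("encoder.pt", map_location=device)  # path illustrative
# ...then e.g. model.load_state_dict(...) with the checkpoint's usual keys.
```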
1medium
Title: Gradio Demo Malfunction on Hugging Face Spaces Body: ### Describe the bug Hi Team, We’ve been hosting a Gradio demo on Hugging Face Spaces (zero GPU) at [this link](https://huggingface.co/spaces/facebook/vggsfm), which has been running smoothly for several months. However, today a user reported that it’s no longer functioning. I’ve rebuilt the factory but it seems does not help. I checked the backend and retrieved the error logs as attached below. My best guess is that the version of Gradio on Hugging Face Spaces might have been updated, possibly leading to incompatibilities with the old version. The error logs are somewhat vague, making it difficult to pinpoint the exact issue. Is there any insight on resolving this, or could you point me towards the relevant documentation? It is much appreciated :) Best, Jianyuan ### Have you searched existing issues? 🔎 - [x] I have searched and found no existing issues ### Reproduction https://huggingface.co/spaces/facebook/vggsfm/tree/main ### Screenshot _No response_ ### Logs ```shell ZeroGPU tensors packing: 0.00B [00:00, ?B/s] ZeroGPU tensors packing: 0.00B [00:00, ?B/s] Running on local URL: http://0.0.0.0:7860 INFO:httpx:HTTP Request: GET http://localhost:7860/startup-events "HTTP/1.1 200 OK" INFO:httpx:HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK" /usr/local/lib/python3.10/site-packages/gradio/blocks.py:2434: UserWarning: Setting share=True is not supported on Hugging Face Spaces warnings.warn( To create a public link, set `share=True` in `launch()`. INFO:httpx:HTTP Request: POST http://device-api.zero/schedule?cgroupPath=%2Fkubepods.slice%2Fkubepods-burstable.slice%2Fkubepods-burstable-pod1372a787_40e1_4b6a_8d19_2f4b46eca6e6.slice%2Fcri-containerd-edf0748ce7994c4de66b1374d62b458da7869fb7e8cf60b41a1dd4f939fa1c54.scope&taskId=140223021710736&enableQueue=true&durationSeconds=240&token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpcCI6IjE2My4xMTQuMTMxLjEyOSIsInVzZXIiOiJKaWFueXVhbldhbmciLCJ1dWlkIjpudWxsLCJlcnJvciI6bnVsbCwiZXhwIjoxNzM3MTQ3MDczfQ.WQjeWxsjH1Fnj2DefdNZDKeDjKERP1tDwwXRWPk4w6E "HTTP/1.1 200 OK" INFO:httpx:HTTP Request: POST http://device-api.zero/allow?allowToken=9264c9602de0db756c3f6b1d2e9d1cab36b9b217010d3f1110b88b4ef3f646f6&pid=313 "HTTP/1.1 200 OK" INFO:httpx:HTTP Request: POST http://device-api.zero/release?allowToken=9264c9602de0db756c3f6b1d2e9d1cab36b9b217010d3f1110b88b4ef3f646f6&fail=true "HTTP/1.1 200 OK" Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 135, in worker_init torch.init(nvidia_uuid) File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 373, in init torch.Tensor([0]).cuda() File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init torch._C._cuda_init() RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? 
Error 304: OS call failed or operation not supported on this OS Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 532, in process_events response = await route_utils.call_process_api( File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 276, in call_process_api output = await app.get_blocks().process_api( File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1928, in process_api result = await self.call_function( File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1514, in call_function prediction = await anyio.to_thread.run_sync( File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync return await get_async_backend().run_sync_in_worker_thread( File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread return await future File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run result = context.run(func, *args) File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 832, in wrapper response = f(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 214, in gradio_handler raise error("ZeroGPU worker error", res.error_cls) gradio.exceptions.Error: 'RuntimeError' INFO:httpx:HTTP Request: POST http://device-api.zero/schedule?cgroupPath=%2Fkubepods.slice%2Fkubepods-burstable.slice%2Fkubepods-burstable-pod1372a787_40e1_4b6a_8d19_2f4b46eca6e6.slice%2Fcri-containerd-edf0748ce7994c4de66b1374d62b458da7869fb7e8cf60b41a1dd4f939fa1c54.scope&taskId=140223021710736&enableQueue=true&durationSeconds=240&token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpcCI6IjE2My4xMTQuMTMxLjEyOSIsInVzZXIiOiJKaWFueXVhbldhbmciLCJ1dWlkIjpudWxsLCJlcnJvciI6bnVsbCwiZXhwIjoxNzM3MTQ3MjU0fQ.E_mE7SO_dJKy00chpXUrF0QHwHzRUzWtCtTRjbdIPrA "HTTP/1.1 200 OK" INFO:httpx:HTTP Request: POST http://device-api.zero/allow?allowToken=ea91d23b086f5770c368d624fac358a31965f04141384f50a0eb2f42c892040e&pid=317 "HTTP/1.1 200 OK" INFO:httpx:HTTP Request: POST http://device-api.zero/release?allowToken=ea91d23b086f5770c368d624fac358a31965f04141384f50a0eb2f42c892040e&fail=true "HTTP/1.1 200 OK" Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 135, in worker_init torch.init(nvidia_uuid) File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 373, in init torch.Tensor([0]).cuda() File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init torch._C._cuda_init() RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? 
Error 304: OS call failed or operation not supported on this OS Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 532, in process_events response = await route_utils.call_process_api( File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 276, in call_process_api output = await app.get_blocks().process_api( File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1928, in process_api result = await self.call_function( File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1514, in call_function prediction = await anyio.to_thread.run_sync( File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync return await get_async_backend().run_sync_in_worker_thread( File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread return await future File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run result = context.run(func, *args) File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 832, in wrapper response = f(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 214, in gradio_handler raise error("ZeroGPU worker error", res.error_cls) gradio.exceptions.Error: 'RuntimeError' ``` ### System Info ```shell hugging face spaces, which seems to be using gradio version 4.36.1 ``` ### Severity Blocking usage of gradio
1medium
Title: NaN Representative_Docs Body: Hi @MaartenGr, I keep getting NaN values as representative documents when I load my model after saving it with either 'safetensors' or 'pytorch'. Here is my code: ```python embedding_model = SentenceTransformer('all-MiniLM-L6-v2') topic_model.save('/content/drive/MyDrive/boombust_cs_model', serialization="safetensors", save_embedding_model=embedding_model) BERTopic.load('/content/drive/MyDrive/boombust_cs_model', embedding_model=embedding_model) ``` What might be the issue?
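One hedged workaround to try, assuming the default pickle serialization stores the full fitted object (unlike the lightweight safetensors/pytorch formats) and therefore keeps the representative documents across a save/load round-trip:

```python
from bertopic import BERTopic

# `topic_model` is the fitted model from the snippet above.
# Assumption: pickle (the default serialization) preserves fitted
# attributes such as representative docs; path is illustrative.
topic_model.save('/content/drive/MyDrive/boombust_cs_model_pickle')
loaded_model = BERTopic.load('/content/drive/MyDrive/boombust_cs_model_pickle')
```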
1medium
Title: ak.stock_zh_a_hist() returns incorrect data Body: The following concerns incorrect data in the rows of the df returned by ak.stock_zh_a_hist() where the "单日情况" (daily detail) column equals "成交金额" (turnover): 1. Code: stock_zh_a_hist_df = ak.stock_zh_a_hist( symbol="603777", period="daily", start_date="20240101", end_date="20250201", adjust="qfq" ) 2. Problem: the retrieved dataset contains negative closing prices. 3. Problem: running the code again returns an empty dataset. 4. Versions: Python 3.8.10, Akshare 1.15.22
1medium
Title: Can it be easily used in Microsoft Singularity? Body: Many users are using clusters. However, NNI does not yet provide an interface for easy adaptation to such clusters (job scheduling, job maintenance, result aggregation, and metric calculation are unsupported), which significantly limits NNI's usability on advanced clusters such as Singularity. **What would you like to be added**: An easy port to the Singularity cluster in Microsoft. **Why is this needed**: Ease of use on common clusters is important for industrial users, especially those at Microsoft. **Without this feature, how does current nni work**: It does not work yet and is very hard to use. **Components that may involve changes**: Job scheduler, metric calculator and visualization tools. **Brief description of your proposal if any**:
2hard
Title: No target coloring in jointplot Body: **Describe the bug** After passing `y` values to the `fit()` method of `JointPlot`, the samples are not colored by the target (no heatmap-style coloring is drawn). **To Reproduce** ```python import numpy as np from yellowbrick.features.jointplot import JointPlot X = np.random.rand(100, 2) y = np.random.rand(100) viz = JointPlot(columns=[0, 1]) # here, fit_transform is just fit viz.fit(X=X, y=y) viz.show() ``` ![image](https://user-images.githubusercontent.com/18686697/97182885-5fd48680-179d-11eb-8db7-e2eebc54b100.png) **Dataset** No. **Expected behavior** There should be a heatmap coloring the samples by the values in `y`, as in the PCA case. ![image](https://user-images.githubusercontent.com/18686697/97187357-a973a000-17a2-11eb-8628-eee182d39547.png) **Traceback** ``` If applicable, add the traceback from the exception. ``` **Desktop (please complete the following information):** - OS: Debian 10 - Python Version 3.7 - Yellowbrick Version 1.2 **Additional context**
1medium
Title: [BUG] `df.sort_values` produces incorrect result Body: <!-- Thank you for your contribution! Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue. --> **Describe the bug** `df.sort_values`'s result is incorrect. **To Reproduce** ``` Python In [10]: df = pd.DataFrame( ...: np.random.rand(100, 10), columns=["a" + str(i) for i in range(10)] ...: ) In [11]: mdf = md.DataFrame(df, chunk_size=10) In [12]: r = mdf.sort_values(["a3", "a4"], ascending=[False, True]).execute() 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100.0/100 [00:00<00:00, 631.75it/s] In [13]: r Out[13]: a0 a1 a2 a3 a4 a5 a6 a7 a8 a9 16 0.107025 0.476072 0.335091 0.982758 0.100404 0.535478 0.843878 0.993314 0.519650 0.039456 22 0.918816 0.782424 0.685651 0.981538 0.054176 0.204336 0.164184 0.545094 0.626901 0.001013 27 0.102054 0.078257 0.412166 0.977549 0.943098 0.095842 0.908522 0.078090 0.321839 0.579264 82 0.948665 0.387145 0.140989 0.962591 0.253510 0.053363 0.695930 0.322598 0.434367 0.831326 37 0.921237 0.795837 0.419291 0.957214 0.029907 0.224950 0.239270 0.694234 0.494518 0.698810 .. ... ... ... ... ... ... ... ... ... ... 38 0.479132 0.447279 0.262018 0.047719 0.232861 0.967857 0.608678 0.285415 0.385973 0.443056 1 0.318765 0.664569 0.631351 0.045131 0.163595 0.965267 0.037361 0.044477 0.963650 0.140346 11 0.391554 0.169611 0.384232 0.040313 0.397935 0.822954 0.042206 0.522298 0.944956 0.611841 67 0.022653 0.053233 0.813252 0.020961 0.366821 0.261931 0.592673 0.948731 0.476598 0.604238 20 0.955453 0.521866 0.419302 0.007513 0.303412 0.231128 0.984855 0.439811 0.755543 0.441908 [200 rows x 10 columns] In [14]: mdf.shape Out[14]: (100, 10) ```
1medium
Title: CPU not used on M1 Mac Body: I've been using the program for a while, and it works well. However, I found that it barely uses the CPU on the M1: while the GPU is fully loaded, the CPU sits nearly idle. Does it use the CPU on other platforms, or only the GPU? I've heard that Apple has built-in machine-learning units in the M1 chip; maybe we can make use of them in a future update.
1medium
Title: [BUG] Ray context GC bug Body: <!-- Thank you for your contribution! Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue. --> **Describe the bug** A clear and concise description of what the bug is. Set the `DEFAULT_SUBTASK_MONITOR_INTERVAL` to 0 in `mars/services/task/execution/ray/config.py`, then run `mars/dataframe/base/tests/test_base_execution.py::test_cut_execution` with env `MARS_CI_BACKEND=ray`. The case will fail with the following exception: ``` python mars/dataframe/base/tests/test_base_execution.py::test_cut_execution FAILED [100%] mars/dataframe/base/tests/test_base_execution.py:777 (test_cut_execution) setup = <mars.deploy.oscar.session.SyncSession object at 0x12d589e10> @pytest.mark.ray_dag def test_cut_execution(setup): session = setup rs = np.random.RandomState(0) raw = rs.random(15) * 1000 s = pd.Series(raw, index=[f"i{i}" for i in range(15)]) bins = [10, 100, 500] ii = pd.interval_range(10, 500, 3) labels = ["a", "b"] t = tensor(raw, chunk_size=4) series = from_pandas_series(s, chunk_size=4) iii = from_pandas_index(ii, chunk_size=2) # cut on Series r = cut(series, bins) result = r.execute().fetch() pd.testing.assert_series_equal(result, pd.cut(s, bins)) r, b = cut(series, bins, retbins=True) r_result = r.execute().fetch() b_result = b.execute().fetch() r_expected, b_expected = pd.cut(s, bins, retbins=True) pd.testing.assert_series_equal(r_result, r_expected) np.testing.assert_array_equal(b_result, b_expected) # cut on tensor r = cut(t, bins) # result and expected is array whose dtype is CategoricalDtype result = r.execute().fetch() expected = pd.cut(raw, bins) assert len(result) == len(expected) for r, e in zip(result, expected): np.testing.assert_equal(r, e) # one chunk r = cut(s, tensor(bins, chunk_size=2), right=False, include_lowest=True) result = r.execute().fetch() pd.testing.assert_series_equal( result, pd.cut(s, bins, right=False, include_lowest=True) ) # test labels r = cut(t, bins, labels=labels) # result and expected is array whose dtype is CategoricalDtype result = r.execute().fetch() expected = pd.cut(raw, bins, labels=labels) assert len(result) == len(expected) for r, e in zip(result, expected): np.testing.assert_equal(r, e) r = cut(t, bins, labels=False) # result and expected is array whose dtype is CategoricalDtype result = r.execute().fetch() expected = pd.cut(raw, bins, labels=False) np.testing.assert_array_equal(result, expected) # test labels which is tensor labels_t = tensor(["a", "b"], chunk_size=1) r = cut(raw, bins, labels=labels_t, include_lowest=True) # result and expected is array whose dtype is CategoricalDtype result = r.execute().fetch() expected = pd.cut(raw, bins, labels=labels, include_lowest=True) assert len(result) == len(expected) for r, e in zip(result, expected): np.testing.assert_equal(r, e) # test labels=False r, b = cut(raw, ii, labels=False, retbins=True) # result and expected is array whose dtype is CategoricalDtype r_result, b_result = session.fetch(*session.execute(r, b)) r_expected, b_expected = pd.cut(raw, ii, labels=False, retbins=True) for r, e in zip(r_result, r_expected): np.testing.assert_equal(r, e) pd.testing.assert_index_equal(b_result, b_expected) # test bins which is md.IntervalIndex r, b = cut(series, iii, labels=tensor(labels, chunk_size=1), retbins=True) r_result = r.execute().fetch() > b_result = b.execute().fetch() mars/dataframe/base/tests/test_base_execution.py:858: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
mars/core/entity/executable.py:164: in fetch return self._fetch(session=session, **kw) mars/core/entity/executable.py:161: in _fetch return fetch(self, session=session, **kw) mars/deploy/oscar/session.py:1941: in fetch return session.fetch(tileable, *tileables, **kwargs) mars/deploy/oscar/session.py:1720: in fetch return asyncio.run_coroutine_threadsafe(coro, self._loop).result() ../../.pyenv/versions/3.7.7/lib/python3.7/concurrent/futures/_base.py:435: in result return self.__get_result() ../../.pyenv/versions/3.7.7/lib/python3.7/concurrent/futures/_base.py:384: in __get_result raise self._exception mars/deploy/oscar/session.py:1909: in _fetch data = await session.fetch(tileable, *tileables, **kwargs) mars/deploy/oscar/tests/session.py:68: in fetch results = await super().fetch(*tileables) mars/deploy/oscar/session.py:1126: in fetch chunk_metas = await self._meta_api.get_chunk_meta.batch(*get_chunk_metas) mars/oscar/batch.py:146: in _async_batch return [await self._async_call(*args_list[0], **kwargs_list[0])] mars/oscar/batch.py:95: in _async_call return await self.func(*args, **kwargs) mars/services/meta/api/oscar.py:179: in get_chunk_meta return await self._meta_store.get_meta(object_id, fields=fields, error=error) mars/oscar/core.pyx:263: in __pyx_actor_method_wrapper async with lock: mars/oscar/core.pyx:266: in mars.oscar.core.__pyx_actor_method_wrapper result = await result mars/oscar/batch.py:95: in _async_call return await self.func(*args, **kwargs) mars/services/meta/store/dictionary.py:95: in get_meta return self._get_meta(object_id, fields=fields, error=error) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <mars.services.meta.store.dictionary.DictMetaStore object at 0x12ed69210> object_id = '8553d79ca62df9bbf3150edc97f20b79_0', fields = ('object_refs',) error = 'raise' def _get_meta( self, object_id: str, fields: List[str] = None, error: str = "raise" ) -> Dict: if error not in ("raise", "ignore"): # pragma: no cover raise ValueError("error must be raise or ignore") try: > meta = self._store[object_id] E KeyError: '8553d79ca62df9bbf3150edc97f20b79_0' mars/services/meta/store/dictionary.py:80: KeyError ``` The bug was introduced by https://github.com/mars-project/mars/pull/3061. **To Reproduce** To help us reproducing this bug, please provide information below: 1. Your Python version 3.7.7 2. The version of Mars you use Latest master 3. Versions of crucial packages, such as numpy, scipy and pandas 4. Full stack of the error. 5. Minimized code to reproduce the error. **Expected behavior** A clear and concise description of what you expected to happen. **Additional context** Add any other context about the problem here.
2hard
Title: Multi-model metrics visualizer Body: **Describe the solution you'd like** I would like to create an at-a-glance representation of multiple model scores so that I can easily compare and contrast different model instances. This will be our first attempt handling multiple models in a visualizer - so could be tricky, and may require a new API. I envision something that creates a heatmap of metrics to models, sort of like the classification report, but where the rows are not classes but are instead are models. I propose the code would look something like this: ```python viz = MultiModelMetrics([ ("Naive Bayes", GaussianNB()), ("Neural Network", MultilayerPerceptron()), ("Logistic", LogisticRegression()), ("Boosting", GradientBoostingClassifier()), ("Bagging", RandomForestClassifier()), ], is_fitted=False, metrics="classification") viz.fit(X_train, y_train) viz.score(X_test, y_test) viz.show() ``` Like a pipeline, this API allows us to specify names for the estimator that will be visualized, or a list of visualizers can be added and the estimator name will be used. **Examples** A prototype example: ![multimodelscores](https://user-images.githubusercontent.com/745966/75354447-f94c4100-587a-11ea-887e-47ac13738eda.png)
2hard
Title: gr.Dataframe dynamic update Body: - [x] I have searched to see if a similar issue already exists. **Is your feature request related to a problem? Please describe.** Cannot dynamically yield Dataframe update to a gr.Dataframe() **Describe the solution you'd like** I want to dynamically update a gr.Dataframe based on a single button click **Additional context** See below code. The gr.Dataframe updates once, then never again ```py import gradio as gr import pandas as pd from time import sleep initial_data = { 'category': ["cat1", "cat2", "cat2", "cat3", "cat4", "cat5", "cat6", "cat7", "cat8", "cat9", "cat10", "cat11"], 'Model 1': [0, 11395, 5732, 0, 0, 0, 344, 2856, 812, 0, 7965, 0], 'Model 2': [0, 5391, 7716, 0, 0, 0, 0, 45, 0, 0, 525, 0] } df_initial = pd.DataFrame(initial_data) def update_dataframe(df): for i in range(10): df['Model 1'] = df['Model 1'] + i df['Model 2'] = df['Model 2'] + (i * 2) sleep(1) print("updating") yield df with gr.Blocks() as demo: gr.Markdown("### Dynamic DataFrame Update Example") df_component = gr.Dataframe(value=df_initial, label="Editable DataFrame", type="pandas", interactive=True,render=True) update_button = gr.Button("Update DataFrame 10 Times") update_button.click(update_dataframe, inputs=[df_component], outputs=[df_component]) demo.launch() ```
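One hedged guess: the generator mutates the same DataFrame object in place, and the frontend may not register an update for what looks like an unchanged object. A sketch that yields a fresh copy each iteration, with the event wiring left exactly as in the original snippet:

```python
import pandas as pd
from time import sleep

def update_dataframe(df):
    for i in range(10):
        df = df.copy()  # yield a new object instead of mutating in place
        df['Model 1'] = df['Model 1'] + i
        df['Model 2'] = df['Model 2'] + (i * 2)
        sleep(1)
        yield df
```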
1medium
Title: Functional API does not work as expected when concatenating two models with multiple outputs & inputs Body: Keras version: 3.6.0 OS: Win Hello, Let's say I've got the following two models `A` and `B`: ```python A_input = keras.Input(shape=(4,)) A = keras.layers.Dense(5)(A_input) A = keras.Model(inputs=A_input, outputs=[ keras.layers.Dense(4)(A), keras.layers.Dense(4)(A) ]) ``` ![model](https://github.com/user-attachments/assets/4994021c-a0f3-444a-87c3-f94cecf496dc) ```python B_input = [ keras.Input(shape=(4,)), keras.Input(shape=(4,)) ] B = keras.layers.Concatenate()(B_input) B = keras.layers.Dense(5)(B) B = keras.Model(inputs = B_input, outputs=B) ``` ![model](https://github.com/user-attachments/assets/5e9c4706-e337-4e20-af79-28f5df69492a) and I want to merge them into one model via `keras.Model(inputs=A_input, outputs=B(A))`, which unfortunately crashes: ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[20], line 1 ----> 1 merged = keras.Model(inputs=A_input, outputs=B(A)) # why not work? File c:\Users\Marcin\.miniconda3\envs\torch\Lib\site-packages\keras\src\utils\traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs) 119 filtered_tb = _process_traceback_frames(e.__traceback__) 120 # To get the full stack trace, call: 121 # `keras.config.disable_traceback_filtering()` --> 122 raise e.with_traceback(filtered_tb) from None 123 finally: 124 del filtered_tb File c:\Users\Marcin\.miniconda3\envs\torch\Lib\site-packages\keras\src\layers\input_spec.py:160, in assert_input_compatibility(input_spec, inputs, layer_name) 158 inputs = tree.flatten(inputs) 159 if len(inputs) != len(input_spec): --> 160 raise ValueError( 161 f'Layer "{layer_name}" expects {len(input_spec)} input(s),' 162 f" but it received {len(inputs)} input tensors. " 163 f"Inputs received: {inputs}" 164 ) 165 for input_index, (x, spec) in enumerate(zip(inputs, input_spec)): 166 if spec is None: ValueError: Layer "functional_4" expects 2 input(s), but it received 1 input tensors. Inputs received: [<Functional name=functional_2, built=True>] ``` This looks like a bug to me, because the following works: ```python B(A(keras.ops.ones(shape=(1, 4)))) # works tensor([[-0.2388, -0.3490, -0.3166, 0.2736, -1.2349]], device='cuda:0', grad_fn=<AddBackward0>) ``` As a temporary workaround I've found the following way to create the merged model: ```python merged = keras.Model(inputs=A_input, outputs=B(A(A_input))) ``` but it has a caveat: it plots the model with a loop at the input: ![image](https://github.com/user-attachments/assets/dc02dc28-ebb7-4526-a88c-9860f2330115)
2hard
Title: Python socketio[client] giving the following error on Raspberry Pi Body: Hi, I have a piece of code that works fine on Windows, but when I try to run it on my Raspberry Pi, it returns an error and is not able to connect to the server. ``` Traceback (most recent call last): File "/home/pi/.local/lib/python3.7/site-packages/socketio/client.py", line 279, in connect engineio_path=socketio_path) File "/home/pi/.local/lib/python3.7/site-packages/engineio/client.py", line 187, in connect url, headers or {}, engineio_path) File "/home/pi/.local/lib/python3.7/site-packages/engineio/client.py", line 306, in _connect_polling 'OPEN packet not returned by server') engineio.exceptions.ConnectionError: OPEN packet not returned by server During handling of the above exception, another exception occurred: Traceback (most recent call last): File "setup.py", line 93, in <module> sio.connect("*******") File "/home/pi/.local/lib/python3.7/site-packages/socketio/client.py", line 283, in connect exc.args[1] if len(exc.args) > 1 else exc.args[0]) File "/home/pi/.local/lib/python3.7/site-packages/socketio/client.py", line 547, in _trigger_event return self.handlers[namespace][event](*args) TypeError: connect_error() takes 0 positional arguments but 1 was given ``` My server is working fine (it is already hosted); proof of that is that my Windows code executed with no errors. Some of the code: ``` sio = socketio.Client() @sio.event def connect(): print("I'm connected!") #print('my sid is', sio.sid) sio.emit("**", **) open_radio() @sio.event def connect_error(): print("The connection failed!") @sio.event def disconnect(): print("I'm disconnected!") sio.connect(SERVER_URL) sio.wait() ```
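The last traceback line points at the handler signature: this version of python-socketio passes the server's error payload to the connect_error handler, so it must accept one argument. A minimal fix:

```python
@sio.event
def connect_error(data):
    # python-socketio calls this handler with the error payload,
    # so the signature must accept it.
    print("The connection failed!", data)
```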
1medium
Title: How to work with images without objects? Body: Hello! I want to detect one object type in an image, i.e. I have only one class. The thing is that I have images without that object, but I am very interested in evaluating the algorithm on those images to see whether it is prone to producing false positives on that type of image. The solution I have adopted is to add the image name to the images field of the COCO json without adding any annotation related to that image. That is, suppose the image with id=1 does not contain the object I want to segment; then no annotation linked to the image with id=1 will appear. Is the approach I have taken correct? Thank you very much
1medium
Title: Why can't I have multiple response codes in apispec? Body: Perhaps I'm doing this wrong, but even though my route has multiple `@blp.response` decorators and the actual calls return the correct information, the specs only show the top response and none of the error responses. I have this: ``` @blp.response(code=204, description="success") @blp.response(code=404, description="Failed updating shovel statuses") @blp.response(code=409, description="Database is empty or some shovel statuses requesting updates are missing.") @blp.response(code=500, description="Internal server error") def put(self, shovels): ``` And this is what I see in swagger: ![image](https://user-images.githubusercontent.com/41277488/93821950-424b5480-fc14-11ea-82aa-c35f88de77fa.png) Similarly, if I have a response(description='fine by me') above the 204 code, then I get the 200 response instead and don't see the 204 at all. I have a few cases where, depending on the type of data requested, I would return either a 200 or a 204, so I would like this kind of granularity as well. Again, am I doing something wrong? I'm using the latest version of flask-smorest: 0.24.1. I'm hoping I don't have to mess with @doc, but I will if someone can explain how to use it for multiple routes and functions. Thanks
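A hedged sketch of one direction with @doc: keep a single `@blp.response` for the success status and merge the extra statuses in as documentation-only entries. Whether this is the canonical flask-smorest way is an assumption on my part:

```python
# Sketch: @blp.doc feeds extra keys into the apispec operation, so the
# additional statuses show up in the docs without changing behavior.
@blp.response(code=204, description="success")
@blp.doc(responses={
    404: {"description": "Failed updating shovel statuses"},
    409: {"description": "Database is empty or some shovel statuses requesting updates are missing."},
    500: {"description": "Internal server error"},
})
def put(self, shovels):
    ...
```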
1medium
Title: Trio's nursery lifetime interacts badly with start_action Body: (Based on discussion in https://trio.discourse.group/t/eliot-the-causal-logging-library-now-supports-trio/167) Consider: ```python from eliot import start_action, to_file import trio to_file(open("trio.log", "w")) async def say(message, delay): with start_action(action_type="say", message=message): await trio.sleep(delay) async def main(): async with trio.open_nursery() as nursery: with start_action(action_type="main"): nursery.start_soon(say, "hello", 1) nursery.start_soon(say, "world", 2) trio.run(main) ``` The result: ``` 0ed1a1c3-050c-4fb9-9426-a7e72d0acfc7 └── main/1 ⇒ started 2019-04-26 13:01:13 ⧖ 0.000s └── main/2 ⇒ succeeded 2019-04-26 13:01:13 0ed1a1c3-050c-4fb9-9426-a7e72d0acfc7 └── <unnamed> ├── say/3/1 ⇒ started 2019-04-26 13:01:13 ⧖ 2.002s │ ├── message: world │ └── say/3/2 ⇒ succeeded 2019-04-26 13:01:15 └── say/4/1 ⇒ started 2019-04-26 13:01:13 ⧖ 1.001s ├── message: hello └── say/4/2 ⇒ succeeded 2019-04-26 13:01:14 ``` What happens is that the `start_action` finishes before the nursery schedules the `say()` calls, so they get logged after the action is finished. Putting the `start_action` outside the nursery lifetime fixes this. Depending how you look at this this is either: 1. A problem with Trio integration. 2. A design flaw in the parser (notice that all those messages are actually the same tree, the parser just decides the tree is finished too early). 3. A general problem with async context managers/contextvars/how actions finish.
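For reference, a sketch of the reordering mentioned above: the action is opened around the nursery, so it stays open until the children finish and the tree parses as one action:

```python
from eliot import start_action, to_file
import trio

to_file(open("trio.log", "w"))

async def say(message, delay):
    with start_action(action_type="say", message=message):
        await trio.sleep(delay)

async def main():
    # Keep the action open for the nursery's whole lifetime, so the
    # children log inside a still-running "main" action.
    with start_action(action_type="main"):
        async with trio.open_nursery() as nursery:
            nursery.start_soon(say, "hello", 1)
            nursery.start_soon(say, "world", 2)

trio.run(main)
```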
1medium
Title: EMA with high decay results in worse performance because of 1) no zero-init and 2) no debias. Body: When using [ModelEMAV2](https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/utils/model_ema.py#L82) with decay >=0.99999 and ~25k iterations, performance is worse than expected. Encountered this bug when fine-tuning on ImageNet with EMA. Fixed this locally by following the[ optax implementation of EMA](https://github.com/deepmind/optax/blob/252d152660300fc7fe22d214c5adbe75ffab0c4a/optax/_src/transform.py#L120-L158), posting here in case other people encounter the same thing. There are two things which differ in the optax implementation. 1) [EMA is initialized with zeros](https://github.com/deepmind/optax/blob/252d152660300fc7fe22d214c5adbe75ffab0c4a/optax/_src/transform.py#L143-L147). 2) [Bias correction is applied to EMA](https://github.com/deepmind/optax/blob/252d152660300fc7fe22d214c5adbe75ffab0c4a/optax/_src/transform.py#L103-L106). Apologies if I am missing something or mis-using the timm EMA implementation. Just figured this would be helpful to post in case others are using EMA with high decay. If I am not missing something, I'm happy to submit a PR for this.
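For anyone hitting the same thing, a minimal PyTorch sketch of the optax-style fix (zero initialization plus bias correction); this is illustrative, not the timm implementation:

```python
import torch

class DebiasedEMA:
    """Sketch of an optax-style EMA; not timm's ModelEmaV2."""

    def __init__(self, model, decay=0.99999):
        self.decay = decay
        self.step = 0
        # 1) zero-initialize the EMA state
        self.shadow = {
            name: torch.zeros_like(p) for name, p in model.named_parameters()
        }

    @torch.no_grad()
    def update(self, model):
        self.step += 1
        for name, p in model.named_parameters():
            self.shadow[name].mul_(self.decay).add_(p, alpha=1 - self.decay)

    def debiased(self):
        # 2) bias correction, as in Adam: early steps are dominated by the
        # zero init, so rescale by 1 / (1 - decay**step).
        correction = 1 - self.decay ** self.step
        return {name: v / correction for name, v in self.shadow.items()}
```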
1medium
Title: ENH: RST support Body: ### Feature Type - [X] Adding new functionality to pandas - [ ] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description I wish I could use ReStructured Text with pandas ### Feature Description The end users code: ```python import pandas as pd df=pd.read_rst(rst) df.to_rst() ``` I believe tabulate has a way to do this. ### Alternative Solutions I also built a way to make rst tables. ### Additional Context - [The RST docs](https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html#tables) I think `Grid Tables` would be best for pandas (or `Simple Tables`) I did not use sudo-code in the examples due to complexity and that examples of how to do this can be seen in the above packages. See RST docs for what they look like.
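For what it's worth, since `DataFrame.to_markdown` forwards keyword arguments to tabulate, RST output is already reachable today (sketch):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# to_markdown() passes **kwargs through to tabulate, so tablefmt="rst"
# renders an RST simple table without a dedicated to_rst().
print(df.to_markdown(tablefmt="rst"))
```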
1medium
Title: Question on fine-tuning document form parsing labeling requirement Body: My goal is to read a specific field (say, box 30) from a nationally standardized insurance claim form. The form has 40 boxes/fields in fixed locations and each boxed is labeled clearly with box number and title. To save annotation time, I would like our labeling team to annotate the text from box 30 only (ignore all other boxes in the form). If I fine-tune on such annotations, is donut expected to give good results or not? If we have to annotate the entire form box-by-box, the time it takes will be over 10x longer.
1medium
Title: Technical Issue in code Body: ``` public class HomeController : Controller { private readonly MLContext _mlContext; private readonly PredictionEngine<SignLanguageInput, SignLanguageOutput> _predictionEngine; public HomeController() { _mlContext = new MLContext(); // Load ONNX model var modelPath = Path.Combine(Directory.GetCurrentDirectory(), "wwwroot", "models", "hand_landmark_sparse_Nx3x224x224.onnx"); var dataView = _mlContext.Data.LoadFromEnumerable(new List<SignLanguageInput>()); var pipeline = _mlContext.Transforms.ApplyOnnxModel(modelPath); var trainedModel = pipeline.Fit(dataView); _predictionEngine = _mlContext.Model.CreatePredictionEngine<SignLanguageInput, SignLanguageOutput>(trainedModel); } public IActionResult Index() { return View(); } public IActionResult PredictGesture([FromBody] string base64Image) { if (string.IsNullOrEmpty(base64Image) || base64Image == "data:,") return BadRequest("Image data is missing!"); // Decode the base64 image and save it temporarily var imageBytes = Convert.FromBase64String(base64Image.Replace("data:image/png;base64,", "")); var tempImagePath = Path.Combine(Path.GetTempPath(), $"{Guid.NewGuid()}.png"); System.IO.File.WriteAllBytes(tempImagePath, imageBytes); try { // Preprocess the image to create input tensor var inputTensor = PreprocessImage(tempImagePath, 92, 92); float[] inputArray = inputTensor.ToArray(); // Create input for prediction var input = new SignLanguageInput { input = inputArray }; // Predict gesture var result = _predictionEngine.Predict(input); if (result != null) { return Ok(new { Prediction = result.Label, Confidence = result.Confidence, }); } else { return StatusCode(500, "Prediction returned null"); } } catch (Exception ex) { return StatusCode(500, $"Internal server error: {ex.Message}"); } finally { // Clean up temporary file System.IO.File.Delete(tempImagePath); } } private DenseTensor<float> PreprocessImage(string imagePath, int width, int height) { using var bitmap = new Bitmap(imagePath); using var resized = new Bitmap(bitmap, new Size(width, height)); int channels = 3; // RGB var tensor = new float[channels * width * height]; int index = 0; for (int y = 0; y < resized.Height; y++) { for (int x = 0; x < resized.Width; x++) { var pixel = resized.GetPixel(x, y); // Channel-first format (C, H, W) tensor[index + 0] = pixel.R / 255f; // Red tensor[index + 1] = pixel.G / 255f; // Green tensor[index + 2] = pixel.B / 255f; // Blue index += channels; } } // Create a DenseTensor directly from the preprocessed image var tensorShape = new[] { 1, 3, height, width }; // NCHW format return new DenseTensor<float>(tensor, tensorShape); // Return as DenseTensor } } ``` I have given my code (in mvc). In this code, i am getting error on line "**var result = _predictionEngine.Predict(input);**" and error is "**System.ArgumentOutOfRangeException: 'Index was out of range. Must be non-negative and less than the size of the collection. (Parameter 'index')'**" Using package : .Net Framework : net8.0 Microsoft.ML : Version="4.0.0" Microsoft.ML.OnnxRuntime.Gpu Version="1.20.1" Microsoft.ML.OnnxRuntime.Managed Version="1.20.1" Microsoft.ML.OnnxTransformer Version="4.0.0" SixLabors.ImageSharp Version="3.1.6" System.Drawing.Common Version="9.0.0" and using window is "Window 11" with 64bit OS
1medium
Title: Globaleaks does not start Body: **Describe the bug** Globaleaks doesn't start **To Reproduce** `globaleaks start` WARNING: The current long term supported platform is Debian 11 (bullseye) WARNING: It is recommended to use only this platform to ensure stability and security WARNING: To upgrade your system consult: https://docs.globaleaks.org/en/main/user/admin/UpgradeGuide.html ` systemctl status globaleaks globaleaks.service ` globaleaks.service - LSB: Start the GlobaLeaks server. Loaded: loaded (/etc/init.d/globaleaks; generated) Active: failed (Result: exit-code) since Mon 2022-09-19 17:08:24 CEST; 3min 1s ago Docs: man:systemd-sysv-generator(8) Process: 957 ExecStart=/etc/init.d/globaleaks start (code=exited, status=1/FAILURE) Sep 19 17:08:23 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it globaleaks[957]: * Enabling Globaleaks Network Sandboxing... Sep 19 17:08:23 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it globaleaks[957]: ...done. Sep 19 17:08:24 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it globaleaks[957]: WARNING: The current long term supported platform is Debian 11 (bullseye) Sep 19 17:08:24 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it globaleaks[957]: WARNING: It is recommended to use only this platform to ensure stability and security Sep 19 17:08:24 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it globaleaks[957]: WARNING: To upgrade your system consult: https://docs.globaleaks.org/en/main/user/admin/UpgradeGuide.html Sep 19 17:08:24 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it globaleaks[957]: Unable to start GlobaLeaks: [Errno 13] Permission denied: '/var/globaleaks/globaleaks.pid' Sep 19 17:08:24 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it globaleaks[957]: ...fail! Sep 19 17:08:24 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it systemd[1]: globaleaks.service: Control process exited, code=exited status=1 Sep 19 17:08:24 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it systemd[1]: globaleaks.service: Failed with result 'exit-code'. Sep 19 17:08:24 tst-xxxxx-py-leaks01-tstweb.site02.xxxxx.it systemd[1]: Failed to start LSB: Start the GlobaLeaks server.. **Log** 2022-09-19 17:11:10+0200 [-] [E] Found an already initialized database version: 63 2022-09-19 17:11:11+0200 [-] Starting factory <Site object at 0x7f7c9bab2550> 2022-09-19 17:11:11+0200 [-] GlobaLeaks is now running and accessible at the following urls: 2022-09-19 17:11:11+0200 [-] - [HTTP] --> http://0.0.0.0 2022-09-19 17:11:11+0200 [-] - [Tor]: --> http://d6sqxfwngfy3rsmksjg2fpcvzcggckzy76yzj775m4jxnh3bubyrarid.onion 2022-09-19 17:11:11+0200 [-] Starting factory _HTTP11ClientFactory(<function HTTPConnectionPool._newConnection.<locals>.quiescentCallback at 0x7f7c99818158>, <twisted.internet.endpoints._WrapperEndpoint object at 0x7f7c997d3eb8>) 2022-09-19 17:11:11+0200 [-] [E] Successfully connected to Tor control port 2022-09-19 17:11:11+0200 [-] [E] [1] Setting up the onion service d6sqxfwngfy3rsmksjg2fpcvzcggckzy76yzj775m4jxnh3bubyrarid.onion 2022-09-19 17:11:16+0200 [-] [E] Job ExitNodesRefresh died with runtime -1.0000 [low: -1.0000, high: -1.0000] 2022-09-19 17:11:16+0200 [-] Traceback (most recent call last): 2022-09-19 17:11:16+0200 [-] File "/usr/lib/python3/dist-packages/globaleaks/jobs/job.py", line 49, in run 2022-09-19 17:11:16+0200 [-] yield self.operation() 2022-09-19 17:11:16+0200 [-] twisted.internet.error.TimeoutError: User timeout caused connection failure. 
2022-09-19 17:11:16+0200 [-] [E] exception mail suppressed for exception (<class 'twisted.internet.error.TimeoutError'>) [reason: special exception] 2022-09-19 17:11:16+0200 [-] Stopping factory _HTTP11ClientFactory(<function HTTPConnectionPool._newConnection.<locals>.quiescentCallback at 0x7f7c99818158>, <twisted.internet.endpoints._WrapperEndpoint object at 0x7f7c997d3eb8>) **Desktop (please complete the following information):** Ubuntu 20.04 LTS fresh install (**behind enterprise proxy**) **Globaleaks version** Latest (as of today) **Notes** Is the problem OS-related, i.e. Ubuntu 20.04 LTS vs. Debian 11?
1medium
Title: [KOSMOS-G] How to prepare Laion dataset? Body: Thanks for your work. I cannot see how to prepare the LAION dataset when training the aligner.
1medium
Title: Functionality for filtering events in Subscriptions Body: It would be nice if an equivalent of the `withFilter` function described in the Apollo Server docs (https://www.apollographql.com/docs/apollo-server/data/subscriptions#filtering-events) were added to Strawberry.
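For context on what this looks like in Strawberry today: subscriptions are plain async generators, so the filtering that `withFilter` provides has to be written by hand inside the generator. A minimal self-contained sketch (the event stream is a stand-in for a real pub/sub source):

```python
import asyncio
from typing import AsyncGenerator

import strawberry

async def event_stream() -> AsyncGenerator[dict, None]:
    # Stand-in for a real broker (Redis, Postgres NOTIFY, ...); purely illustrative.
    for i in range(4):
        await asyncio.sleep(0)
        yield {"post_id": str(i % 2), "text": f"comment {i}"}

@strawberry.type
class Subscription:
    @strawberry.subscription
    async def comment_added(self, post_id: str) -> AsyncGenerator[str, None]:
        async for event in event_stream():
            # Manual filtering: the boilerplate a withFilter equivalent would remove.
            if event["post_id"] == post_id:
                yield event["text"]
```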
1medium
Title: Evaluation metric question for BERT text classification Body: Welcome, and thank you very much for your suggestions and contributions to PaddleHub! When leaving your suggestion, please also provide the following information:
- What new feature do you want to add?
- In what scenario is the feature needed?
- Without the feature, can PaddleHub currently meet the need indirectly?
- Which parts of PaddleHub might need to change to add the feature?
- If possible, briefly describe your proposed solution.

While using bert-wwm from the model zoo for a multi-class text classification project, I found that the reported evaluation metric is acc, but I need macro-F1 as my evaluation metric. Would PaddleHub consider adding F1 score and other evaluation metrics for multi-class classification? Or, if I need to write a custom evaluation method myself, which file should I modify: something in the model, or somewhere else?
1medium
Title: improve inference time Body: Does anyone have a clue on how to speed up inference? I know other vocoders have been tried, but they were not satisfactory... right?
1medium
Title: "ImportError: cannot import name 'json'" when debugging with pydevd Body: Hi! I'm receiving this error when attempting to debug with a pydevd -debugger. ``` Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pydev\pydevd.py", line 1664, in <module> main() File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pydev\pydevd.py", line 1658, in main globals = debugger.run(setup['file'], None, None, is_module) File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pydev\pydevd.py", line 1068, in run pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pycharm\_jb_pytest_runner.py", line 31, in <module> pytest.main(args, plugins_to_load) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 54, in main config = _prepareconfig(args, plugins) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 167, in _prepareconfig pluginmanager=pluginmanager, args=args File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\__init__.py", line 617, in __call__ return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\__init__.py", line 222, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\__init__.py", line 216, in <lambda> firstresult=hook.spec_opts.get('firstresult'), File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\callers.py", line 196, in _multicall gen.send(outcome) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\helpconfig.py", line 89, in pytest_cmdline_parse config = outcome.get_result() File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\callers.py", line 76, in get_result raise ex[1].with_traceback(ex[2]) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\callers.py", line 180, in _multicall res = hook_impl.function(*args) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 981, in pytest_cmdline_parse self.parse(args) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 1146, in parse self._preparse(args, addopts=addopts) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 1098, in _preparse self.pluginmanager.load_setuptools_entrypoints("pytest11") File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\__init__.py", line 397, in load_setuptools_entrypoints plugin = ep.load() File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pkg_resources\__init__.py", line 2318, in load return self.resolve() File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pkg_resources\__init__.py", line 2324, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\assertion\rewrite.py", line 216, in load_module py.builtin.exec_(co, mod.__dict__) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pytest_flask\plugin.py", line 11, in <module> 
from flask import json ImportError: cannot import name 'json' ```
1medium
Title: [MNT]: ci: ubuntu-20.04 GitHub Actions runner will soon be unmaintained Body: ### Summary The `ubuntu-20.04` GitHub Actions runner image, currently [used in the `tests.yml` workflow](https://github.com/matplotlib/matplotlib/blob/3edda656dc211497de93b8c5d642f0f29c96a33a/.github/workflows/tests.yml#L60) will soon be unsupported, as notified at: https://github.com/actions/runner-images/issues/11101 ### Proposed fix Ubuntu 20.04 itself is a long-term-support release, however it is also nearing the end of that support cycle, and will no longer be supported by Canonical from May 2025: https://ubuntu.com/20-04 I'd suggest removing the `ubuntu-20.04` jobs from the `tests.yml` workflows at the end of this month (March 2025).
1medium
Title: Needed: mechanism for deriving property values from other properties Body: We need a way to specify that a property's values should be (a function of) another property. This is most relevant for assigning the outputs of statistical operations to properties. It's become an acute need with the introduction of the `Text` mark (#3051). It's impossible to annotate statistical results (e.g., to put bar counts above the bars). It's also impossible to assign x/y to the text annotation when using `Plot.pair`, even without a statistical transform. I've kicked around a few ideas for this. One would be to make this part of the stat config, e.g. through a method call like
```
Plot(x).add(so.Text(), so.Hist().assign(text="count"))
```
But that does not solve the `Plot.pair` problem. Another option would be some special syntax within the variable assignment itself, akin to ggplot's `after_stat`. `Plot.add` accepts multiple transforms, so this would need to be "after all transforms"; I think it would be too complicated to specify that a variable should be assigned in the middle of the transform pipe. I will develop this idea further.
2hard
Title: Cannot get nested objects when converting to dict Body: ### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options 👆 ### Example Code ```python
from datetime import datetime
from typing import Any, List, Optional

import sqlalchemy as sa
from sqlalchemy import func, orm
from sqlalchemy.ext.asyncio import AsyncSession
from sqlmodel import Field, Relationship, SQLModel


class Address(SQLModel, table=True):
    __tablename__ = 'addresses'

    id: Optional[int] = Field(default=None, primary_key=True, nullable=False)

    listing: 'Listing' = Relationship(back_populates='address')
    location: 'Location' = Relationship(back_populates='addresses')  # 'Location' model omitted from this excerpt


class ListingImage(SQLModel, table=True):
    __tablename__ = 'listing_images'

    id: Optional[int] = Field(default=None, primary_key=True, nullable=False)
    listing: 'Listing' = Relationship(back_populates='images')
    file: str = Field(max_length=255, nullable=False)
    listing_id: int = Field(foreign_key='listings.id', nullable=False)


class Listing(SQLModel, table=True):
    __tablename__ = 'listings'

    created: Optional[datetime] = Field(sa_column_kwargs={'server_default': func.now()}, index=True)
    title: str = Field(max_length=100, nullable=False)
    price: int = Field(nullable=False)
    address_id: int = Field(foreign_key='addresses.id', nullable=True)
    id: Optional[int] = Field(default=None, primary_key=True, nullable=False)

    images: List['ListingImage'] = Relationship(back_populates='listing', sa_relationship_kwargs={'cascade': 'all, delete'})
    address: 'Address' = Relationship(back_populates='listing', sa_relationship_kwargs={'cascade': 'all, delete', 'uselist': False})

    __mapper_args__ = {"eager_defaults": True}


# Method excerpted from a CRUD/repository class:
async def get_multi(
    self, db: AsyncSession, *, skip: int = 0, limit: int = 100, order: Any = None, **filters
) -> List[Listing]:
    stmt = sa.select(Listing).options(orm.selectinload(Listing.images), orm.selectinload(Listing.address)).filter_by(**filters).order_by(order).offset(skip).limit(limit)
    result = await db.scalars(stmt)
    return result.all()


##### in some async function ####
data = await get_multi(....)

data[0].address    # shows an Address(...) object normally

data[0].dict()
# {
#   "id": ..,
#   "created": ...,
#   "title": ...,
#   "price": ...,
#   "address_id": ...
# }
# no "images" or "address" nested object after converting to dict
``` ### Description When trying to convert an object using .dict() or .json(), the resulting object does not include nested fields like Address or Images in my example. I am using a SQLAlchemy AsyncSession and eager-loading the objects when querying the DB. The nested object shows up normally when accessed directly, but it doesn't appear in the resulting dict/json object and is consequently not sent in the FastAPI response. I can confirm this is not the case in Pydantic: models from Pydantic work fine when converted to dict. ### Operating System Windows ### Operating System Details _No response_ ### SQLModel Version 0.0.6 ### Python Version 3.8.8 ### Additional Context _No response_
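One hedged observation that may explain this: on a `table=True` model, relationship attributes are not Pydantic fields, so `.dict()`/`.json()` only serialize column fields. A common workaround is a separate read model with nested types; a sketch reusing the models above (class and field names are illustrative, and `from_orm` is the pydantic-v1-style constructor SQLModel exposes):

```python
class AddressRead(SQLModel):
    id: Optional[int] = None

class ListingRead(SQLModel):
    id: Optional[int] = None
    title: str
    price: int
    address: Optional[AddressRead] = None

# listing: a Listing instance loaded with selectinload(...) as in get_multi above.
# ListingRead.from_orm(listing).dict() should then include the nested "address".
```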
1medium
Title: On Debug Mode my function is called twice Body: ```py
import shutil
import os
from flask import Flask, render_template, request
from datetime import datetime
from os.path import isfile, join
from sqlite3 import connect


class Server(Flask):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.db_path = join("back-end", "database", "news.db")

        # Routes
        @self.route("/")
        @self.route("/home")
        def home():
            return render_template("home.html")

        @self.route("/about")
        def about():
            return render_template("about.html")

        @self.route("/table")
        def table():
            return render_template("table.html")

        @self.route("/sabah")
        def sabah():
            return render_template("sabah.html")

        @self.route("/news")
        def news():
            pass

        @self.route("/science")
        def science():
            pass

        # Functionality
        @self.template_global("getsemester")
        def gettime() -> str:
            """This function gets the current time and determines the semester to show."""
            now = datetime.now()
            if isfile("static/cedvel/cedvel.pdf"):
                if datetime(now.year, 9, 15) <= now < datetime(now.year + 1, 2, 16):
                    return f"{now.year} PAYIZ SEMESTERİ üçün nəzərdə tutulub!"
                elif datetime(now.year, 2, 15) <= now < datetime(now.year, 6, 15):
                    return f"{now.year} YAZ SEMESTERİ üçün nəzərdə tutulub!"
                else:
                    return "Hal hazırda yay tetili ilə əlaqadər cedvəl hazır deyil."
            else:
                return "Hal hazırda cədvəl hazır deyil"

        @self.template_global("insertnews")
        def insertnews(header: str, main: str, date=datetime.now().strftime("%Y-%m-%d")) -> str:
            """This function inserts news into the server."""
            print("FUNCTION CALLED!!!!")
            conn = connect(self.db_path)
            curr = conn.cursor()
            curr.execute("""
                INSERT INTO news (header, main, date)
                VALUES (?, ?, ?)
            """, (header, main, date))
            conn.commit()
            # getting latest news id
            id = curr.lastrowid
            print(id)
            conn.close()
            news_path = join("static", "media", "news", f"NEWS-{id}")
            os.makedirs(news_path)
            # image = request.files["image"]
            # image.save(news_path, f"NEWS:{header}-{id}")
            return "XƏBƏR BAŞARIYLA YÜKLƏNDİ!"

        @self.template_global("deletenews")
        def deletenews(id: int) -> str:
            """This function deletes news from the server."""
            conn = connect(self.db_path)
            c = conn.cursor()
            c.execute("DELETE FROM news WHERE rowid=?", (id,))
            conn.commit()
            conn.close()
            news_path = join("static", "media", "news")
            # deleting images of deleted news
            for dirpath, dirnames, filenames in os.walk(news_path):
                if f"NEWS-{id}" in dirnames:
                    dir_to_delete = os.path.join(dirpath, f"NEWS-{id}")
                    shutil.rmtree(dir_to_delete)
            return "XƏBƏRLƏR BAŞARIYLA SİLİNDİ"

        @self.template_global("getlatestnews")
        def getlatestnews() -> list[str, str, str]:
            """This function returns the latest 4 news items from the db."""
            conn = connect(self.db_path)
            c = conn.cursor()
            c.execute("SELECT * FROM news ORDER BY rowid DESC LIMIT 4")
            conn.commit()
            print(c.fetchall())
            conn.close()

        @self.template_global("getnews")
        def getnews() -> str:
            pass

        insertnews("BASLIQ1", "XEBER1")


if __name__ == "__main__":
    server = Server(
        import_name=__name__,
        template_folder="../front-end/template",
        static_folder="../static")
    server.run()
```
It works right when I turn off DEBUG mode. It should call my insertnews() function once.

Environment:
- Python version: 3.11.0
- Flask version: 2.2.2
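A hedged explanation of the symptom: with debug mode on, the Werkzeug reloader starts a child process that re-imports and re-runs the script, so the module-level `Server(...)` construction, and with it the `insertnews("BASLIQ1", "XEBER1")` call at the end of `__init__`, executes twice. Two common guards (placement is illustrative):

```python
import os

# Option 1: run one-off setup only in the reloader's main child process.
if os.environ.get("WERKZEUG_RUN_MAIN") == "true":
    insertnews("BASLIQ1", "XEBER1")

# Option 2: keep debug mode but disable the reloader entirely.
server.run(debug=True, use_reloader=False)
```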
1medium
Title: add save to csv function Body:
1medium
Title: SmartDatalake persistently fails when asked to plot Body: ### System Info pandasai version: 1.5.13 ### 🐛 Describe the bug When asked for different plots, the generated code keeps returning the matplotlib.pyplot module (plt), which is an unexpected return type. I persistently see this pattern across different queries:
```
2024-01-10 16:54:08 [INFO] Code running:
....
plt.show()
result = {'type': 'plot', 'value': plt}
2024-01-10 16:54:08 [ERROR] Pipeline failed on step 4: expected str, bytes or os.PathLike object, not module
```
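Reading the traceback literally ("expected str, bytes or os.PathLike object, not module"), the pipeline appears to expect the plot result's 'value' to be a file path rather than the pyplot module. A sketch of the shape the generated code presumably needs to produce (data and path are placeholders):

```python
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [4, 5, 6])                        # placeholder chart
plt.savefig("temp_chart.png")                          # persist the figure first
result = {"type": "plot", "value": "temp_chart.png"}   # a path, not the plt module
```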
1medium
Title: Using cursors with SQLAlchemy Body: First of all, thank you for the great job publishing this package. I would like to know how to properly use a cursor through the SQLAlchemy abstraction in `aiopg`. It looks like the `sa` subpackage uses a cursor internally. Would you mind sharing an example of how to use it?
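A minimal sketch of the documented `aiopg.sa` usage (credentials and the table definition are placeholders); the cursor is handled internally, and rows can be streamed with `async for`:

```python
import asyncio

import sqlalchemy as sa
from aiopg.sa import create_engine

metadata = sa.MetaData()
tbl = sa.Table(
    "tbl", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("val", sa.String(255)),
)

async def main():
    async with create_engine(user="user", password="pw",
                             host="127.0.0.1", database="db") as engine:
        async with engine.acquire() as conn:
            # Rows stream through the connection's internal cursor.
            async for row in conn.execute(tbl.select()):
                print(row.id, row.val)

asyncio.run(main())
```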
1medium
Title: TypeError: get_orderbook_tickers() got an unexpected keyword argument 'symbol' Body: **Describe the bug**
TypeError: get_orderbook_tickers() got an unexpected keyword argument 'symbol'

**To Reproduce**
`client.get_orderbook_tickers(symbol='BTCUSDT')`

**Expected behavior**
Using the symbol parameter should return the order book ticker for that symbol only.

**Environment (please complete the following information):**
- Python version: 3.8.10
- Virtual Env: Jupyter Notebook
- OS: Ubuntu 20.04.4 LTS
- python-binance version: v1.0.15

**Logs or Additional context**
-
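For reference, a workaround rather than a fix, using the `client` from the report: the plural method wraps the all-symbols endpoint, while the singular `get_orderbook_ticker()` accepts a symbol:

```python
# Single symbol (note the singular method name):
ticker = client.get_orderbook_ticker(symbol="BTCUSDT")

# Or filter the all-symbols response of the plural method:
all_tickers = client.get_orderbook_tickers()
btc = next(t for t in all_tickers if t["symbol"] == "BTCUSDT")
```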
1medium
Title: A way to improve CRF segmentation efficiency Body: <!-- The notes and version number are required, otherwise there will be no reply. If you want a quick reply, please fill in the template carefully. Thank you for your cooperation. -->
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer in them:
  - [Homepage documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that the open-source community is a voluntary community of enthusiasts that bears no responsibilities or obligations. I will comment politely and thank everyone who helps me.
* [x] I type an x in these brackets to confirm the items above.
## Version
<!-- For release builds, state the jar file name without the extension; for the GitHub repository, state whether it is the master or portable branch. -->
The current latest version is: 1.6.3. The version I am using is: 1.6.3.
<!-- The items above are required; feel free to elaborate below. -->
## My question
The pipelines of [CRF segmentation](https://github.com/hankcs/HanLP#6-crf%E5%88%86%E8%AF%8D) and [perceptron segmentation](https://github.com/hankcs/HanLP/wiki/%E7%BB%93%E6%9E%84%E5%8C%96%E6%84%9F%E7%9F%A5%E6%9C%BA%E6%A0%87%E6%B3%A8%E6%A1%86%E6%9E%B6) are largely the same (feature extraction -> probability/weight lookup -> accumulation -> Viterbi), yet the benchmark results on the wiki differ greatly. Moreover, HanLP's early CRF model had fewer feature templates than the perceptron's current seven templates.
So I looked into how HanLP builds its CRF model and found a problem: every feature generated by CRF++ starts with "U[0-9]+:", while the model indexes feature probabilities with a BinTrie. As a result, the accelerated first level of the BinTrie contains only the single character "U", so every feature lookup falls back to binary search; no wonder it is slow.
## Proposed solution
What needs to be solved is how to get the Chinese characters into the first-level index without hurting efficiency. I think we could decompose and recombine the feature templates and feature keys, or simply reverse the strings.
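A toy Python illustration of the indexing problem described above (the feature strings are made up): when every key starts with 'U', the trie's first level has a fan-out of one, while re-keying so that a discriminative character comes first restores the fan-out:

```python
features = ["U05:中国", "U06:人民", "U07:银行"]  # made-up CRF++-style feature keys

# First-level dispatch on the raw keys: every key begins with 'U' -> one bucket.
print({f[0] for f in features})   # {'U'}

# Move the template id behind the content (or simply reverse the whole key):
rekeyed = [f.split(":", 1)[1] + "\x00" + f.split(":", 1)[0] for f in features]
print({f[0] for f in rekeyed})    # {'中', '人', '银'} -> real fan-out
```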
1medium
Title: Add config option for list of stopwords to ignore with topic generation Body: Add an option to ignore words when generating topic names. This list is in addition to standard tokenizer stop words.
1medium
Title: How to replace gr.ImageEditor uploaded user image and still keep image editing features Body: ### Describe the bug **gradio==5.4.0** I want to auto resize/crop the user-uploaded image and replace it with the processed image inside gr.ImageEditor. Here is the code I use. After the image is replaced this way, the image-editing features disappear; the screenshots below show what I mean.
```
imgs = gr.ImageEditor(sources='upload', type="pil", label='Human. Mask with pen or use auto-masking', interactive=True)

def process_editor_image(image_dict, enable_processing):
    if not enable_processing or image_dict is None:
        return image_dict
    if isinstance(image_dict, dict) and 'background' in image_dict:
        image_dict['background'] = process_image_to_768x1024(image_dict['background'])
    return image_dict

def process_single_image(image, enable_processing):
    if not enable_processing or image is None:
        return image
    return process_image_to_768x1024(image)

# Add image processing event handlers
imgs.upload(
    fn=process_editor_image,
    inputs=[imgs, auto_process],
    outputs=imgs,
)

def process_image_to_768x1024(img):
    if not isinstance(img, Image.Image):
        return img

    # Create a new white background image
    target_width, target_height = 768, 1024
    new_img = Image.new('RGB', (target_width, target_height), 'white')

    # Calculate aspect ratios
    aspect_ratio = img.width / img.height
    target_aspect = target_width / target_height

    if aspect_ratio > target_aspect:
        # Image is wider than target
        new_width = target_width
        new_height = int(target_width / aspect_ratio)
        resize_img = img.resize((new_width, new_height), Image.Resampling.LANCZOS)
        paste_y = (target_height - new_height) // 2
        new_img.paste(resize_img, (0, paste_y))
    else:
        # Image is taller than target
        new_height = target_height
        new_width = int(target_height * aspect_ratio)
        resize_img = img.resize((new_width, new_height), Image.Resampling.LANCZOS)
        paste_x = (target_width - new_width) // 2
        new_img.paste(resize_img, (paste_x, 0))

    return new_img

def process_uploaded_image(img, enable_processing):
    if not enable_processing or img is None:
        return img
    if isinstance(img, dict):  # For ImageEditor
        if img.get('background'):
            img['background'] = process_image_to_768x1024(img['background'])
        return img
    return process_image_to_768x1024(img)
```
![image](https://github.com/user-attachments/assets/a1f0685a-fe4c-44c9-879c-a17aafbce492)
![image](https://github.com/user-attachments/assets/288f16a7-0d6a-4641-81f6-350722c2aeda)
1medium
Title: RFE: Implement Maximum Execution Limit for Scheduled Successful Jobs Body: ### Please confirm the following - [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html). - [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates. - [X] I understand that AWX is open source software provided for free and that I might not receive a timely response. ### Feature type New Feature ### Feature Summary **Context:** The current version of AWX allows users to schedule job executions, but it does not offer a way to automatically disable these schedules after a certain number of successful executions. This enhancement proposes adding a feature to limit the maximum number of executions for a schedule. For example, a user could set a schedule to run a job three times every day, but after a total of nine successful executions, the schedule should automatically disable itself. This feature would be particularly useful in managing resources and ensuring that tasks do not run indefinitely. Consider a scenario where schedules are dynamically generated to perform specific checks a few times a day over several days. After the desired number of checks, it would be beneficial for the schedule to deactivate automatically. Schedules in AWX function similarly to a distributed cron job. By implementing this feature, it would be akin to having a distributed version of the "at" command, enhancing the flexibility and control over task executions in AWX. **Use Case:** This feature would be beneficial in scenarios where a task is required to run only a limited number of times, such as: - Temporary projects or jobs that are only relevant for a certain period or a specific number of executions. - Compliance or policy requirements that mandate certain tasks not exceed a specified number of runs. - Testing environments where jobs are needed for a finite number of runs to validate behavior under controlled repetitions. **Impact:** - Positive: Enhances control over job execution, prevents resource wastage, and improves manageability. - Negative: Slight increase in the complexity of the scheduling interface and additional validation required to manage the execution count. ### Select the relevant components - [X] UI - [X] API - [X] Docs - [X] Collection - [X] CLI - [ ] Other ### Steps to reproduce RFE ### Current results RFE ### Sugested feature result RFE ### Additional information _No response_
2hard
Title: Is there a way to control the aspect ratio of the navigation images? Body: #### Describe the functionality you would like to see. EELSSpectrum.plot() doesn't seem to take an argument like plt.imshow's extent. Is there a way to control the aspect ratio of the navigation images?
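Untested sketch: the navigator is an ordinary matplotlib image, so one can usually grab the relevant axes after plotting and force an aspect; which axes object holds the navigator is an assumption here:

```python
import matplotlib.pyplot as plt

s.plot()            # s: the signal from the question, plotted as usual
ax = plt.gca()      # assumption: the navigator axes is the current axes
ax.set_aspect(2.0)  # stretch/squash the displayed data aspect
plt.draw()
```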
1medium
Title: Failed connections when running docker build to install new requirements.txt Body: Hi there! I followed your instructions on this video, very easy to follow and instructive, thank you! https://www.youtube.com/watch?v=DPBspKl2epk&t=849s However, I want to add new package dependencies to the docker image, so I first generated a new requirements.txt and then modified the Dockerfile to uncomment:

    COPY requirements.txt /
    RUN pip install --no-cache-dir -U pip
    RUN pip install --no-cache-dir -U -r /requirements.txt

However, I get these errors when I build from the Dockerfile:
```
(env) PS C:\Work\docker-azure-demo\flask-webapp-quickstart> docker build --rm -f .\Dockerfile -t apraapi.azurecr.io/flask-webapp-quickstart:latest .
Sending build context to Docker daemon  808.1MB
Step 1/7 : FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7
 ---> cdec3b0d8f20
Step 2/7 : ENV LISTEN_PORT=8000
 ---> Using cache
 ---> c5c66cc273b6
Step 3/7 : EXPOSE 8000
 ---> Using cache
 ---> 15a504c395e6
Step 4/7 : COPY requirements.txt /
 ---> Using cache
 ---> 9ffd23cb8771
Step 5/7 : RUN pip install --no-cache-dir -U pip
 ---> Using cache
 ---> ca7dbaeb4b0e
Step 6/7 : RUN pip install --no-cache-dir -U -r /requirements.txt
 ---> Running in ad5830d85dcb
Collecting astroid==2.4.1 (from -r /requirements.txt (line 1))
  Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790869e8>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/
  Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790dc550>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/
  Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790dc588>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/
  Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790dc240>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/
  Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790dcda0>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/
  Could not find a version that satisfies the requirement astroid==2.4.1 (from -r /requirements.txt (line 1)) (from versions: )
No matching distribution found for astroid==2.4.1 (from -r /requirements.txt (line 1))
The command '/bin/sh -c pip install --no-cache-dir -U -r /requirements.txt' returned a non-zero code: 1
(env) PS C:\Work\docker-azure-demo\flask-webapp-quickstart>
```
Do you know what's happening? By contrast, when I pip install from my venv, everything seems to work fine. Thanks!
1medium
Title: [BUG] Getting Error: response status is 400 when trying to run GET /api/hybrid/video_data Body: ***Platform where the error occurred?*** TikTok
***The endpoint where the error occurred?*** API-V4 / WebApp
***Submitted input value?*** [video link](https://www.tiktok.com/@nandaarsyinta/video/7298346423521742085)
***Have you tried again?*** Yes, the error still exists.
***Have you checked the readme or interface documentation for this project?*** Yes, and I am quite sure that the problem is caused by the program.

This is the error I am getting:

    {
      "detail": {
        "code": 400,
        "message": "An error occurred.",
        "support": "Please contact us on Github: https://github.com/Evil0ctal/Douyin_TikTok_Download_API",
        "time": "2024-05-11 06:35:09",
        "router": "/api/hybrid/video_data",
        "params": {
          "url": "https://www.tiktok.com/@nandaarsyinta/video/7298346423521742085",
          "minimal": "false"
        }
      }
    }
1medium
Title: sanic pytest Body: ### Is there an existing issue for this? - [X] I have searched the existing issues ### Describe the bug When I wrote a test with pytest, the first `test_client.get()` call passed, but the next `test_client.get()` call runs into trouble. I get an error like this:
```
ERROR sanic.error:startup.py:960 Experienced exception while trying to serve
Traceback (most recent call last):
  File "/py39//lib/python3.9/site-packages/sanic/mixins/startup.py", line 958, in serve_single
    worker_serve(monitor_publisher=None, **kwargs)
  File "/py39//lib/python3.9/site-packages/sanic/worker/serve.py", line 143, in worker_serve
    raise e
  File "/py39//lib/python3.9/site-packages/sanic/worker/serve.py", line 117, in worker_serve
    return _serve_http_1(
  File "/py39//lib/python3.9/site-packages/sanic/server/runners.py", line 222, in _serve_http_1
    loop.run_until_complete(app._startup())
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/py39//lib/python3.9/site-packages/sanic/app.py", line 1729, in _startup
    raise ServerError(message)
sanic.exceptions.ServerError: Duplicate route names detected: App.wrapped_handler. You should rename one or more of them explicitly by using the `name` param, or changing the implicit name derived from the class and function name. For more details, please see https://sanic.dev/en/guide/release-notes/v23.3.html#duplicated-route-names-are-no-longer-allowed
INFO sanic.root:startup.py:965 Server Stopped
```
### sanic versions
```
sanic==23.6.0
sanic-compress==0.1.1
sanic-ext==23.6.0
sanic-jinja2==2022.11.11
sanic-routing==23.6.0
sanic-testing==23.6.0
```
### Code snippet
```python
import pytest

from was import webapp
from utils.logger import logger


@pytest.fixture
def app():
    app = webapp.app
    return app


def test_app_root(app):
    _, response = app.test_client.get("/")
    logger.info(response.status)
    assert response.status == 200

    _, response = app.test_client.get("/")
    logger.info(response.status)
    # assert request.method.lower() == "get"
    assert response.status == 200
```
### Expected Behavior _No response_ ### How do you run Sanic? Sanic CLI ### Operating System MacOS ### Sanic Version Sanic 23.6.0; Routing 23.6.0 ### Additional context _No response_
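The traceback's own suggestion, sketched and untested: give routes explicit, unique `name`s so that the implicitly derived name (`App.wrapped_handler`, apparently coming from a decorator wrapper) cannot collide when the test client restarts the app:

```python
@self.route("/", name="home")
@self.route("/home", name="home_page")
def home():
    return render_template("home.html")
```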
1medium
Title: Bug with 0.18.0: Matplotlib clabels become NoneType object for figs with projection=ccrs.PlateCarree() Body: Calling ax.clabel plots contour labels as desired, yet the clabel object itself is somehow 'None' for matplotlib figures initialized with projection=ccrs.PlateCarree(). This becomes an issue because I then wipe my contour and contour label objects clean after each iteration of a long loop, as in [this answer](https://stackoverflow.com/questions/47049296/matplotlib-how-to-remove-contours-clabel), but a TypeError is thrown because of course it can't iterate over a NoneType object. By rolling back to cartopy 0.17.0 and seeing clabel produce an object as intended, I can confirm that this is a bug that appears with version 0.18.0. Below is an example script that illustrates that contour labels become NoneType objects when projection=ccrs.PlateCarree() is invoked with cartopy 0.18.0. I'm running with Anaconda on a Win64 PC.

#### Code to reproduce
```
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
from time import time
from datetime import datetime, timedelta
from siphon.catalog import TDSCatalog
import cartopy.crs as ccrs

# Recreate the gridded data from the matplotlib contour example
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2) * 2

# Contour it and label it to show that labels work and can be removed as desired
fig, ax = plt.subplots()
cN = ax.contour(X, Y, Z)
lbl = ax.clabel(cN)
#plt.show()
print("\n\nContour label for a basic 2D plot is: ")
print(lbl)

# Now remove those labels
# Will work as intended for lbl
for label in lbl:
    label.remove()

# Now try a dataset that needs to be geographically referenced
# Use siphon to get a weather model dataset
# This dataset link will expire on approximately March 9, 2021
model_url = "https://www.ncei.noaa.gov/thredds/catalog/model-rap130/202009/20200909/catalog.xml?dataset=rap130/202009/20200909/rap_130_20200909_1800_000.grb2"
vtime = datetime.strptime('2020090918','%Y%m%d%H')

# Get the data
model = TDSCatalog(model_url)
ds = model.datasets[0]
ncss = ds.subset()
query = ncss.query()
query.accept('netcdf')
query.time(vtime)  # Set to the analysis hour only
query.add_lonlat()
query.variables('Geopotential_height_isobaric')
data = ncss.get_data(query)

# Get the lats and lons and a data field from the file
lats = data.variables['lat'][:,:]
lons = data.variables['lon'][:,:]
hght = data.variables['Geopotential_height_isobaric'][0,24,:,:]  # 700 hPa is 24th element

# Contour that weather data grid
# This requires cartopy, which seems to be the problem
# Redefine the figure, because this time we need to georeference it
fig = plt.figure(5, figsize=(1600/96,1600/96))
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
cN2 = ax.contour(lons, lats, hght)
lbl2 = ax.clabel(cN2)
#plt.show()
print("\n\nContour label for weather data plot is: ")
print(lbl2)

# Removing labels will not work for lbl2 because it can't iterate over a NoneType object
# if using cartopy 0.18.0
for label in lbl2:
    label.remove()
```

#### Traceback
```
Traceback (most recent call last):
  File "clabel_bug.py", line 67, in <module>
    for label in lbl2:
TypeError: 'NoneType' object is not iterable
```
1medium
Title: MultiIndex dropped by reset_index with default argument Body: **Describe the bug** A clear and concise description of what the bug is. - [X] I have checked that this issue has not already been reported. - [X] I have confirmed this bug exists on the latest version of pandera. - [ ] (optional) I have confirmed this bug exists on the master branch of pandera. **Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug. #### Code Sample, a copy-pastable example ```python
import pandera as pa

multi_index = pa.DataFrameSchema(
    columns={"test_col": pa.Column(int)},
    index=pa.MultiIndex([pa.Index(int, name="index_1"), pa.Index(int, name="index_2")]),
)

single_index = pa.DataFrameSchema(
    columns={"test_col": pa.Column(int)}, index=pa.Index(int, name="index_1")
)

print(multi_index)
print("-----")
print(single_index)
print("-----")
print("-----")
print(multi_index.reset_index())
print("-----")
print(single_index.reset_index())

# By contrast, this will work as expected:
print("-----")
print(multi_index.reset_index(["index_1", "index_2"]))
``` #### Expected behavior The indices to become columns. #### Actual behavior The MultiIndex is completely dropped without being added to the columns. #### Desktop (please complete the following information): - OS: Win10 - Python 3.9.12 #### Screenshots ![image](https://user-images.githubusercontent.com/10162554/170385676-a33be121-e867-4814-8440-fc2afbfe8f06.png) ![image](https://user-images.githubusercontent.com/10162554/170386243-318b6215-9501-43bf-af1f-932f8a1263ee.png) #### Additional context Found in pandera-0.9.0 Exists as recently as pandera-0.11.0
1medium
Title: Bug: "Lockfile hash doesn't match pyproject.toml, packages may be outdated" warning in pdm Body: ### Description When running `pdm install` on `litestar` repo you get: ``` Run pdm install -G:all WARNING: Lockfile is generated on an older version of PDM WARNING: Lockfile hash doesn't match pyproject.toml, packages may be outdated Updating the lock file... ``` Link: https://github.com/litestar-org/litestar/actions/runs/11290808586/job/31403420888?pr=3784#step:5:13 I don't think that this is correct. ### URL to code causing the issue _No response_ ### MCVE _No response_ ### Steps to reproduce ```bash 1. Run `pdm install` on clean repo with no `venv` ``` ### Screenshots _No response_ ### Logs _No response_ ### Litestar Version `main` ### Platform - [X] Linux - [X] Mac - [X] Windows - [ ] Other (Please specify in the description above)
1medium
Title: Weird error while training a model with tabular data!!!! Some problem related self.log_dict Body: ### Bug description The code can be accessed at https://www.kaggle.com/code/vigneshwar472/notebook5a03168e34 I am working on a multiclass classification task and want to train a neural network with PyTorch Lightning on 2x T4 GPUs in a Kaggle notebook. Everything seems to work fine, but I encounter this error when I fit the trainer.

Training step of the LightningModule:
```
def training_step(self, batch, batch_idx):
    x, y = batch
    logits = self(x)
    loss = F.cross_entropy(logits, y)
    preds = F.softmax(logits, dim=1)
    preds.to(y)
    self.log_dict({
        "train_Loss": loss,
        "train_Accuracy": self.accuracy(preds, y),
        "train_Precision": self.precision(preds, y),
        "train_Recall": self.recall(preds, y),
        "train_F1-Score": self.f1(preds, y),
        "train_F3-Score": self.f_beta(preds, y),
        "train_AUROC": self.auroc(preds, y),
    }, on_step=True, on_epoch=True, prog_bar=True, sync_dist=True)
    return loss
```
Initializing the trainer:
```
trainer = L.Trainer(max_epochs=5,
                    devices=2,
                    strategy='ddp_notebook',
                    num_sanity_val_steps=0,
                    profiler='simple',
                    default_root_dir="/kaggle/working",
                    callbacks=[DeviceStatsMonitor(),
                               StochasticWeightAveraging(swa_lrs=1e-2),
                               #EarlyStopping(monitor='train_Loss', min_delta=0.001, patience=100, verbose=False, mode='min'),
                              ],
                    enable_progress_bar=True,
                    enable_model_summary=True,
                    )
```
trainer.fit(model, data_mod), where data_mod is a LightningDataModule:
```
W1116 14:03:37.546000 140135548491584 torch/multiprocessing/spawn.py:146] Terminating process 131 via signal SIGTERM
INFO: [rank: 0] Received SIGTERM: 15
---------------------------------------------------------------------------
ProcessRaisedException                    Traceback (most recent call last)
Cell In[14], line 1
----> 1 trainer.fit(model, data_mod)

File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:538, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    536 self.state.status = TrainerStatus.RUNNING
    537 self.training = True
--> 538 call._call_and_handle_interrupt(
    539     self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
    540 )

File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py:46, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     44 try:
     45     if trainer.strategy.launcher is not None:
---> 46         return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
     47     return trainer_fn(*args, **kwargs)
     49 except _TunerExitException:

File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/launchers/multiprocessing.py:144, in _MultiProcessingLauncher.launch(self, function, trainer, *args, **kwargs)
    136 process_context = mp.start_processes(
    137     self._wrapping_function,
    138     args=process_args,
   (...)
    141     join=False,  # we will join ourselves to get the process references
    142 )
    143 self.procs = process_context.processes
--> 144 while not process_context.join():
    145     pass
    147 worker_output = return_queue.get()

File /opt/conda/lib/python3.10/site-packages/torch/multiprocessing/spawn.py:189, in ProcessContext.join(self, timeout)
    187 msg = "\n\n-- Process %d terminated with the following error:\n" % error_index
    188 msg += original_trace
--> 189 raise ProcessRaisedException(msg, error_index, failed_process.pid)

ProcessRaisedException:

-- Process 1 terminated with the following error:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 76, in _wrap
    fn(i, *args)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/launchers/multiprocessing.py", line 173, in _wrapping_function
    results = function(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 574, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 981, in _run
    results = self._run_stage()
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1025, in _run_stage
    self.fit_loop.run()
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 205, in run
    self.advance()
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 363, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 140, in run
    self.advance(data_fetcher)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 250, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 190, in run
    self._optimizer_step(batch_idx, closure)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 268, in _optimizer_step
    call._call_lightning_module_hook(
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 167, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 1306, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/optimizer.py", line 153, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/ddp.py", line 270, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, closure, model, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 238, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/plugins/precision/precision.py", line 122, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/optim/optimizer.py", line 484, in wrapper
    out = func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/optim/optimizer.py", line 89, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/optim/adamw.py", line 204, in step
    loss = closure()
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/plugins/precision/precision.py", line 108, in _wrap_closure
    closure_result = closure()
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 144, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 129, in closure
    step_output = self._step_fn()
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 317, in _training_step
    training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 319, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 389, in training_step
    return self._forward_redirection(self.model, self.lightning_module, "training_step", *args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 640, in __call__
    wrapper_output = wrapper_module(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1636, in forward
    else self._run_ddp_forward(*inputs, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1454, in _run_ddp_forward
    return self.module(*inputs, **kwargs)  # type: ignore[index]
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 633, in wrapped_forward
    out = method(*_args, **_kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
    return fn(*args, **kwargs)
  File "/tmp/ipykernel_30/3650372019.py", line 74, in training_step
    self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True, sync_dist=True)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 437, in log
    apply_to_collection(value, dict, self.__check_not_nested, name)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 438, in torch_dynamo_resume_in_log_at_437
    apply_to_collection(
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 484, in torch_dynamo_resume_in_log_at_438
    results.reset(metrics=False, fx=self._current_fx_name)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 508, in torch_dynamo_resume_in_log_at_484
    and is_param_in_hook_signature(self.training_step, "dataloader_iter", explicit=True)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 525, in torch_dynamo_resume_in_log_at_508
    results.log(
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 403, in log
    metric = _ResultMetric(meta, isinstance(value, Tensor))
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 404, in torch_dynamo_resume_in_log_at_403
    self[key] = metric
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 411, in torch_dynamo_resume_in_log_at_404
    self[key].to(value.device)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 414, in torch_dynamo_resume_in_log_at_411
    self.update_metrics(key, value, batch_size)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 419, in update_metrics
    result_metric.forward(value, batch_size)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 270, in forward
    self.update(value, batch_size)
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/metric.py", line 483, in wrapped_func
    update(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 225, in update
    self._forward_cache = self.meta.sync(value.clone())  # `clone` because `sync` is in-place
  File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 144, in sync
    assert self._sync is not None
AssertionError
```
Please help me resolve this error; I am very confused about what to do. ### What version are you seeing the problem on? v2.4 ### How to reproduce the bug ```python
Check out the Kaggle notebook: https://www.kaggle.com/code/vigneshwar472/notebook5a03168e34
``` ### Error messages and logs ``` # Error messages and logs here please ``` ### Environment <details> <summary>Current environment</summary> ``` #- PyTorch Lightning Version (e.g., 2.4.0): #- PyTorch Version (e.g., 2.4): #- Python version (e.g., 3.12): #- OS (e.g., Linux): #- CUDA/cuDNN version: #- GPU models and configuration: #- How you installed Lightning(`conda`, `pip`, source): ``` </details> ### More info _No response_
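One incidental observation on the posted `training_step`, likely unrelated to the AssertionError itself: `Tensor.to` is not in-place, so the bare `preds.to(y)` line discards its result.

```python
preds = F.softmax(logits, dim=1)
preds = preds.to(y.device)  # .to() returns a new tensor; assign it (if a move is needed at all)
```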
2hard
Title: How to export cropped data Body: I crop out the data I want on the interface, but when exporting, I can only export all the data. How can I export only the cropped data? Thank you.
1medium
Title: Scheduled function truncated at 63 characters and fails to invoke Body: if I have a scheduled function with a name longer than 63 characters, then the name will be truncated in the CloudWatch event name/ARN: ``` { "production": { ... "events": [{ "function": "my_module.my_submodule.my_really_long_and_descriptive_function_name", "expressions": ["rate(1 day)"] }], ... } } ``` Event rule: `arn:aws:events:eu-west-2:000000000000:rule/-my_module.my_submodule.my_really_long_and_descriptive_function_` This results in the following exception when the event is handled by the lambda: ``` AttributeError: module 'my_module.my_submodule' has no attribute 'my_really_long_and_descriptive_function_' ``` ## Context <!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug --> <!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.8/3.9/3.10/3.11/3.12 --> It looks like the `whole_function` value is parsed out of the event ARN here: https://github.com/zappa/Zappa/blob/39f75e76d28c1a08d4de6501e6f794fe988cbc98/zappa/handler.py#L410 Since the ARNs are limited in length, the long module path gets truncated to 63 characters (possibly because of the leading `-` making 64 total). It looks like the full module and function path remains non-truncated in the description of the event rule. ## Expected Behavior It should invoke the non-truncated function, or should refuse to deploy with handler functions that are too long. ## Actual Behavior It throws an exception and the scheduled task never executes. ## Possible Fix Either: 1. Have the handler read the non-truncated `whole_function` value from the event description. This might require an extra AWS API call that existing deployments may or may not have permission to perform. 2. During deployment, a mapping of truncated names to full names could be created and embedded in the deployed app bundle, then referenced when handling events. 3. Raise an error (early) during deployment if a handler function name is too long and would result in truncation. It would be better to explicitly fail during deployment than to have guaranteed failures later on that might go unnoticed. ## Steps to Reproduce 1. Create a scheduled function whose fully qualified handler is longer than 63 characters. 2. Deploy. 3. Observe the error logs for the `AttributeError` above. ## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> * Zappa version used: 0.56.1 * Operating System and Python version: Amazon Linux (lambda), Python 3.9
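A toy reproduction of the truncation arithmetic (illustrative only; the exact prefixing is Zappa's, but CloudWatch Events rule names are capped at 64 characters):

```python
name = "my_module.my_submodule.my_really_long_and_descriptive_function_name"
rule_name = ("-" + name)[:64]      # 64-character cap on the rule name
recovered = rule_name.lstrip("-")  # what the handler parses back out of the ARN
print(recovered)
# -> my_module.my_submodule.my_really_long_and_descriptive_function_
```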
2hard
Title: UnboundLocalError: local variable 'start' referenced before assignment Body: Dear author: I just found a logic bug in the histogram code. I think an exception should be raised here.
```python
def make_histogram(values, bins):
    """Convert values into a histogram proto using logic from histogram.cc."""
    values = values.reshape(-1)
    counts, limits = np.histogram(values, bins=bins)
    limits = limits[1:]

    # void Histogram::EncodeToProto in histogram.cc
    for i, c in enumerate(counts):
        if c > 0:
            start = max(0, i - 1)
            break

    for i, c in enumerate(reversed(counts)):
        if c > 0:
            end = -(i)
            break

    counts = counts[start:end]
    limits = limits[start:end]
    sum_sq = values.dot(values)
    return HistogramProto(min=values.min(),
                          max=values.max(),
                          num=len(values),
                          sum=values.sum(),
                          sum_squares=sum_sq,
                          bucket_limit=limits,
                          bucket=counts)
```
If all the elements in counts are 0, there will be an error like this:
```
  File "/home/shuxiaobo/TR-experiments/cli/train.py", line 62, in train
    writer.add_histogram(name + '/grad', param.grad.clone().cpu().data.numpy(), j)
  File "/home/shuxiaobo/python3/lib/python3.6/site-packages/tensorboardX/writer.py", line 395, in add_histogram
    self.file_writer.add_summary(histogram(tag, values, bins), global_step, walltime)
  File "/home/shuxiaobo/python3/lib/python3.6/site-packages/tensorboardX/summary.py", line 142, in histogram
    hist = make_histogram(values.astype(float), bins)
  File "/home/shuxiaobo/python3/lib/python3.6/site-packages/tensorboardX/summary.py", line 162, in make_histogram
    counts = counts[start:end]
UnboundLocalError: local variable 'start' referenced before assignment
```
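A minimal sketch of the guard the report is asking for (placing it inside make_histogram, right after the `np.histogram` call, is illustrative):

```python
counts, limits = np.histogram(values, bins=bins)
if counts.max() == 0:
    # Every bucket is empty, so `start`/`end` below would never be assigned.
    raise ValueError("The histogram is empty; `values` contains no data to bin.")
```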
1medium
Title: DOC: Unclear Documentation for debugging Body: ### Issue with current documentation:
Setting up spin and the debugger is unclear and tedious.
URL: https://numpy.org/devdocs/dev/development_environment.html
I have followed the instructions there, and after multiple attempts the spin build system still does not seem to work for me. Maybe the documentation is missing something?
### Idea or request for content:
_No response_
1medium
Title: Graph (Synchronous) [Highstock] widget not displaying past data beyond 24 h Body: ### Versions:
- Mycodo Version: 8.15.8
- Raspberry Pi Version: 400 Rev 1.1
- Raspbian OS Version: Bullseye

### Reproducibility
1. Add new Graph (Synchronous) [Highstock] Widget Configuration and any parameters to track.
2. Save graph, confirm it's rendering data correctly, and wait for more than 24 hours.
3. Clicking on the "Full" button in the top nav only shows data for 1 day (Fig. 1). Also, note not being able to click on prior days in the calendar drop-down (Fig. 2).

### Expected behavior
I would expect there to be more data available beyond 24 h since I've had the graph successfully render data every day for 5 days now.

### Screenshots
Fig. 1
![image](https://github.com/kizniche/Mycodo/assets/69597/5ab4893c-e227-442a-90be-12dd067b54bd)
Fig. 2
![image](https://github.com/kizniche/Mycodo/assets/69597/ee5e70ae-bc42-44dd-8d34-5142229baa85)

### Additional context
Probably missing something obvious in the software's config.

Free Space (Input 06a2dd8c), Input (RPiFreeSpace), 15.0 second interval:

| Measurement | Timestamp |
| --- | --- |
| 2809.61 MB (Disk) CH0 | 2023/7/26 0:09:23 |
2hard
Title: Training fail when using create_mnbn_model() and Sequential() Body: When using the following model with `create_mnbn_model()` in the MNIST example, I got an error.

The model:
```
class BNMLP(chainer.Sequential):
    def __init__(self, n_units, n_out):
        super().__init__(
            # the size of the inputs to each layer will be inferred
            L.Linear(784, n_units),  # n_in -> n_units
            L.BatchNormalization(n_units),
            L.Linear(n_units, n_units),  # n_units -> n_units
            L.BatchNormalization(n_units),
            L.Linear(n_units, n_out),  # n_units -> n_out
        )
```
How I create the mnbn model:
```
model = chainermn.links.create_mnbn_model(
    L.Classifier(BNMLP(args.unit, 10)), comm)
```
2hard
Title: DHCP failure in Local Redirect mode (Windows) Body: #### Problem Description I've been experimenting with local redirect mode on a Windows 11 AWS machine, and noticed the machine loses all connectivity after mitmdump has been running for a while (could take up to an hour). In order to debug this issue I've set both proxy_debug=true & termlog_verbosity=debug, and noticed at the end of the log when the machine loses all network capabilities, there are several DHCP broadcasts (UDP ports 67 & 68) that looks like this: `*:68 -> udp -> 255.255.255.255:67` After observing this, I'm thinking WinDivert might be having some issues with re-injecting broadcast packets. I also found the following issue that could be relevant: https://github.com/basil00/Divert/issues/320 I recompiled the windows-redirector binary with a modified filter (could be a nice feature flag to customize the WinDivert filter with an argument), and it seemed that the problem ceased when the filter (on WinDivert) was set to exclude broadcasts. I can open a PR for this if you think the fix is appropriate but it's pretty straight forward, I changed these 2 filters: https://github.com/mitmproxy/mitmproxy_rs/blob/c30c9d8ffc41a453670a27909b2cb0d97abbbb81/mitmproxy-windows/redirector/src/main2.rs#L112 https://github.com/mitmproxy/mitmproxy_rs/blob/c30c9d8ffc41a453670a27909b2cb0d97abbbb81/mitmproxy-windows/redirector/src/main2.rs#L117 From "tcp || udp" to "remoteAddr != 255.255.255.255 && (tcp || udp)" #### Steps to reproduce the behavior: 1. Start up a Windows EC2 instance on AWS 2. Let mitmdump run for a while in local redirect mode 3. Wait for machine to lose connection (eg. RDP session will disconnect and no further connections will be possible) #### System Information EC2 instance on AWS Mitmproxy: 10.3.0 binary Python: 3.12.3 OpenSSL: OpenSSL 3.2.1 30 Jan 2024 Platform: Windows-11-10.0.22631-SP0
1medium
Title: Add support for asyncio (Python>=3.4) Body: It would be great if backoff were available for use with asyncio's coroutines. This requires:
1. Handling coroutines in the `on_predicate` and `on_exception` decorators.
2. Handling the case when `on_success`/`on_backoff`/`on_giveup` are coroutines.
3. Using `asyncio.sleep()` instead of `time.sleep()`.
4. Conditionally installing/importing the required deps on Python < 3.4; tests; CI update.

Obviously the sync and async versions can't be trivially combined. This can be solved in one of the following ways:
1. Check in `on_predicate`/`on_exception` whether the wrapped function is a coroutine and switch between sync and async implementations. Note that in general `time.sleep` can't be used with asyncio (only in a separate thread) due to the nature of async code. This means that having both implementations, sync and async, used in a single program will be very rare. Also, I don't see an easy way of sharing code between the sync/async versions; at the very least the tests would be completely duplicated.
2. Reimplement `backoff` using async primitives in a separate library. Unfortunately this leads to code duplication.

As a starting point I forked `backoff` and reimplemented it with async primitives: https://github.com/rutsky/aiobackoff It passes all tests, and now I'm trying to integrate it with my project. Please share ideas and intentions about implementing asyncio support in the backoff library; I would like to share efforts as much as possible. If there is no intent to add asyncio support to `backoff`, I can publish the `aiobackoff` fork.
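A minimal self-contained sketch of points 1 and 3 above (the sync/async dispatch could key off `asyncio.iscoroutinefunction`, and the waiting must go through `asyncio.sleep`); the real library would also need the generator-based wait strategies and the handler machinery:

```python
import asyncio
import functools
import random

def retry_async(exception, max_tries=5, base=0.5):
    """Toy asyncio analogue of backoff.on_exception(backoff.expo, ...)."""
    def decorate(coro):
        @functools.wraps(coro)
        async def wrapper(*args, **kwargs):
            for tries in range(max_tries):
                try:
                    return await coro(*args, **kwargs)
                except exception:
                    if tries == max_tries - 1:
                        raise
                    # asyncio.sleep instead of time.sleep: never block the event loop.
                    await asyncio.sleep(base * (2 ** tries) * random.random())
        return wrapper
    return decorate
```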
2hard
Title: delay_after_gen warning Body:
Hello,
I am using PyGAD version 3.3.1 on Windows with Python 3.10 in a Jupyter notebook. When I run my GA, I get the following user warning. It is not something I am setting; it seems to emanate from PyGAD's internal code. How can I avoid having this warning displayed?
Thank you
```
C:\Users\wassimj\AppData\Local\Programs\Python\Python310\lib\site-packages\pygad\pygad.py:1139: UserWarning: The 'delay_after_gen' parameter is deprecated starting from PyGAD 3.3.0. To delay or pause the evolution after each generation, assign a callback function/method to the 'on_generation' parameter to adds some time delay.
```
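A blunt workaround rather than a fix, using only the standard library: silence that specific deprecation message while the root cause is sorted out.

```python
import warnings

warnings.filterwarnings(
    "ignore",
    message=".*delay_after_gen.*",  # match the quoted warning text
    category=UserWarning,
)
```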
0easy
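For reference, a sketch of the replacement pattern the warning text recommends: an `on_generation` callback that adds the delay explicitly. The GA parameters and the one-second pause are illustrative; this shows the new API shape, though it does not by itself explain why the warning fires when `delay_after_gen` was never set:

```python
import time

import pygad


def fitness_func(ga_instance, solution, solution_idx):
    return sum(solution)  # toy fitness for illustration


def on_generation(ga_instance):
    # Replaces the deprecated delay_after_gen: pause after each generation.
    time.sleep(1.0)


ga_instance = pygad.GA(
    num_generations=50,
    num_parents_mating=4,
    fitness_func=fitness_func,
    sol_per_pop=8,
    num_genes=5,
    on_generation=on_generation,  # instead of delay_after_gen=1.0
)
ga_instance.run()
```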
Title: [BUG] `dash.get_relative_path()` docstring out of date Body: Docstrings for `dash.get_relative_path()` and `dash.strip_relative_path()` still refer to the `app` way of accessing those functions, which creates inconsistency in the docs: ![Screen Shot 2023-05-02 at 2 44 32 PM](https://user-images.githubusercontent.com/4672118/235759684-d386ad8c-cee1-48a4-ba6c-9b54fb442440.png) https://dash.plotly.com/reference#dash.get_relative_path
0easy
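To illustrate the inconsistency, a small sketch contrasting the app-bound form the docstrings still describe with the module-level form the reference docs index the functions under (the two are assumed equivalent here; verify against your Dash version):

```python
import dash
from dash import Dash

app = Dash(__name__, requests_pathname_prefix="/myapp/")

# Older, app-bound style that the docstrings still describe:
app.get_relative_path("/page-1")   # -> "/myapp/page-1"

# Module-level style the reference docs now index the functions under:
dash.get_relative_path("/page-1")  # expected to match the call above
```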
Title: Prepared statements being recreated on every call of fetch Body: * **asyncpg version**: 0.18.3 * **PostgreSQL version**: 9.4 * **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce the issue with a local PostgreSQL install?**: I use the official Docker image with custom schema and test data. * **Python version**: 3.7 * **Platform**: MacOS * **Do you use pgbouncer?**: No * **Did you install asyncpg with pip?**: Yes * **If you built asyncpg locally, which version of Cython did you use?**: - * **Can the issue be reproduced under both asyncio and [uvloop](https://github.com/magicstack/uvloop)?**: Yes Hello, I am facing a peculiar problem with the way prepared statements are handled. I use the following architecture: an aiohttp application, which initializes a pool of 1 to 20 db connections on init. Data is periodically refreshed from the DB (once every few minutes for most tables). I have a special class which handles the loading of data from the DB and caches it to memory and to Redis (since multiple containers of the same app are running and I would like to minimize fetches from the DB). This class is instantiated by a factory method which creates (besides other arguments) a `load` coroutine, which gets the query passed into it by the factory. The queries have no parameters and are static throughout the runtime. The `load` function works by getting a connection from the pool and calling `connection.fetch` on the given query. As per my understanding, the query should then be turned into a prepared statement, cached in a built-in LRU cache, and reused in later calls. However, it seems that each call to `load` (which is periodic) gets a new LRU cache for some reason, creating the prepared statements anew. When I run `connection.fetch` on `SELECT * FROM pg_prepared_statements` I see that the number of prepared statements held by the connection increases with each call of `fetch`. Indeed, adding some prints to `connection.py` I found out that the statements get recreated and put into the cache on each call, since the cache is empty. I thought that perhaps it is because the connections I get from the pool differ, but since `pg_prepared_statements` is local to a session (a connection?) I think this is not the case. Indeed, limiting the size of the pool to `max_size=1` did not solve this issue. This causes my Postgres to slowly drain more and more memory until the connections are reset. Disabling the LRU cache with `statement_cache_size=0` avoids this, but I believe that this behaviour is not intended. I tried to make a minimal reproducer but haven't yet succeeded.
2hard
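Since the report mentions a minimal reproducer was attempted without success, here is a hedged sketch of one: it forces a single pooled connection and counts the session-local prepared statements after each `fetch()`. If the LRU cache behaved as expected, the count should plateau rather than grow (the DSN is a placeholder):

```python
import asyncio

import asyncpg


async def main():
    pool = await asyncpg.create_pool(
        "postgresql://postgres@localhost/postgres",  # adjust to your setup
        min_size=1,
        max_size=1,  # guarantee we always see the same connection
    )
    for i in range(5):
        async with pool.acquire() as conn:
            await conn.fetch("SELECT 1")  # same static query every time
            count = await conn.fetchval(
                "SELECT count(*) FROM pg_prepared_statements"
            )
            print(f"call {i}: {count} prepared statements")
    await pool.close()


asyncio.run(main())
```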
Title: feat: Deduplicated enqueue Body: I'm wondering if hatchet has any built-in support for some sort of deduplicated enqueue, where a task/step/workflow could be enqueued in an idempotent way, i.e. deduplicated based on its parameters. I realize that there are some tricky details here, but this would be super nice.
2hard
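Absent built-in support, a client-side sketch of what parameter-based deduplicated enqueue could look like: hash a canonical form of the payload and use an atomic Redis `SET NX` as a first-writer-wins guard. The `client.enqueue(...)` call is hypothetical, standing in for whatever Hatchet trigger API you use:

```python
import hashlib
import json

import redis

r = redis.Redis()


def dedup_key(task_name: str, payload: dict) -> str:
    # Canonical JSON so logically-equal payloads hash identically.
    blob = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return f"enqueue:{task_name}:{hashlib.sha256(blob.encode()).hexdigest()}"


def enqueue_once(client, task_name: str, payload: dict, ttl_s: int = 300) -> bool:
    # SET NX is atomic: only the first caller within ttl_s wins the slot.
    if r.set(dedup_key(task_name, payload), 1, nx=True, ex=ttl_s):
        client.enqueue(task_name, payload)  # hypothetical Hatchet call
        return True
    return False  # duplicate within the TTL window; dropped
```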
Title: Starry-eyed Supporter (150 Points) Body: ### What side quest or challenge are you solving? Get 5 people to star our repository ### Points 150 points ### Description _No response_ ### Provide proof that you've completed the task ![IMG-20241015-WA0002](https://github.com/user-attachments/assets/f6bbd793-136c-4baf-8b22-b3ca97d1b218) ![IMG-20241015-WA0003](https://github.com/user-attachments/assets/dde9eb69-c189-43d8-966c-009078e08612) ![IMG-20241014-WA0024](https://github.com/user-attachments/assets/32815a78-bc31-4200-8ac5-6f794efd4851) ![IMG-20241015-WA0009](https://github.com/user-attachments/assets/17981916-bbfe-4697-b891-5f103ac03cf7) ![IMG-20241015-WA0010(1)](https://github.com/user-attachments/assets/129de0b7-7d29-4d5a-b287-707431fb0cc9)
3misc
Title: [Bug]: Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. Body: ### Checklist - [X] The issue exists after disabling all extensions - [ ] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [ ] The issue exists in the current version of the webui - [ ] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ### Steps to reproduce the problem Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ### What should have happened? Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ### What browsers do you use to access the UI ? Google Chrome ### Sysinfo Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ### Console logs ```Shell Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` ### Additional information Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2hard
Title: DjangoObjectType duplicate models breaks Relay node resolution Body: I have exactly the same issue as #107. The proposed solution no longer works. How can this be done in the current state?
2hard
Title: tensorflow->caffe resnet_v2_152 conversion error Body: Platform: Ubuntu 16.04 Python version: 2.7 Source framework with version: Tensorflow 1.12.0 with GPU Destination framework with version: caffe 1.0.0 Pre-trained model path (webpath or webdisk path): Running scripts: `mmconvert -sf tensorflow -in imagenet_resnet_v2_152.ckpt.meta -iw imagenet_resnet_v2_152.ckpt --dstNodeName MMdnn_Output -df caffe -om tf_resnet` Traceback (most recent call last): File "/usr/local/bin/mmconvert", line 11, in <module> load_entry_point('mmdnn==0.2.3', 'console_scripts', 'mmconvert')() File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convert.py", line 102, in _main ret = convertToIR._convert(ir_args) File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 62, in _convert from mmdnn.conversion.tensorflow.tensorflow_parser import TensorflowParser File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/tensorflow/tensorflow_parser.py", line 15, in <module> from tensorflow.tools.graph_transforms import TransformGraph ImportError: No module named graph_transforms
1medium
Title: Multi-node multi-GPU training is slower than single-node multi-GPU training [BUG] Body: **Describe the bug** I used ZeRO stage 3 of DeepSpeed for fine-tuning the Qianwen2-1.5b model and observed that the **training speed of 2 nodes with 4 A10 GPUs in total is several times slower than that of a single node with 2 A10 GPUs in total**. Here are some details. The training speed of 2 nodes with 4 A10 GPUs in total: <img width="1521" alt="image (1)" src="https://github.com/user-attachments/assets/7b569170-4cdd-4851-8224-1ba404c83213"> <img width="671" alt="image" src="https://github.com/user-attachments/assets/edb69c6f-a2be-4c5a-acb0-7071fbd44ef8"> The training speed is about 8.68 s/iter, and the forward and backward latencies are 1.6 s and 2.51 s. However, the training speed of a single node with 2 A10 GPUs in total: ![image (2)](https://github.com/user-attachments/assets/2aeea642-7158-4763-986b-7a3ecb77004c) <img width="660" alt="image (3)" src="https://github.com/user-attachments/assets/463e50e5-8b67-45e5-9049-4a17557e8ef5"> The training speed is about 2.46 s/iter, and the forward and backward latencies are 357 ms and 673 ms. The above results show that 2-node 4-GPU training is much slower than single-node training in **the forward and backward passes**. I thought it was a network bandwidth problem, but my calculations suggest it isn't, as follows: the average receive and send bandwidths during training were 8.74 Gbit/s and 9.28 Gbit/s, respectively. Model weights size: 1.5×10^8 × 16 bit; gradient size: 1.5×10^8 × 16 bit; the communication cost is: 3 × 1.5×10^8 × 16 bit / 2 / (8.74×10^9) = 0.41 s. So I want to know: what is wrong with these results? I'd like to ask for people's help. Thanks!
2hard
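The report's back-of-the-envelope communication estimate can be reproduced directly; a sketch using the same figures (note the report uses 1.5e8 parameters, although a 1.5B model is nominally 1.5e9; the factor of 3 covers parameter gathers for forward and backward plus the gradient reduce under ZeRO-3, and the /2 follows the original's split across the two nodes):

```python
params = 1.5e8          # parameter count as used in the report
bits_per_param = 16     # fp16/bf16 weights and gradients
bandwidth_bps = 8.74e9  # measured receive bandwidth, bits/s

traffic_bits = 3 * params * bits_per_param  # fwd gather + bwd gather + grad reduce
comm_seconds = traffic_bits / 2 / bandwidth_bps

print(f"estimated communication time: {comm_seconds:.2f} s")  # ~0.41 s
# Far below the ~6 s/iter slowdown actually observed, supporting the
# report's conclusion that raw bandwidth alone does not explain the gap.
```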
Title: [Bug]: Calling start() multiple times on a macos timer doesn't stop the previous timer Body: ### Bug summary When calling `timer.start(); timer.start()` on an already running timer, the previous timer should be stopped before starting a new one. On the macosx backend this causes two timers to be running under the hood. ### Code for reproduction ```Python import time import matplotlib.pyplot as plt timer = plt.figure().canvas.new_timer(interval=1000) timer.add_callback(lambda: print(f"{time.ctime()}")) timer.start() timer.start() plt.pause(2) ``` ### Actual outcome 4 prints, 2 per second ``` Tue Nov 5 09:07:27 2024 Tue Nov 5 09:07:27 2024 Tue Nov 5 09:07:28 2024 Tue Nov 5 09:07:28 2024 ``` ### Expected outcome 2 prints, 1 per second ``` Tue Nov 5 09:07:27 2024 Tue Nov 5 09:07:28 2024 ``` ### Additional information _No response_ ### Operating system macos ### Matplotlib Version main ### Matplotlib Backend macosx ### Python version _No response_ ### Jupyter version _No response_ ### Installation None
1medium
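The expected semantics (a `start()` on a running timer first stops it) can be illustrated without matplotlib's native macosx timer; a sketch using a plain `threading.Timer`, not the actual backend code:

```python
import threading


class IdempotentTimer:
    """Repeating timer whose start() is safe to call more than once."""

    def __init__(self, interval_s, callback):
        self.interval_s = interval_s
        self.callback = callback
        self._timer = None

    def start(self):
        self.stop()  # stopping first makes repeated start() calls safe
        self._timer = threading.Timer(self.interval_s, self._fire)
        self._timer.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def _fire(self):
        self.callback()
        self.start()  # re-arm for repeating behaviour
```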
Title: Basket may be reused if an error happens after an order was placed Body: ### Issue Summary If an error happens in `OrderPlacementMixin.handle_successful_order` (e.g. a mail can't be sent) then `PaymentDetailViews.submit` will subsequently thaw the basket (https://github.com/django-oscar/django-oscar/blob/d076d04593acf2c6ff9423e94148bb491cad8bd9/src/oscar/apps/checkout/views.py#L643). The basket will then remain open and thus be re-used, but the [default implementation](https://github.com/django-oscar/django-oscar/blob/d076d04593acf2c6ff9423e94148bb491cad8bd9/src/oscar/apps/order/utils.py#L34) of `OrderNumberGenerator.order_number` will prevent any order from being created from the same basket, as that would result in [duplicate order numbers](https://github.com/django-oscar/django-oscar/blob/d076d04593acf2c6ff9423e94148bb491cad8bd9/src/oscar/apps/order/utils.py#L59). ### Steps to Reproduce 1. Create a basket 1. Make sure that you have an order_placed email that would be sent 1. Ensure that sending an email will fail, e.g. by using the smtp backend with a server that doesn't exist 1. Submit the basket, observe an order object being created 1. Ensure that sending an email will succeed 1. Try submitting the basket again; subsequent submits fail because `There is already an order with number ...` 1. The user is now stuck with a basket that can't ever be submitted.
2hard
Title: [x] ERROR: [Error 32] El proceso no tiene acceso al archivo porque estß siendo utilizado por otro proceso: 'trape.config' Body: After entering the ngrok token and the API key, it was supposed to let me continue, but it only throws this message: - Congratulations! Successful configuration, now enjoy Trape! [x] ERROR: [Error 32] El proceso no tiene acceso al archivo porque estß siendo utilizado por otro proceso: 'trape.config' (Error 32: the process cannot access the file because it is being used by another process.) Has this happened to anyone else, and could you tell me how you solved it? Thanks.
1medium
Title: Gunicorn instrumentation with statsd_host Body: I would like to configure `--statsd-host` for [Gunicorn instrumentation](https://docs.gunicorn.org/en/stable/instrumentation.html) in `tiangolo/uvicorn-gunicorn-fastapi:python3.7-alpine3.8`. So far, I have not had success, e.g. with `echo "statsd_host = 'localhost:9125'" >> /gunicorn_conf.py` in `/apps/prestart.sh`. Is there a better way to try this, and is it possible at all?
1medium
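One hedged approach for this image: gunicorn treats `statsd_host` as an ordinary config-file setting, so extending the image's config module and pointing the image at it (via the `GUNICORN_CONF` environment variable, if your image version supports it; worth verifying) may be cleaner than appending lines in `prestart.sh`:

```python
# custom_gunicorn_conf.py -- set GUNICORN_CONF=/app/custom_gunicorn_conf.py
# (the path and env var follow the tiangolo image's conventions; verify
# against your image version)

# Reuse everything the base image configures in /gunicorn_conf.py ...
from gunicorn_conf import *  # noqa: F401,F403

# ... then add the statsd instrumentation settings.
statsd_host = "localhost:9125"
statsd_prefix = "myapp"
```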
Title: add feature scaling transformers Body: Whenever I want to scale some numerical features I'm always importing transformers from sklearn.preprocessing. We know that the sklearn transformers don't take the names of the variables we want to scale (like feature-engine transformers do) and they don't return a data frame. It becomes somewhat frustrating when using a Pipeline to preprocess the data, as the next transformer may need a data frame to transform the variables. It would be very useful to users of a data preprocessing Pipeline if we added some feature scaling transformers like MinMaxScaler, StandardScaler, RobustScaler, etc. -> As a simple alternative solution, we can create a custom transformer as given below
```
from sklearn.preprocessing import StandardScaler
from sklearn.base import BaseEstimator

class CustomScaler(BaseEstimator):

    def __init__(self, variables):
        self.variables = variables
        self.scaler = StandardScaler()

    def fit(self, X, y=None):
        # Learn scaling parameters for the selected variables only.
        self.scaler.fit(X[self.variables])
        return self

    def transform(self, X):
        # Return the full data frame with just the selected variables scaled.
        X = X.copy()
        X[self.variables] = self.scaler.transform(X[self.variables])
        return X
```
**Additional context** Feature scaling is also an important feature engineering step for linear models. We can easily handle scalable variables in a preprocessing pipeline.
1medium
Title: jwt decorator Body: Hi, I found an issue with the `jwt_required` decorator. I don't understand why it works when used like:
```
@custom_api.route('/resellers/<token>/registrations', methods=['GET'])
@jwt_required
def get_resellers(token):
    ...
```
but NOT when I'm using https://flask-restless.readthedocs.io/en/stable/ where I can use methods as preprocessors:
```
@classmethod
@jwt_required
def get_many_preprocessor(cls, search_params=None, **kw):
    print "Here not work"
```
This worked for me with `flask-jwt`; what could the problem be?
1medium
Title: Refactor BERTTokenizer Body: The [BERTTokenizer](https://github.com/ludwig-ai/ludwig/blob/00c51e0a286c3fa399a07a550e48d0f3deadc57d/ludwig/utils/tokenizers.py#L1109) is using torchtext. We want to remove torchtext as a dependency, so this Tokenizer has to be refactored to not use it.
1medium
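A sketch of one possible direction for the refactor, swapping torchtext for Hugging Face's tokenizer; the class shape is illustrative, and the exact interface Ludwig expects should be checked against the tokenizer registry in the codebase:

```python
from typing import List

from transformers import BertTokenizerFast


class BERTTokenizer:
    """Drop-in style BERT tokenizer that avoids torchtext."""

    def __init__(self, pretrained_model_name_or_path: str = "bert-base-uncased"):
        self.tokenizer = BertTokenizerFast.from_pretrained(
            pretrained_model_name_or_path
        )

    def __call__(self, text: str) -> List[str]:
        # WordPiece token strings, roughly matching what the torchtext-based
        # tokenizer produced.
        return self.tokenizer.tokenize(text)
```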
Title: FR: Type Hints/Stubs Body: ### Short description Feature request: Could you please add type hints to the source (and/or in stub files) and add a `py.typed` marker file per [PEP 561](https://peps.python.org/pep-0561/) ### Code to reproduce N/A ### Tested environment(s) N/A ### Additional context This is a feature request.
1medium
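For reference, a sketch of the PEP 561 wiring the request implies: ship an empty `py.typed` marker with the package data so type checkers pick up inline hints (setup.py form shown; the names are assumptions about the project layout, not its actual build config):

```python
# setup.py (excerpt) -- illustrative, not pyqtgraph's actual build config
from setuptools import setup

setup(
    name="pyqtgraph",
    packages=["pyqtgraph"],
    # Ship the empty marker file so type checkers honour inline hints:
    package_data={"pyqtgraph": ["py.typed"]},
    zip_safe=False,  # mypy cannot read hints from zipped eggs
)
```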
Title: Out of Disk Space Body: I have had Mycodo running for a while without any issues. I haven't been keeping an eye on it. Today I connected to my Raspberry Pi where Mycodo is installed and found that I am completely out of disk space. I am using Rasbian Lite and can't imagine what besides Mycodo would have filled up all the disk space. I have tried deleting some old Mycodo backups but even after doing so, the RPi reports that no space is available (when I use "df -h"). What could be causing this issue?
1medium
Title: Add testing section to docs Body: It would be great to see a testing section in the documentation; I would especially like to see service testing (DI) covered.
1medium
Title: Datasetbuilder Local Download FileNotFoundError Body: ### Describe the bug I was trying to download a dataset and save it as parquet, following the [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage) from Huggingface. However, during execution I get a FileNotFoundError. I debugged the code and it seems there is a bug: first a `.incomplete` folder is created, and before its contents are moved, the following code deletes the directory ([Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984)); as a result I get:
```
FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete '
```
### Steps to reproduce the bug
```
from datasets import load_dataset_builder
from pathlib import Path

parquet_dir = "~/data/Parquet/"
Path(parquet_dir).mkdir(parents=True, exist_ok=True)

builder = load_dataset_builder(
    "rotten_tomatoes",
)
builder.download_and_prepare(parquet_dir, file_format="parquet")
```
### Expected behavior Downloads the files and saves them as parquet ### Environment info Ubuntu, Python 3.10 ``` datasets 2.19.1 ```
1medium
Title: Parallel processing with n_jobs > 1 failing on Azure Linux VM Ubuntu 20.04 Body: Hello, Working with auto-sklearn has been a journey. Running small experiments for 1 hour on my local machines (one running WSL2 and the other Manjaro Linux) ends without problems. I have been using the argument `n_jobs=-1` to run it on all cores. Since I want to let the script run for 10 hours or so, I want to be able to run this on a remote server that I can just leave on. After installing all the packages and setting up a separate environment for auto-sklearn, running the same script without `n_jobs` works fine. The error that I am getting is the following:
```
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
return Popen(process_obj)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
```
The execution freezes; the script does not continue after the error. I have to kill it, and then this appears:
```
self.start(timeout=timeout)
self.start(timeout=timeout)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/client.py", line 949, in start
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/client.py", line 949, in start
sync(self.loop, self._start, **kwargs)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/utils.py", line 337, in sync
e.wait(10)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 558, in wait
sync(self.loop, self._start, **kwargs)
sync(self.loop, self._start, **kwargs)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/utils.py", line 337, in sync
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/utils.py", line 337, in sync
signaled = self._cond.wait(timeout)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 306, in wait
e.wait(10)
e.wait(10)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 558, in wait
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 558, in wait
gotit = waiter.acquire(True, timeout)
KeyboardInterrupt
signaled = self._cond.wait(timeout)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 306, in wait
signaled = self._cond.wait(timeout)
File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 306, in wait
gotit = waiter.acquire(True, timeout)
gotit = waiter.acquire(True, timeout)
KeyboardInterrupt
```
I am not able to debug this by myself. I am not an expert and do not know anything about dask or parallel computing. I would be happy if someone could help. Thanks in advance
1medium
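The first traceback is the standard multiprocessing spawn error, and its usual fix is the main guard it mentions; a hedged sketch, with dataset and model settings that are illustrative rather than the reporter's script:

```python
import autosklearn.classification
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split


def main():
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
    automl = autosklearn.classification.AutoSklearnClassifier(
        time_left_for_this_task=3600,
        n_jobs=-1,
    )
    automl.fit(X_train, y_train)
    print(automl.score(X_test, y_test))


# The guard lets spawned workers import this module without re-running fit():
if __name__ == "__main__":
    main()
```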
Title: GoBot breaks when calling Tensorflow Body: The GoBot example from the tutorial prints the following at runtime: 2020-05-12 08:01:47.577 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 101: [saving vocabulary to /home/sgladkoff/Documents/MyWork/assistant_bot/word.dict] WARNING:tensorflow:From /home/sgladkoff/Documents/MyWork/env/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py:37: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. WARNING:tensorflow:From /home/sgladkoff/Documents/MyWork/env/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py:222: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From /home/sgladkoff/Documents/MyWork/env/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py:222: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead. It looks like the code in deeppavlov/core/models/tf_model.py does not correspond to the current state of Tensorflow... Or am I doing something wrong?
1medium
Title: Matplotlib version (>=3.0.0) backends don't support Yellowbrick Body: **Describe the issue** @DistrictDataLabs/team-oz-maintainers The error says Yellowbrick 0.9 requires matplotlib <3.0 and >=1.5.1. So, matplotlib works fine as long as it is not updated. I recommend users not update matplotlib, as the backends of version 3.0.2 don't support Yellowbrick.
1medium
Title: Docs: DTO tutorial should mention how to exclude from collection of nested union models Body: ### Summary I was going through the documentation on [this](https://docs.litestar.dev/2/tutorials/dto-tutorial/03-nested-collection-exclude.html); it mentions how to exclude fields from a collection of nested models. However, it needs an update that mentions how to exclude fields if someone uses a collection of nested union models. For example:
```py
# Assuming correct imports are in place

@dataclass
class Address:
    street: str
    city: str
    country: str

@dataclass
class Person:
    name: str
    age: int
    email: str
    address: Address
    children: list['Person' | None]

# The DTO will change like:
class ReadDTO(DataclassDTO[Person]):
    config = DTOConfig(exclude={"email", "address.street", "children.0.0.email", "children.0.0.address"})  # We need to provide two zeroes instead of one.
```
Now there is a line in the document which states `Given a generic type, with an arbitrary number of type parameters (e.g., GenericType[Type0, Type1, ..., TypeN]), we use the index of the type parameter to indicate which type the exclusion should refer to. For example, a.0.b, excludes the b field from the first type parameter of a, a.1.b excludes the b field from the second type parameter of a, and so on.` However, it is not very clear, and an example in the document might help. ### Working without union (as per example) <img width="1526" alt="Image" src="https://github.com/user-attachments/assets/a271c46d-af8a-4c60-8a23-bebbfce53665" /> ### Union introduced <img width="1438" alt="Image" src="https://github.com/user-attachments/assets/36233b8d-839f-40b9-9217-fd7b9fcc1298" /> ### Union introduced (add extra zero to exclude) <img width="1447" alt="Image" src="https://github.com/user-attachments/assets/0f9b517c-8044-4367-819a-6a66dfb63082" /> > [!IMPORTANT] > The order of the union matters too, and therefore the key must be changed based on the order.
1medium
Title: TypeError: <flask_script.commands.Shell object at 0x0000000004D13C88>: 'dict' object is not callable Body: On Windows 7 with PyCharm, for Chapter 5.10 "**Integration with the Python Shell**", I register make_context: `def make_shell_context(): return dict(app=app,db=db,User=User,Role=Role)` and `manager.add_command("shell", Shell(make_context=make_shell_context()))`. When I run the command "python app/welcome.py shell" in cmd or the terminal in PyCharm, this error occurs: (venv) C:\Users\biont.liu\PycharmProjects\flasky>python app/welcome.py shell Traceback (most recent call last): File "app/welcome.py", line 106, in <module> manager.run() File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\__init__.py", line 417, in run result = self.handle(argv[0], argv[1:]) File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\__init__.py", line 386, in handle res = handle(*args, **config) File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\commands.py", line 216, in __call__ return self.run(*args, **kwargs) File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\commands.py", line 304, in run context = self.get_context() File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\commands.py", line 293, in get_context return self.make_context() TypeError: <flask_script.commands.Shell object at 0x0000000004D13C88>: 'dict' object is not callable **When I return a list/string in make_shell_context(), it prompts "'list'/'str' object is not callable."** **Further steps:** 1. For command "python app/welcome.py db init": Creating directory C:\Users\biont.liu\PycharmProjects\flasky\migrations ... done Creating directory C:\Users\biont.liu\PycharmProjects\flasky\migrations\versions ... done Generating C:\Users\biont.liu\PycharmProjects\flasky\migrations\alembic.ini ... done Generating C:\Users\biont.liu\PycharmProjects\flasky\migrations\env.py ... done Generating C:\Users\biont.liu\PycharmProjects\flasky\migrations\README ... done Generating C:\Users\biont.liu\PycharmProjects\flasky\migrations\script.py.mako ... done Please edit configuration/connection/logging settings in 'C:\\Users\\biont.liu\\PycharmProjects\\flasky\\migrations\\alembic.ini' before proceeding. 2. For command "python app/welcome.py db migrate -m "initial migration": INFO [alembic.runtime.migration] Context impl SQLiteImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. INFO [alembic.env] No changes in schema detected. 3. For command "python app/welcome.py db upgrade": INFO [alembic.runtime.migration] Context impl SQLiteImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
1medium
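The traceback ('dict' object is not callable inside `get_context`) is consistent with passing the *result* of `make_shell_context` where Flask-Script expects the callable itself; a sketch of the likely fix is to drop the parentheses (this assumes `app`, `db`, `User`, and `Role` are defined in the surrounding module, as in the book's code):

```python
from flask_script import Manager, Shell

manager = Manager(app)


def make_shell_context():
    # app, db, User, Role come from the surrounding module.
    return dict(app=app, db=db, User=User, Role=Role)


# Pass the function itself, not its return value:
manager.add_command("shell", Shell(make_context=make_shell_context))
```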
Title: Add ability to document OPTIONS methods Body: Hey there, I'm trying to document a simple ModelViewSet on which I have overridden the OPTIONS method, but I can't figure out why the method does not appear in swagger. Can anyone help?
```python
class ViewpointViewSet(viewsets.ModelViewSet):
    serializer_class = ViewpointSerializer

    @swagger_auto_schema(operation_description="OPTIONS /viewpoints/")
    def options(self, request, *args, **kwargs):
        if self.request.user.is_anonymous:
            raise PermissionDenied

        viewpoint_labels = ViewpointLabelSerializer(
            Viewpoint.objects.all(), many=True
        ).data

        return Response({
            'viewpoints': viewpoint_labels,
        })
```
1medium
Title: Add support for a `sort` argument in WordNet methods Body: WordNet objects support a series of methods, such as `hypernyms`, `antonyms`, etc., that under the hood use a private method called `_related`, which supports an argument called `sort`, `True` by default. This argument sorts the output objects by name. For example, see: https://github.com/nltk/nltk/blob/e2d368e00ef806121aaa39f6e5f90d9f8243631b/nltk/corpus/reader/wordnet.py#L134-L135 However, in some cases we don't need the output to be sorted, and we may be performing these operations many times on long lists, which incurs a considerable penalty because of needless sorting going on under the hood. Thus, I believe it'd be important for such methods to support a `sort` argument (as `_related` does), with a default value of `True` to avoid breaking backward compatibility.
1medium
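A sketch of the API change being requested: thread a `sort` flag through the public relation methods into `_related`, defaulting to `True` for backward compatibility. Only two methods are shown, and `_lookup` is a stand-in for the real pointer lookup; a real patch would cover each relation method:

```python
class Synset:
    """Illustrative shape for nltk/corpus/reader/wordnet.py, not a real patch."""

    def _related(self, relation_symbol, sort=True):
        targets = self._lookup(relation_symbol)  # stand-in for the real lookup
        return sorted(targets) if sort else targets

    def hypernyms(self, sort=True):
        # Forward the flag instead of hard-coding sort=True:
        return self._related("@", sort=sort)

    def instance_hypernyms(self, sort=True):
        return self._related("@i", sort=sort)
```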
Title: Smart things integration gives "reached max subscriptions limit" even after it has been "fixed" on the new version Body: ### The problem I know this has been an issue before, but it still happens in version 2025.3.3, even though I waited 10 hours as people suggested. It actually appears no matter what I do with the device: turning it on and off, turning the volume up or down, etc. If another issue exists for this (considering 2025.3.3), please redirect me there. ### What version of Home Assistant Core has the issue? core-2025.3.3 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant OS ### Integration causing the issue Smart Things ### Link to integration documentation on our website _No response_ ### Diagnostics information _No response_ ### Example YAML snippet ```yaml ``` ### Anything in the logs that might be useful for us? ```txt ``` ### Additional information _No response_
1medium
Title: dvc exp run (or dvc repro) in monorepo: inefficient crawling Body: # Bug Report ## Description In a monorepo scenario with a `.dvc` directory at the root of the monorepo and multiple subdirectory projects (each with their own `dvc.yaml` file), `dvc repro` seems to be checking the entire monorepo even when explicitly given a `dvc.yaml` file from a subdirectory (and even when run from that subdirectory). I am not sure why it does that, but with a particularly large monorepo this can slow things down considerably. For example, with the example repo below, when set to 1000 projects this increases the time to run simple experiments from about 2 seconds to about 24 seconds (1000 projects is a lot, but they are very simple and so is their directory structure). Even if the other directories don't have a `dvc.yaml` file in them at all, `dvc repro` still tries to collect stages from there (whereas I would expect it not to even look outside of the PWD). With `dvc exp run` the pattern is the same, only a bit more is going on there, since the command does more than just `dvc repro`. ### Reproduce There is a testing repo [here](https://github.com/iterative/monorepo-dvc-repro) with instructions on how to test this and reproduce the issue in the [README](https://github.com/iterative/monorepo-dvc-repro/blob/main/README.md). ### Expected I would expect `dvc repro` to only scan the PWD of the `dvc.yaml` file (and its subdirectories) and not go through the entire directory tree. The same for `dvc exp run`. **Additional Information (if any):** Here are some logs that I generated with verbose runs of `dvc repro` and `dvc exp`. The first two are outputs when this is run from a single project in a monorepo with 5 projects in total (all of them with their own `dvc.yaml`). The last one is run in a monorepo with 2 projects, one of which does not contain any `dvc.yaml` file at all. [dvc_repro.log](https://github.com/iterative/dvc/files/15040741/dvc_repro.log) [dvc_exp_run.log](https://github.com/iterative/dvc/files/15040743/dvc_exp_run.log) [dvc_exp_run_projects_wo_dvc.log](https://github.com/iterative/dvc/files/15040742/dvc_exp_run_projects_wo_dvc.log)
1medium
Title: api: support records that expire automatically? Body:
1medium
Title: [QUESTION] docker build on ARM Body: Hi, didn't want to file a bug report for this, because I'm sure it is not a bug per se, but rather an issue relating to architecture. I wanted to try whoogle on a Raspberry Pi, running Raspbian (an ARM version of Debian). I tried starting a Docker container, but that did not work because the code is compiled for x64, not ARM. So then I tried a docker build (using the command as instructed on your webpage). That fails to build though. It fails trying to make _cffi_backend.o, because it cannot find the header file ffi.h. I see that the -I include paths for gcc are both /usr/include/ffi and /usr/include/libffi. I have libffi-dev installed, but neither of those paths exists on my system. My system does have ffi.h; it's in /usr/include/arm-linux-gnueabihf/ So two questions: - Do you plan to include armv7l builds? - What do I need to change to make the docker build look in the right place? Any hints are appreciated! Thanks!
1medium