text (string, lengths 20 to 57.3k) | labels (class label, 4 classes)
---|---|
Title: [BUG] An error occurred during TTS: Incorrect padding
Body: **Describe the bug**
[-] An error occurred during TTS: Incorrect padding
MoviePy then fails to find the mp3 file. | 1medium
|
Title: Slow groupby after adding column from array
Body: I have an original file with 100M lines. I create a dfv by importing it from .csv via vaex.from_csv. I filter some of the data frame according to certain conditions to create dfv_filtered. I run groupby and aggregate via sum on one of the columns. This runs fine in about ~10 sec.
I now take dfv_filtered, and cast one of its columns to an array via dfv_filtered.x.values. I transform this array into a numpy array and manipulate it to my liking, then add it to dfv_filtered. I do so via dfv_filtered['new column'] = name_of_np_array. I then create yet another column by multiplying dfv_filtered['new_column'] * dfv_filtered['existing_column']. Now when I run groupby it takes several minutes. I don't understand why. The dtypes are all the same, the dataframe seems virtual still, why would it take much longer?
If I simply take dfv_filtered and copy one of its existing columns over and over and add it as a new column each time, and then run groupby, it still runs ~10 sec.
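For reference, a minimal sketch of the workflow described above (file, column, and variable names are placeholders, not my actual code):
```python
import numpy as np
import vaex

dfv = vaex.from_csv("data.csv", convert=True)        # ~100M rows
dfv_filtered = dfv[dfv.existing_column > 0]          # filter on some condition

# fast (~10 sec): groupby/sum on the still-virtual filtered dataframe
dfv_filtered.groupby("group_col", agg={"total": vaex.agg.sum("existing_column")})

# materialize one column, manipulate it in numpy, add it back
arr = np.asarray(dfv_filtered.existing_column.values)
new_arr = arr * 2.0                                   # stand-in for my manipulation
dfv_filtered["new_column"] = new_arr
dfv_filtered["product_col"] = dfv_filtered["new_column"] * dfv_filtered["existing_column"]

# slow (several minutes): the same kind of groupby after adding the numpy-backed column
dfv_filtered.groupby("group_col", agg={"total": vaex.agg.sum("product_col")})
```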
Which step of my process is the one making it slower? | 2hard
|
Title: Text block detection
Body: Hi Team..
Is there a way to detect text blocks using this tool?

In the example above, I need the entire text content (which is an address) to be detected as a single block rather than as separate text fragments.
Any help will be appreciated!!
| 3misc
|
Title: Error 429 + Scraper gives up
Body: Many moons ago, Internet Archive added some rate limiting that seems to also affect Wayback Machine.
( See discussion on similar project here https://github.com/buren/wayback_archiver/issues/32 )
The scraper scrapes too fast, and gets IP banned for 5 minutes by Wayback Machine.
As a result, all the remaining URLs in the pipeline fail repeatedly, Scrapy gives up on all of them and says "we're done!"
```
...
2023-11-09 22:09:57 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://web.archive.org/cdx/search/cdx?url=www.example.com/blog/stuff&output=json&fl=timestamp,original,statuscode,digest> (failed 3 times): 429 Unknown Status
2023-11-09 22:09:57 [scrapy.core.engine] INFO: Closing spider (finished)
```
I see two issues here:
1. Add a global rate limit (I don't think the concurrency flag covers this?)
1.b. If we get a 429, increase the delay? (Ideally should not occur, as the limit appears to be constant? Although this page https://web.archive.org/429.html suggests that the error can occur randomly if Wayback is getting a lot of traffic from other people.)
Also, if we get a 429, that seems to mean the IP has been banned for 5 minutes, so we should just pause the scraper for that time? (Making any requests during this time may possibly extend the block?)
2. (Unnecessary if previous points handled?) Increase retry limit from 3 to something much higher? Again, this may be unnecessary if we approach scraping with a "backoff" strategy.
---
TODO:
1. Find out exactly what the rate limit is: May be 5 per minute, or may be 15 per minute? (12 or 4s delay respectively.)
They seem to have changed it several times. Not sure if there are official numbers.
https://archive.org/details/toomanyrequests_20191110
This page says it's 15. It only mentions _submitting_ URLs, but it appears to cover retrievals too.
2. Find out if this project already does rate limiting. Edit: Sorta, but not entirely sufficient for this use case? (e.g. no 5-minute backoff on 429, autothrottle does not guarantee <X/minute, etc.)
Seems to be using Scrapy's autothrottle, so the fix may be as simple as updating the start delay and default concurrency:
`__main__.py`
```
'AUTOTHROTTLE_START_DELAY': 4, # aiming for 15 per minute
```
and
```
parser.add_argument('-c', '--concurrency', default=1.0, help=(
```
This doesn't seem to be sufficient to limit to 15/minute though, as I am getting mostly >15/min with these settings (and as high as 29 sometimes). But Wayback did not complain, so it seems the limit is higher than that.
More work needed. May report back later.
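One possible direction (just a sketch; these are standard Scrapy settings rather than anything this project necessarily exposes, and the numbers are guesses): put a hard floor on request spacing instead of relying on AutoThrottle alone, and retry 429s more patiently.
```python
# Hypothetical Scrapy settings sketch: a fixed DOWNLOAD_DELAY caps the request
# rate, while AutoThrottle only steers the *average* concurrency (see Edit below).
custom_settings = {
    "DOWNLOAD_DELAY": 4,                     # ~15 requests per minute
    "RANDOMIZE_DOWNLOAD_DELAY": False,
    "CONCURRENT_REQUESTS_PER_DOMAIN": 1,
    "AUTOTHROTTLE_ENABLED": True,
    "AUTOTHROTTLE_START_DELAY": 4,
    "RETRY_HTTP_CODES": [429, 500, 502, 503, 504],
    "RETRY_TIMES": 10,                       # well above the default
}
```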
Edit: AutoThrottle docs say `AUTOTHROTTLE_TARGET_CONCURRENCY` represents the **average,** not the maximum. Which means if Wayback has a hard limit of X req/sec, setting X as the target would lead by definition to exceeding that limit 50% of the time. | 2hard
|
Title: How to replace the optimizer, can you give specific steps?
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
How to replace the optimizer, can you give specific steps?
### Additional
_No response_ | 1medium
|
Title: YOLO + OpenCV: Stream Decoding Issues
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I am attempting to use YOLO to perform real-time object detection on an RTSP stream from my Raspberry Pi connected to a camera. When I process the stream in real-time (a direct stream input), there are no artifacts, and it runs fine. However, when I process the stream frame by frame, I get many artifacts and the error 'h264 error while decoding MB'. Could this be related to the rate at which frames are being processed? I am running on a powerful machine, so I can rule out hardware limitations. Is there a way I can process the stream frame by frame without experiencing these artifacts?

### Additional
_No response_ | 2hard
|
Title: [Regression] IterableDataset is broken on 2.20.0
Body: ### Describe the bug
In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times.
The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't.
### Steps to reproduce the bug
Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`)
```
#!/bin/bash
# List of dataset versions to test
versions=("2.17.0" "2.20.0")
# Loop through each version
for version in "${versions[@]}"; do
# Install the specific version of the datasets library
pip3 install -q datasets=="$version" 2>/dev/null
# Run the Python script
python3 - <<EOF
from datasets import IterableDataset
from datasets.features.features import Features, Value
def test_gen():
    yield from [{"foo": i} for i in range(10)]
features = Features([("foo", Value("int64"))])
d = IterableDataset.from_generator(test_gen, features=features)
mapped = d.map(lambda row: {"foo": row["foo"] * 2})
column = mapped.select_columns(["foo"])
print("Version $version - Iterate Once:", list(column))
print("Version $version - Iterate Twice:", list(column))
EOF
done
```
The output looks like this:
```
Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Twice: []
```
### Expected behavior
The expected behavior is that version 2.20.0 should behave the same as 2.17.0.
### Environment info
`datasets==2.20.0` on any platform. | 2hard
|
Title: Already started
Body: Already started
This machine's ID: 408D5C42CC08
_Originally posted by @lookoupai in https://github.com/yeongpin/cursor-free-vip/issues/4#issuecomment-2585373086_
| 3misc
|
Title: TorchModuleWrapper serialization issue
Body: I would like to open this issue to revive a discussion started in a previous issue [(#19226)](https://github.com/keras-team/keras/issues/19226). While the previous issue seems to be inactive, the potential bug seems to still be present. I hope this is fine.
The problem arises when trying to save a model containing a `TorchModuleWrapper` layer (therefore using PyTorch as backend).
I referenced the original issue and in particular my latest comment below for more details:
> This bug is currently still present. The following is a minimal snippet that can reproduce it:
> ```python
> import os
> os.environ["KERAS_BACKEND"] = "torch"
> import torch
> import keras
>
> torch_module = torch.nn.Linear(4,4)
> keras_layer = keras.layers.TorchModuleWrapper(torch_module)
>
> inputs = keras.Input(shape=(4,))
> outputs = keras_layer(inputs)
> model = keras.Model(inputs=inputs, outputs=outputs)
>
> model.save('./serialized.keras')
> ```
>
> The error is:
>
> ```
> UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
> ```
> generated in [keras.src.saving.serialization_lib.serialize_keras_object](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/saving/serialization_lib.py#L150)
>
> It is worth noting that manually using [`get_config`](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/utils/torch_utils.py#L141) and [`from_config`](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/utils/torch_utils.py#L151) to serialize and deserialize (in memory) produce the correct result:
>
> ```python
> torch_linear = torch.nn.Linear(4,4) # xA^T+b with initialized weights
> wrapped_torch = TorchModuleWrapper(torch_linear) # Wrap it
>
> # get its config, and rebuild it
> torch_linear_from_config = keras.layers.TorchModuleWrapper.from_config(wrapped_torch.get_config()).module
>
> # assert all parameters are the same
> assert (torch_linear.weight == torch_linear_from_config.weight).all()
> assert (torch_linear.bias == torch_linear_from_config.bias).all()
> ```
>
> What `get_config()` does is map `module` (a torch object) to its serialized string (coming from `torch.save(self.module, buffer)`). I believe it is wrong to use the utf-8 in [serialize_keras_object(obj)](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/saving/serialization_lib.py#L154), since that encoding is specifically meant for text and not arbitrary bytes.
>
> Does anybody have an idea about it?
> Thank you for any help on this!
>
> I got this error with both:
> - python 3.10, keras 3.7.0, torch 2.5.1+cu124
> - python 3.11, keras 3.8.0, torch 2.5.1+cu124
>
_Originally posted by @MicheleCattaneo in [#19226](https://github.com/keras-team/keras/issues/19226#issuecomment-2607726028)_
As I am highly interested in using Keras3 with PyTorch modules, I am willing to contribute to a potential solution to this issue. I would however appreciate some guidance, as I am not very familiar with the Keras code base.
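One direction I could explore (purely a sketch from my side, not a confirmed fix, and the helper names below are made up): base64-encode the raw `torch.save` bytes so the config only ever contains text, and decode it back on load:
```python
import base64
import io

import torch

# Hypothetical helpers: keep the serialized module as base64 text so that
# serialize_keras_object never has to utf-8 decode arbitrary bytes.
def module_to_config_value(module: torch.nn.Module) -> str:
    buffer = io.BytesIO()
    torch.save(module, buffer)
    return base64.b64encode(buffer.getvalue()).decode("ascii")

def module_from_config_value(value: str) -> torch.nn.Module:
    buffer = io.BytesIO(base64.b64decode(value))
    return torch.load(buffer, weights_only=False)  # full module, not just weights
```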
Thank you for any help! | 1medium
|
Title: [FEATURE] Create a New Topbar Component
Body: #### **Overview**
The **default top bar** in every Preswald app is currently **hardcoded** inside `preswald/frontend/layout/`. This should be refactored into a **separate Preswald component**.
### Adding a new component
https://docs.preswald.com/addnewcomponent
#### **Changes Required**
1. **Move existing top bar code** from the default layout into its own widget, similar to other components such as selectbox
2. **Expose `topbar` as a Preswald component** that users can explicitly include:
```python
from preswald import topbar
topbar()
```
3. **Remove the sidebar toggle button** from the top bar (since it is now in the sidebar).
4. **Ensure default behavior remains unchanged** for apps that do not include `topbar()` explicitly.
#### **Testing**
- Create a sample preswald app using `preswald init`, test it both with and without `topbar()`, and make sure it all works
#### **Update Documentation**
- Add `topbar` documentation to `docs/sdk/topbar.md`, including examples and screenshots.
- Update `preswald/tutorial` with an example of how to use the `topbar` component.
- Run `preswald tutorial` and verify that the top bar is included only when explicitly added.
| 1medium
|
Title: Dynamic configuration / environment variables / etc with book builds
Body: ### Describe the problem/need and solution
Currently we use a static configuration file (`_config.yml`) for all of the book's configuration. However, there are some cases where you want to dynamically choose configuration at build time. For example, "set a configuration value based on an environment variable."
This isn't currently possible with static configuration, but it *is* possible in Sphinx. We could find some way to allow a user to dynamically update their configuration (or run arbitrary Python code) at build time.
### Guide for implementation
**Current build process**
Here's where we invoke Sphinx:
https://github.com/executablebooks/jupyter-book/blob/aedee257645ee41906c4d64f66f71b7f0dc7acfa/jupyter_book/cli/main.py#L307-L321
In that case, we explicitly set `noconfig=True`, which means that Sphinx does not expect any `conf.py` file to exist.
We then generate a dictionary of Sphinx config, and pass it to the Sphinx build command as "overrides":
https://github.com/executablebooks/jupyter-book/blob/aedee257645ee41906c4d64f66f71b7f0dc7acfa/jupyter_book/sphinx.py#L114-L129
We also already have the ability to generate a `conf.py` file from a `_config.yml` file:
https://github.com/executablebooks/jupyter-book/blob/aedee257645ee41906c4d64f66f71b7f0dc7acfa/jupyter_book/cli/main.py#L458
### Three ideas for implementation
There a few ways we could add this functionality:
1. **Support `conf.py`**. We could allow users to add a `conf.py` (maybe we'd call it `_config.py`?) that we'd point to during the Sphinx build. This would behave no differently from how Sphinx currently handles it.
2. **Generate a `conf.py` at build time, and add a `extraConfig` block**. Instead of using configuration over-rides, we could generate a **temporary `conf.py` file** that was created via the function above. We could then support a configuration block that would contain arbitrary Python code to be run, and that could over-ride / set configuration values (by being added to the end of the `conf.py` file. This is similar to [how JupyterHub uses `extraConfig`](https://zero-to-jupyterhub.readthedocs.io/en/latest/resources/reference.html#hub-extraconfig).
3. **Pre-process config.yml with jinja**. We could also add a pre-processing step before we parse the `config.yml` file. This would let users to something like [ansible style variable injection](https://github.com/executablebooks/jupyter-book/issues/1673#issuecomment-1085388535).
### Suggestion
After some discussion below, it seems like path 3 above has the most support for adding this functionality. Especially if we followed patterns that were already common in other frameworks, it would be a way to provide some dynamic configuration without supporting the total flexibility of a `conf.py` file.
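For illustration only, a minimal sketch of what path 3 could look like internally (placeholder names, not existing jupyter-book functions): render `_config.yml` through Jinja with the environment as context before the YAML is parsed.
```python
import os
from pathlib import Path

import yaml
from jinja2 import Template

def load_rendered_config(path: str = "_config.yml") -> dict:
    # Hypothetical pre-processing step: expose environment variables to the
    # config file as Jinja context, then parse the rendered YAML as usual.
    raw = Path(path).read_text()
    rendered = Template(raw).render(env=os.environ)
    return yaml.safe_load(rendered)

# A _config.yml could then contain, e.g.:
#   title: "{{ env.get('BOOK_TITLE', 'My Book') }}"
```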
### Tasks and updates
_No response_ | 1medium
|
Title: Why is there no options to set node and edge nonnull on connection field?
Body: I've been trying to set nonnull to node and edge on connection field
because a frontend engineer told me that he've got a lot of things to handler if node and edge are nullable.
is there a specific reason node and edge set to nullable? | 1medium
|
Title: Fix Frontend Failing Test: paddle - tensor.torch.Tensor.__gt__
Body: | 1medium
|
Title: EELS remove background doesn't work
Body: Hi.
I want to remove the EELS SI background using s.remove_background with a fixed energy window.
The "interactive" option works well (setting the energy window manually):
```highlossalign.remove_background(background_type = "Power law", zero_fill=True, fast=True)```
But when I set the energy window explicitly, it cannot remove the background:
```highlossalign.remove_background(signal_range=(825., 849.), background_type = "Power law", zero_fill=True, fast=True)```
(energy size(channel): 2048, offset 800eV)
How can I remove the background without setting the window manually? | 1medium
|
Title: [BUG] ModuleNotFoundError: No module named 'mars.lib.sparse.coo'
Body:
**Describe the bug**
It seems that the `SparseNDArray` type does not support `COONDArray` any more. `SparseNDArray` is a special data type implemented by Mars; like tensor or dataframe, it may be returned as the result of an operand.
For example, the `SparseNDArray` type may be returned to user:
``` python
raw = sps.random(10, 5, density=0.2)
arr = tensor(raw, chunk_size=3)
arr2 = arr.astype("i8")
res = arr2.execute().fetch() # {mars.lib.sparse.matrix.SparseMatrix: (10, 5)}
```
So, we should make it not only serializable by the Mars itself but also pickleable.
The `ModuleNotFoundError: No module named 'mars.lib.sparse.coo'` is raised when unpickling a `SparseMatrix`, the type `SparseMatrix` calls `__new__` first, and `SparseMatrix.__new__` calls super new `SparseNDArray.__new__`. But, the `SparseNDArray.__new__` is a special method, it construct different types according to the input params.
When unpickling, the input params of `SparseNDArray.__new__` are empty, so it falls through to the stale branch:
```python
def __new__(cls, *args, **kwargs):
    shape = kwargs.get("shape", None)
    if shape is not None and len(shape) == 1:
        from .vector import SparseVector
        return object.__new__(SparseVector)
    if len(args) == 1 and issparse(args[0]) and args[0].ndim == 2:
        from .matrix import SparseMatrix
        return object.__new__(SparseMatrix)
    else:
        # When unpickling, it goes here.
        from .coo import COONDArray
        return object.__new__(COONDArray)
```
**To Reproduce**
To help us reproduce this bug, please provide information below:
1. Your Python version 3.7.7
2. The version of Mars you use Latest master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
| 1medium
|
Title: DoH3 or HTTP3
Body: **Motivation**
We can utilize HTTP/3 in the DoH implementation. Cloudflare, Google and NextDNS servers already support this!
<img width="1125" alt="Screenshot 2024-01-03 at 01 23 10" src="https://github.com/rthalley/dnspython/assets/125150101/a760e0c8-8553-4f65-a303-faa393aeef97">
*NextDNS log*
**Describe the solution you'd like.**
Enable HTTP/3 in DoH by default, if not available, fallback to HTTP/2.
| 1medium
|
Title: Fix Ivy Failing Test: paddle - elementwise.divide
Body: | 2hard
|
Title: add JSON field example
Body: expose key, value as property and test response with pydantic | 1medium
|
Title: Fact caching with smart gathering can miss facts when plays use different gather_subset sets
Body: ### Summary
The lack of ability to set a global `gather_subset` means that when using fact caching with `gather_facts: smart`, the facts collected are determined by the `gather_subset` of the first play that runs. Subsequent plays that request different fact subsets via their own `gather_subset` configuration will not receive those additional facts because:
1. The first play/block/task caches its collected facts based on its `gather_subset`
2. Later plays/blocks/tasks see the facts are cached (due to smart gathering)
3. No new fact gathering occurs until the cache times out even though different subsets are requested
4. This leads to missing facts that were explicitly requested by later plays
This creates a potential issue where plays/blocks/tasks using the same cache location must maintain identical `gather_subset` configurations to ensure all required facts are available when using fact caching with smart gathering. It seems like there should either be a way to specify a global `gather_subset` or smart gathering should be able to determine if some new facts need to be added to the facts cache due to the subset being expanded on later plays.
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/facts/collector.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.18.1]
config file = None
configured module search path = ['/home/raddessi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/raddessi/.conda/envs/ansible-3.11/lib/python3.11/site-packages/ansible
ansible collection location = /home/raddessi/.ansible/collections:/usr/share/ansible/collections
executable location = /home/raddessi/.conda/envs/ansible-3.11/bin/ansible
python version = 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (/home/raddessi/.conda/envs/ansible-3.11/bin/python3.11)
jinja version = 3.1.5
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = code
PAGER(env: PAGER) = less
```
### OS / Environment
not relevant but confirmed on fedora 41 and debian 10
### Steps to Reproduce
I've set up an integration test to document the failure that you can see [at this branch](https://github.com/ansible/ansible/compare/devel...raddessi:ansible:devel.gather_subset_caching?expand=1), here is a high level summary:
env settings
```bash
ANSIBLE_GATHERING=smart
ANSIBLE_CACHE_PLUGIN=jsonfile
ANSIBLE_CACHE_PLUGIN_CONNECTION=./cache
```
playbook1
```yaml
# First play, facts cached here will be minimal
- hosts: testhost
  module_defaults:
    ansible.builtin.gather_facts:
      gather_subset: ["!all"]  # can be changed to ["!all", "hardware"] to resolve the issue
  tasks:
    - name: ensure facts are gathered
      assert:
        that:
          - ansible_facts is defined and 'fqdn' in ansible_facts
```
```yaml
# Second play, hardware facts not available despite being requested
- hosts: testhost
  module_defaults:
    ansible.builtin.gather_facts:
      gather_subset: ["hardware"]
  tasks:
    - name: ensure the hardware facts are present
      assert:
        that:
          - ansible_facts is defined and 'processor_cores' in ansible_facts
```
### Expected Results
I expected to be able to use facts that were specified but since the cache already exists it was returned as-is even though it only contains a subset of the facts that were requested.
### Actual Results
```console
TASK [ensure the hardware facts are present] ***********************************
fatal: [testhost]: FAILED! => {
"assertion": "ansible_facts is defined and 'processor_cores' in ansible_facts",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
PLAY RECAP *********************************************************************
testhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
NOTICE: To resume at this test target, use the option: --start-at gather_subset_caching
FATAL: Command "./runme.sh" returned exit status 2.
FATAL: Command "podman exec ansible-test-controller-uYaUDvjx /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.13 /root/ansible/bin/ansible-test integration --allow-destructive --containers '{}' --truncate 187 --color yes --host-path test/results/.tmp/host-a2osvri4 --metadata test/results/.tmp/metadata-yyow8ew_.json -- gather_subset_caching" returned exit status 1.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | 1medium
|
Title: AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'
Body: When I try to use easyocr on any image, I get this error:
AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'
According to (https://stackoverflow.com/questions/76616042/attributeerror-module-pil-image-has-no-attribute-antialias), the new version of PIL (10.0.0) no longer has ANTIALIAS, as it was deprecated and then removed.
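A workaround that appears to help until easyocr is updated (just a sketch, assuming Pillow >= 10; the image path is a placeholder): either pin `Pillow<10`, or alias the removed constant before easyocr resizes anything.
```python
import PIL.Image

# Pillow 10 removed Image.ANTIALIAS; LANCZOS is its documented replacement.
if not hasattr(PIL.Image, "ANTIALIAS"):
    PIL.Image.ANTIALIAS = PIL.Image.LANCZOS

import easyocr

reader = easyocr.Reader(["en"])
result = reader.readtext("some_image.png")  # placeholder path
```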
Full error:
File "...", line 8, in convert_img_to_text
result = reader.readtext(img_path)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\venv\Lib\site-packages\easyocr\easyocr.py", line 464, in readtext
result = self.recognize(img_cv_grey, horizontal_list, free_list,\
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\venv\Lib\site-packages\easyocr\easyocr.py", line 383, in recognize
image_list, max_width = get_image_list(h_list, f_list, img_cv_grey, model_height = imgH)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\venv\Lib\site-packages\easyocr\utils.py", line 613, in get_image_list
crop_img,ratio = compute_ratio_and_resize(crop_img,width,height,model_height)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\venv\Lib\site-packages\easyocr\utils.py", line 576, in compute_ratio_and_resize
img = cv2.resize(img,(int(model_height*ratio),model_height),interpolation=Image.ANTIALIAS) | 1medium
|
Title: Load elmo-constituency-parser from archive failed
Body: ## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [ ] I have verified that the issue exists against the `main` branch of AllenNLP.
- [ ] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [ ] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [ ] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [ ] I have included in the "Environment" section below the output of `pip freeze`.
- [ ] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
archive = load_archive(
"elmo-constituency-parser-2018.03.14.tar.gz"
)
predictor = Predictor.from_archive(archive, 'constituency-parser')
predictor.predict_json({"sentence": "This is a sentence to be predicted!"})
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
Traceback (most recent call last):
File "E:\Chuan\Documents\GitHub\allennlp\allennlp\models\archival.py", line 232, in load_archive
dataset_reader, validation_dataset_reader = _load_dataset_readers(
File "E:\Chuan\Documents\GitHub\allennlp\allennlp\models\archival.py", line 268, in _load_dataset_readers
dataset_reader = DatasetReader.from_params(
File "E:\Chuan\Documents\GitHub\allennlp\allennlp\common\from_params.py", line 638, in from_params
subclass, constructor_name = as_registrable.resolve_class_name(choice)
File "E:\Chuan\Documents\GitHub\allennlp\allennlp\common\registrable.py", line 207, in resolve_class_name
raise ConfigurationError(
allennlp.common.checks.ConfigurationError: 'ptb_trees' is not a registered name for 'DatasetReader'. If your registered class comes from custom code, you'll need to import the corresponding modules. If you're using AllenNLP from the command-line, this is done by using the '--include-package' flag, or by specifying your imports in a '.allennlp_plugins' file. Alternatively, you can specify your choices using fully-qualified paths, e.g. {"model": "my_module.models.MyModel"} in which case they will be automatically imported correctly.
python-BaseException
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS: Windows 10
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.9.5
allennlp 2.4.0
| 1medium
|
Title: the program does not open, but there are no errors
Body: Hello,
I try to run the program, but it doesn't start, and there are no errors. IDE returns exit code 0. System applications such as notepad, calculator are launched.
I thought it was a matter of permissions, so I put all the necessary programs and tools in Program Files. I created a system environment variable for the program I want to run, but that didn't help. The process of the program I am trying to start does not start; it is not in the task manager.
If I try to run the program through cmd, it just loads and moves to the next line. If I start notepad or the calculator, then everything opens. I always run my PyCharm session as Admin.
The code:
`from pywinauto.application import Application`
`app = Application(backend="uia").start("C:\\Program Files\\kinderi\\Sintech.Arm.exe")`
The program does not open. Returns exit code 0. No errors.
Thanks for any help.
| 1medium
|
Title: OSError: cannot load library '/var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib': /var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib: cannot open shared object file: No such file or directory. Additionally, ctypes.util.find_library() did not manage to locate a library called
Body: I'm trying to load tartiflette in an AWS Lambda. This is what is happening.
`tartiflette = "^1.3.1"`
`python 3.8`
<details>
<summary>Click to expand</summary>
```
[ERROR] OSError: cannot load library '/var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib': /var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib: cannot open shared object file: No such file or directory. Additionally, ctypes.util.find_library() did not manage to locate a library called '/var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib'
Traceback (most recent call last):
File "/var/lang/lib/python3.8/imp.py", line 234, in load_module
return load_source(name, filename, file)
File "/var/lang/lib/python3.8/imp.py", line 171, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 702, in _load
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/var/task/data_request_form_api/lambda_handler.py", line 8, in <module>
from .service import setup_graphql
File "/var/task/data_request_form_api/service.py", line 1, in <module>
from tartiflette import Resolver, create_engine, Engine
File "/var/task/tartiflette/__init__.py", line 5, in <module>
from tartiflette.engine import Engine
File "/var/task/tartiflette/engine.py", line 19, in <module>
from tartiflette.execution.collect import parse_and_validate_query
File "/var/task/tartiflette/execution/collect.py", line 11, in <module>
from tartiflette.language.parsers.libgraphqlparser import parse_to_document
File "/var/task/tartiflette/language/parsers/libgraphqlparser/__init__.py", line 1, in <module>
from .parser import parse_to_document
File "/var/task/tartiflette/language/parsers/libgraphqlparser/parser.py", line 35, in <module>
_LIB = _FFI.dlopen(f"{_LIBGRAPHQLPARSER_DIR}/libgraphqlparser.dylib")
File "/var/task/cffi/api.py", line 150, in dlopen
lib, function_cache = _make_ffi_library(self, name, flags)
File "/var/task/cffi/api.py", line 832, in _make_ffi_library
backendlib = _load_backend_lib(backend, libname, flags)
File "/var/task/cffi/api.py", line 827, in _load_backend_lib
raise OSError(msg)
```
</details>
| 2hard
|
Title: IndexError: Invalid key: 0 is out of bounds for size 0
Body: ### Describe the bug
I am trying to fine-tune llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
When I use the dataset given in the example, the training gets successfully completed (example dataset can be found [here](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)).
However when I use my own dataset which is in the same format as the example dataset, I get the below error (my dataset can be found [here](https://huggingface.co/datasets/kk2491/finetune_dataset_002)).

I see the files are being read correctly from the logs:

### Steps to reproduce the bug
1. Clone the [vertex-ai-samples](https://github.com/GoogleCloudPlatform/vertex-ai-samples) repository.
2. Run the [llama2-7b peft fine-tuning](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
3. Change the dataset `kk2491/finetune_dataset_002`
### Expected behavior
The training should complete successfully, and model gets deployed to an endpoint.
### Environment info
Python version : Python 3.10.12
Dataset : https://huggingface.co/datasets/kk2491/finetune_dataset_002
| 1medium
|
Title: lineplot of empty dataframe with hue in seaborn 0.13.0
Body: MWE
```
import pandas as pd
import seaborn as sns

df1 = pd.DataFrame({}, columns=["aa", "bb", "cc"]) # empty dataframe
# df1 = pd.DataFrame([(1, 2, 3), (2, 1, 3)], columns=["aa", "bb", "cc"]) # with this, it works
sns.lineplot(df1, x="aa", y="bb") # works
sns.lineplot(df1, x="aa", y="bb", hue="cc") # does not work
```
The error happens with seaborn 0.13.0, but not with 0.12.2:
```
File .../python3.10/site-packages/seaborn/relational.py:507, in lineplot(data, x, y, hue, size, style, units, palette, hue_order, hue_norm, sizes, size_order, size_norm, dashes, markers, style_order, estimator, errorbar, n_boot, seed, orient, sort, err_style, err_kws, legend, ci, ax, **kwargs)
504 color = kwargs.pop("color", kwargs.pop("c", None))
505 kwargs["color"] = _default_color(ax.plot, hue, color, kwargs)
--> 507 p.plot(ax, kwargs)
508 return ax
File .../python3.10/site-packages/seaborn/relational.py:274, in _LinePlotter.plot(self, ax, kws)
266 # TODO How to handle NA? We don't want NA to propagate through to the
267 # estimate/CI when some values are present, but we would also like
268 # matplotlib to show "gaps" in the line when all values are missing.
(...)
271
272 # Loop over the semantic subsets and add to the plot
273 grouping_vars = "hue", "size", "style"
--> 274 for sub_vars, sub_data in self.iter_data(grouping_vars, from_comp_data=True):
276 if self.sort:
277 sort_vars = ["units", orient, other]
File .../python3.10/site-packages/seaborn/_base.py:938, in VectorPlotter.iter_data(self, grouping_vars, reverse, from_comp_data, by_facet, allow_empty, dropna)
935 for var in grouping_vars:
936 grouping_keys.append(levels.get(var, []))
--> 938 iter_keys = itertools.product(*grouping_keys)
939 if reverse:
940 iter_keys = reversed(list(iter_keys))
TypeError: 'NoneType' object is not iterable
``` | 1medium
|
Title: Where is the update one click
Body: I cant see the new update avalibale on the one click installer website
IOPaint is outdated since v1

| 1medium
|
Title: flask-sqlalchemy session close seems not to work
Body: I have a question about flask-sqlalchemy, or more precisely about sqlalchemy.
When executing one function, process information is recorded in the database.
After each database write, I added db.session.close() to return the session to the pool,
but while the function is executing, I cannot connect to the database. Why is this happening?
def func(self):
    # stage 1:
    self.sub_func1()  # -> update process to db
    # stage 2:
    self.sub_func2()  # -> update process to db
    # stage 3:
    self.sub_func3()  # -> update process to db
    # stage 4:
    self.sub_func4()  # -> update process to db
    return result | 1medium
|
Title: Add TopK node to a pretrained Brevitas model
Body: We are working with FINN-ONNX, and we want the pretrained models from Brevitas that classify the MNIST images to output the index (class) instead of a probabilities tensor of dim 1x10. To our knowledge, the node responsible for this is the TopK.
Where do we have to add this layer, and what function can we add so the 'export_qonnx' would understand it as a TopK node?
The desired block is in the following image:

| 1medium
|
Title: PydanticOmit failing with duplicated union field
Body: ### Initial Checks
- [x] I confirm that I'm using Pydantic V2
### Description
When using a custom type that omits its JSON schema (by raising `PydanticOmit` in its `__get_pydantic_json_schema__` method), the schema generation behaves inconsistently. In a model with a single field of a union type, the JSON schema is generated successfully (omitting the custom type as intended). However, when the same custom type is used in multiple fields within one model, generating the JSON schema fails with a `PydanticOmit` exception.
### Example Code
```Python
from pydantic_core import PydanticOmit
from pydantic import BaseModel
class CustomSerializedType(BaseModel):
    @classmethod
    def __get_pydantic_json_schema__(
        cls, core_schema, handler,
    ):
        raise PydanticOmit

class SingleField(BaseModel):
    first_field: list[float | CustomSerializedType]

class DuplicatedField(BaseModel):
    first_field: list[float | CustomSerializedType]
    second_field: list[float | CustomSerializedType]
# This is fine
SingleField.model_json_schema()
"""
{'properties': {'first_field': {'items': {'type': 'number'},
                                'title': 'First Field',
                                'type': 'array'}},
 'required': ['first_field'],
 'title': 'SingleField',
 'type': 'object'}
"""
# This raises an error
DuplicatedField.model_json_schema()
"""
...
handler_func(schema_or_field, current_handler, js_modify_function)
535 def new_handler_func(
536 schema_or_field: CoreSchemaOrField,
537 current_handler: GetJsonSchemaHandler = current_handler,
538 js_modify_function: GetJsonSchemaFunction = js_modify_function,
539 ) -> JsonSchemaValue:
--> 540 json_schema = js_modify_function(schema_or_field, current_handler)
541 if _core_utils.is_core_schema(schema_or_field):
542 json_schema = populate_defs(schema_or_field, json_schema)
Cell In[20], line 9, in CustomSerializedType.__get_pydantic_json_schema__(cls, core_schema, handler)
5 @classmethod
6 def __get_pydantic_json_schema__(
7 cls, core_schema, handler,
8 ) -> JsonSchemaValue:
----> 9 raise PydanticOmit
PydanticOmit: PydanticOmit()
"""
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.6
pydantic-core version: 2.27.2
pydantic-core build: profile=release pgo=false
install path: .venv/lib/python3.12/site-packages/pydantic
python version: 3.12.7 (main, Oct 16 2024, 07:12:08) [Clang 18.1.8 ]
platform: macOS-15.2-arm64-arm-64bit
related packages: fastapi-0.115.6 mypy-1.15.0 pydantic-settings-2.6.1 typing_extensions-4.12.2
commit: unknown
``` | 1medium
|
Title: pytorch_lightning.utilities(module) and lightning_utilities (package)
Body: ### Outline & Motivation
In the future release, is it possible to recommend which one to use when both contains similar functions? e.g.,
usage of lightning-utilities 0.11.9 with strict linting/LSP support working
```
from lightning_utilities.core.rank_zero import rank_zero_only
```
usage of utilities in pytorch-lightning 2.5.0, not having linting/LSP support
```
pytorch_lightning.utilities.rank_zero_only # "utilities" is not a known attribute of module "pytorch_lightning"
```
### Pitch
_No response_
### Additional context
_No response_
cc @lantiga @justusschock | 1medium
|
Title: On mac, flask fab create-app fails until you deactivate / reactivate the venv
Body:
### Environment
Flask-Appbuilder version: 3.4.4
pip freeze output:
apispec==3.3.2
attrs==21.4.0
Babel==2.9.1
click==7.1.2
colorama==0.4.4
defusedxml==0.7.1
dnspython==2.2.0
email-validator==1.1.3
Flask==1.1.4
Flask-AppBuilder==3.4.4
Flask-Babel==2.0.0
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-OpenID==1.3.0
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
greenlet==1.1.2
idna==3.3
itsdangerous==1.1.0
Jinja2==2.11.3
jsonschema==4.4.0
MarkupSafe==2.0.1
marshmallow==3.14.1
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.26.1
prison==0.2.1
PyJWT==1.7.1
pyrsistent==0.18.1
python-dateutil==2.8.2
python3-openid==3.2.0
pytz==2021.3
PyYAML==6.0
six==1.16.0
SQLAlchemy==1.4.31
SQLAlchemy-Utils==0.38.2
Werkzeug==1.0.1
WTForms==2.3.3
### Describe the expected results
Tell us what should happen.
```python
flask fab create-app
```
and expect to provide app name etc
### Describe the actual results
Tell us what happens instead.
```pytb
"No such command: fab"
```
### Steps to reproduce
Do clean install on mac using pip install; activate the venv and try flask fab create-app
It fails
Then deactivate venv, and reactivate
Now it works | 1medium
|
Title: Change logs missing.
Body: Really appreciate the work being done by the contributors.
### Issue:
The version [0.2.2](https://pypi.org/project/pyppeteer/#history) on PyPI is missing change logs. How different it is from the code of 0.0.25 is something we need to find out by doing a diff.
### Desired Result.
A brief description of what changes were made to the API would suffice. Details like `Additions`, `Fixes`, `Deprecations` following https://keepachangelog.com/en/1.0.0/ would be a great benefit to the community here. | 1medium
|
Title: test_awatch_log is flaky
Body: The `test_awatch_log` is flaky and fails on slow systems and/or systems under heavy load. I can reproduce it by running two games (Krunker and SuperTuxKart) while simultaneously running the test on my laptop. What happens is that the number of messages containing "DEBUG" goes below 4 and the test thus fails.
You might wonder if this really is a problem - after all, you don't usually run multiple games while testing your code. The problem is that while packaging watchgod for Alpine Linux I experienced this test randomly failing on their continuous integration (CI) on certain arches (armhf, aarch64, s390x), presumably as a result of me not being the only person who uses these CI runners and the systems thus being under heavy load.
I don't have a proposed way to fix this, and I understand if it's something you don't want to fix, but I thought I would report it nonetheless. | 1medium
|
Title: Training crash when using XLA profiler on XLA accelerator and manual optimization
Body: ### Bug description
The training loop crashes when running with the XLA profiler and manual optimization.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
Training on XLAProfile + Manual Optimization on XLA Machine
```
### Error messages and logs
```
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/process.py", line 205, in _process_chunk
return [fn(*args) for args in chunk]
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/process.py", line 205, in <listcomp>
return [fn(*args) for args in chunk]
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper
return fn(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 78, in _run_thread_per_device
replica_results = list(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 71, in _thread_fn
return fn()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 187, in __call__
self.fn(runtime.global_ordinal(), *self.args, **self.kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/xla.py", line 141, in _wrapping_function
results = function(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 579, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _run
results = self._run_stage()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1030, in _run_stage
self.fit_loop.run()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 205, in run
self.advance()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 363, in advance
self.epoch_loop.run(self._data_fetcher)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 140, in run
self.advance(data_fetcher)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 252, in advance
batch_output = self.manual_optimization.run(kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/manual.py", line 94, in run
self.advance(kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/manual.py", line 114, in advance
training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 311, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 390, in training_step
return self.lightning_module.training_step(*args, **kwargs)
File "/mnt/disks/persist/ldm/ldm/models/autoencoder.py", line 438, in training_step
opt1.step()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py", line 153, in step
step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/ddp.py", line 270, in optimizer_step
optimizer_output = super().optimizer_step(optimizer, closure, model, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 238, in optimizer_step
return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/xla.py", line 75, in optimizer_step
xm.mark_step()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/core/xla_model.py", line 1056, in mark_step
torch_xla._XLAC._xla_step_marker(
RuntimeError: Expecting scope to be empty but it is [Strategy]XLAStrategy.training_step.1
Exception raised from ResetScopeContext at ../torch/csrc/lazy/core/ir_metadata.cpp:77 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f812737a897 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f812732ab25 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: torch::lazy::ScopePusher::ResetScopes() + 0xa5 (0x7f81136f7c55 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: torch_xla::XLAGraphExecutor::MarkStep(torch::lazy::BackendDevice const&) + 0x57 (0x7f7fc6920a87 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x4aeb60a (0x7f7fc66eb60a in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #5: <unknown function> + 0x4aebab6 (0x7f7fc66ebab6 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #6: <unknown function> + 0x4abd006 (0x7f7fc66bd006 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #7: python() [0x4fdc87]
<omitting python frames>
frame #12: python() [0x5099ce]
frame #15: python() [0x509b26]
frame #17: python() [0x509b26]
frame #19: python() [0x5099ce]
frame #21: python() [0x509b26]
frame #23: python() [0x509b26]
frame #41: python() [0x5099ce]
frame #43: python() [0x509b26]
frame #45: python() [0x509b26]
frame #49: python() [0x5cf883]
frame #51: python() [0x5c87f7]
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/disks/persist/ldm/main.py", line 753, in <module>
trainer.fit(model, data, ckpt_path=opt.resume_from_checkpoint if "resume_from_checkpoint" in opt else None)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 543, in fit
call._call_and_handle_interrupt(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/xla.py", line 98, in launch
process_context = xmp.spawn(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper
return fn(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 38, in spawn
return pjrt.spawn(fn, nprocs, start_method, args)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 211, in spawn
run_multiprocess(spawn_fn, start_method=start_method)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper
return fn(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 171, in run_multiprocess
replica_results = list(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 172, in <genexpr>
itertools.chain.from_iterable(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/process.py", line 575, in _chain_from_iterable_of_lists
for element in iterable:
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
RuntimeError: Expecting scope to be empty but it is [Strategy]XLAStrategy.training_step.1
Exception raised from ResetScopeContext at ../torch/csrc/lazy/core/ir_metadata.cpp:77 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f812737a897 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f812732ab25 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: torch::lazy::ScopePusher::ResetScopes() + 0xa5 (0x7f81136f7c55 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: torch_xla::XLAGraphExecutor::MarkStep(torch::lazy::BackendDevice const&) + 0x57 (0x7f7fc6920a87 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x4aeb60a (0x7f7fc66eb60a in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #5: <unknown function> + 0x4aebab6 (0x7f7fc66ebab6 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #6: <unknown function> + 0x4abd006 (0x7f7fc66bd006 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #7: python() [0x4fdc87]
<omitting python frames>
frame #12: python() [0x5099ce]
frame #15: python() [0x509b26]
frame #17: python() [0x509b26]
frame #19: python() [0x5099ce]
frame #21: python() [0x509b26]
frame #23: python() [0x509b26]
frame #41: python() [0x5099ce]
frame #43: python() [0x509b26]
frame #45: python() [0x509b26]
frame #49: python() [0x5cf883]
frame #51: python() [0x5c87f7]
```
### Environment
<details>
<summary>Current environment</summary>
```
- PyTorch Lightning Version (2.4.0):
- PyTorch XLA Version (2.4.0):
- PyTorch Version (2.4):
- Python version (3.10):
```
</details>
### More info
_No response_ | 2hard
|
Title: [Proposal] Expand slot management to resource management
Body: ## Motivation
Currently Mars uses slots for resource management and band allocation, which only considers cpu/gpu but not memory. Mars always allocates one slot, representing one cpu core or gpu, for a subtask. It works well most of the time. But there are some shortcomings, like:
* Subtasks that need less cpu are assigned more, which results in low cpu utilization and long execution time
* Subtasks that need more memory and less cpu can lead to node OOM
So we could develop more granular resource management and allocation to increase resource utilization, improve scheduling efficiency, and avoid OOM.
## Design
We propose a more general resource management which includes not only cpu/gpu but also memory, and even the estimated time of a subtask.
A subtask of Mars needs one slot but no other resource by default. We could add more, different types of resources to the management.
Obviously, we can involve memory first as follows:
```
class Resource:
    num_cpus: float
    num_gpus: float
    num_mem_bytes: float
```
With this we can expand slot management to resource management, and band allocation needs to consider both cpu/gpu and memory.
So we should develop a more complex resource management, going from a single simple resource (cpu/gpu) to multiple resources.
In addition, we can easily implement HBO if we have an external system which can recommend resources for subtasks based on history information.
If there is no external system, we can set the memory resource to 0, which degenerates to the original slot scheduler, or set a value through configuration to avoid OOM.
And later we can estimate the execution time of subtasks if the external HBO system can recommend subtask execution times.
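To make the idea concrete, a rough sketch (not the final API) of how a multi-dimensional `Resource` could support the comparison and arithmetic that band allocation needs:
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    num_cpus: float = 0.0
    num_gpus: float = 0.0
    num_mem_bytes: float = 0.0

    def __sub__(self, other: "Resource") -> "Resource":
        return Resource(
            self.num_cpus - other.num_cpus,
            self.num_gpus - other.num_gpus,
            self.num_mem_bytes - other.num_mem_bytes,
        )

    def __ge__(self, other: "Resource") -> bool:
        # a band can host a subtask only if every dimension is sufficient
        return (
            self.num_cpus >= other.num_cpus
            and self.num_gpus >= other.num_gpus
            and self.num_mem_bytes >= other.num_mem_bytes
        )

# e.g. a subtask needing 1 cpu and 2 GiB fits on a band with spare capacity
band_free = Resource(num_cpus=4, num_mem_bytes=8 * 1024**3)
subtask_request = Resource(num_cpus=1, num_mem_bytes=2 * 1024**3)
assert band_free >= subtask_request
remaining = band_free - subtask_request
```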
## Plan
In order to implement this proposal, we plan to do:
* Add physical resource management which has been in #2731
* Add a logic id for subtask which represents a unique subtask and in different submits the same subtask has same logic id which has been in #2575
* Add a logic key for tileable graph which just like subtask logic key and this is for HBO in #2961
* Introduce resource management and bands allocation #2846
| 2hard
|
Title: How do you start a project from scratch?
Body: Are there docs for using `zappa init` without an existing python project? | 0easy
|
Title: SDXL InPainting: Mask blur option is negated by forced binarization.
Body: The SDXL InPainting pipeline's documentation suggests using `pipeline.mask_processor.blur()` for creating soft masks, but this functionality is effectively broken due to the implementation order. Please let me know if I'm missing something here. Based on my testing, whether I pass in an already-blurred mask or blur it with the built-in method, the result still shows a solid seam as if no blur was applied.
The mask processor is initialized with forced binarization:
```
self.mask_processor = VaeImageProcessor(
vae_scale_factor=self.vae_scale_factor,
do_normalize=False,
do_binarize=True, # Forces binarization
do_convert_grayscale=True
)
```
When processing masks, any blur effect is applied before binarization, which then converts all values back to pure black and white and completely negates the blur (if I'm not mistaken). `VaeImageProcessor` defaults `do_binarize` to `False`, but when the mask processor is initialized for this pipeline it is set to `True`.
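For reference, the workaround I am experimenting with is to swap in a non-binarizing mask processor (a rough sketch, not an officially supported approach; the checkpoint name, blur factor and input images are placeholders):
```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.image_processor import VaeImageProcessor

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Replace the default mask processor with one that keeps soft mask values
# instead of thresholding them back to 0/1.
pipe.mask_processor = VaeImageProcessor(
    vae_scale_factor=pipe.vae_scale_factor,
    do_normalize=False,
    do_binarize=False,  # the only change vs. the pipeline default
    do_convert_grayscale=True,
)

# init_image and mask_image are PIL images loaded elsewhere
soft_mask = pipe.mask_processor.blur(mask_image, blur_factor=33)
result = pipe(prompt="a tabby cat", image=init_image, mask_image=soft_mask).images[0]
```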
Relevant files:
diffusers/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py[326]
diffusers/image_processor.py[88][276][523]
### Reproduction
Inpaint pipeline config using either your own blurred image or the built in blur method according to documentation. It's fairly self explanatory and I don't have a minimal script to share.
### System Info
diffusers==0.32.2
### Who can help?
@yiyixuxu @DN6 | 1medium
|
Title: Local ChatGPT server connection randomly timing out
Body: ### Description
When I try to access my local OpenAI-compatible API using my local IP and port, sometimes the connection stops working and I have to restart the AGiXT backend to make it work again.
The API I'm using is RWKV Runner (the RWKV World model didn't work with oobabooga), which I noticed does not list the embedder in the models endpoint but does support embeddings.
The API never sees the HTTP request, and the backend just reports that the connection timed out.
### Steps to Reproduce the Bug
1. Locally run an OpenAI compatible API (Preferably RWKV Runner)
2. Setup the agent
3. Create a conversation
4. Click "Send" in chat mode about 3-5 times and wait for response
### Expected Behavior
After clicking "Send", the connection should not repeatedly time out and the backend should connect to the API.
### Operating System
- [ ] Linux
- [X] Microsoft Windows
- [ ] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [x] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [X] Using docker compose
- [ ] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | 1medium
|
Title: How to compute loss using eval mode in val. py file for YOLOv5
Body: ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Due to research requirements, I need to calculate the loss function value for each input image with the YOLOv5 model in `eval` mode.
I modified the `run` function in the `val.py` file:
the `compute_loss` variable is specified as `ComputeLoss(model)` in it.
```python
# Configure
model.eval()
compute_loss = ComputeLoss(model)
cuda = device.type != "cpu"
is_coco = isinstance(data.get("val"), str) and data["val"].endswith(f"coco{os.sep}val2017.txt") # COCO dataset
nc = 1 if single_cls else int(data["nc"]) # number of classes
iouv = torch.linspace(0.5, 0.95, 10, device=device) # iou vector for [email protected]:0.95
niou = iouv.numel()
```
The following error will occur:
```python
Traceback (most recent call last):
File "val.py", line 626, in <module>
main(opt)
File "val.py", line 597, in main
run(**vars(opt))
File "xxxxxxxxx/.local/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "val.py", line 299, in run
compute_loss = ComputeLoss(model)
^^^^^^^^^^^^^^^^^^
File "utils/loss.py", line 115, in __init__
h = model.hyp # hyperparameters
^^^^^^^^^
File "xxxxxxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
raise AttributeError(
AttributeError: 'DetectMultiBackend' object has no attribute 'hyp'
```
When I imitate the training mode and add the 'hyp' attribute to the YOLOv5 model using 'data/hyps/hyp.scratch-low.yaml', the following error occurs:
```python
Traceback (most recent call last):
File "val.py", line 626, in <module>
main(opt)
File "val.py", line 597, in main
run(**vars(opt))
File "xxxxxxx/.local/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "val.py", line 299, in run
compute_loss = ComputeLoss(model)
^^^^^^^^^^^^^^^^^^
File "utils/loss.py", line 129, in __init__
m = de_parallel(model).model[-1] # Detect() module
~~~~~~~~~~~~~~~~~~~~~~~~^^^^
TypeError: 'DetectionModel' object is not subscriptable
```
I really need the loss value. I look forward to your reply and would be extremely grateful.
If it's not possible to achieve the goal by directly modifying `val.py`, then using
```python
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolo.pt")  # load custom weights via torch.hub
```
or
```python
from ultralytics import YOLO
model = YOLO("yolov5.pt")
```
or any other method that can achieve the goal would also work for me. I look forward to your reply and would greatly appreciate it.
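For context, the direction I'm currently experimenting with (an untested sketch; the `attempt_load` usage, the weights file and the hyp file are my own assumptions, so please correct me) is to load the raw `DetectionModel` instead of `DetectMultiBackend`, so that `ComputeLoss` can find `model.hyp` and `model.model[-1]`:
```python
import yaml
import torch
from models.experimental import attempt_load  # returns the raw DetectionModel, not DetectMultiBackend
from utils.loss import ComputeLoss

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = attempt_load("yolov5s.pt", device=device, fuse=False)

# attach hyperparameters so ComputeLoss(model) can read model.hyp
with open("data/hyps/hyp.scratch-low.yaml") as f:
    model.hyp = yaml.safe_load(f)

model.eval()
compute_loss = ComputeLoss(model)

# imgs: (N, 3, H, W) float tensor; targets: (M, 6) tensor, both produced by the val dataloader
with torch.no_grad():
    _, train_out = model(imgs)  # in eval mode the model returns (inference_out, train_out)
    loss, loss_items = compute_loss(train_out, targets)
```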
### Additional
_No response_ | 1medium
|
Title: [BUG] shift_auto_shape_color config for semantic segmentation on Windows 10 not working
Body: - OS: Windows 10
- Labelme Version: 4.2.10
The "shift_auto_shape_color" config doesn't seem to be working. I'm on Windows 10 and am running this command (based on the semantic segmentation example):
```
labelme data_annotated --labels labels.txt --nodata --validatelabel exact --config
'{shift_auto_shape_color: -2}'
```
I end up getting this error:
```
usage: labelme [-h] [--version] [--reset-config]
[--logger-level {debug,info,warning,fatal,error}]
[--output OUTPUT] [--config CONFIG] [--nodata] [--autosave]
[--nosortlabels] [--flags FLAGS] [--labelflags LABEL_FLAGS]
[--labels LABELS] [--validatelabel {exact,instance}]
[--keep-prev] [--epsilon EPSILON]
[filename]
labelme: error: unrecognized arguments: -2}'
```
Is this a bug or something wrong with how I'm using it? Thanks. | 1medium
|
Title: [Bug]: Matplotlib and Herbie
Body: ### Bug summary
I am having an issue regarding MPL when using Herbie lately. This has been happening the past few weeks
### Code for reproduction
```Python
!pip install xarray matplotlib pygrib numpy pandas basemap cartopy metpy Herbie-data eccodes==2.38.3
from herbie import Herbie
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
import cartopy
import math
import metpy
from herbie.toolbox import EasyMap, pc, ccrs
from herbie import paint
import metpy.calc as mpcalc
```
### Actual outcome
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-96bd97320032>](https://localhost:8080/#) in <cell line: 0>()
6 import math
7 import metpy
----> 8 from herbie.toolbox import EasyMap, pc, ccrs
9 from herbie import paint
10 import metpy.calc as mpcalc
3 frames
[/usr/local/lib/python3.11/dist-packages/mpl_toolkits/axes_grid1/inset_locator.py](https://localhost:8080/#) in InsetPosition()
16 @_api.deprecated("3.8", alternative="Axes.inset_axes")
17 class InsetPosition:
---> 18 @_docstring.dedent_interpd
19 def __init__(self, parent, lbwh):
20 """
AttributeError: module 'matplotlib._docstring' has no attribute 'dedent_interpd'
### Expected outcome
The outcome expected is that it would run smoothly.
### Additional information
It have worked in 3.8.4 before I believe.
### Operating system
_No response_
### Matplotlib Version
3.8.4
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
pip | 1medium
|
Title: Bug when using Lasagne `mask_input` parameter
Body: When initializing layers, the `incoming` and `incomings` arguments are resolved when they happen to be strings. However, those are not the only ones that may reference other layers. The `mask_input` parameter from recurrent layers also references another layer. Therefore, `initialize_layers` should resolve that too; otherwise the string will simply be passed on, causing a Lasagne error.
There may be other cases in Lasagne, I'm not sure. | 1medium
|
Title: Declarative partitioning support
Body: Is there a way to mention partition key in the model? | 1medium
|
Title: I want to implement a multi-task network for segmentation and keypoints .what do i need to do
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I want to implement a multi-task network for segmentation and keypoints .what do i need to do
### Additional
_No response_ | 1medium
|
Title: Fine-tune the model.
Body: Thank you very much for your guidance. I would like to fine-tune the model using my own custom dataset. Could you please provide the relevant training code? | 1medium
|
Title: How can I train two models at the same time?
Body: ### Is there an existing issue for this bug?
- [X] I have searched the existing issues
### 🐛 Describe the bug
The official documentation gives an example of training a single model:
```
colossalai.launch(...)
plugin = GeminiPlugin(...)
booster = Booster(precision='fp16', plugin=plugin)
model = GPT2()
optimizer = HybridAdam(model.parameters())
dataloader = plugin.prepare_dataloader(train_dataset, batch_size=8)
lr_scheduler = LinearWarmupScheduler()
criterion = GPTLMLoss()
model, optimizer, criterion, dataloader, lr_scheduler = booster.boost(model, optimizer, criterion, dataloader, lr_scheduler)
for epoch in range(max_epochs):
for input_ids, attention_mask in dataloader:
outputs = model(input_ids.cuda(), attention_mask.cuda())
loss = criterion(outputs.logits, input_ids)
booster.backward(loss, optimizer)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
```
If my training involves two models, model1 and model2, that both need to be trained, how should I set up the training?
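Something like the following is what I have in mind, but it is only a sketch of what I imagine; I have not verified that calling `booster.boost` twice like this is actually supported:
```python
booster = Booster(precision='fp16', plugin=plugin)

model1, model2 = GPT2(), GPT2()
optimizer1 = HybridAdam(model1.parameters())
optimizer2 = HybridAdam(model2.parameters())

model1, optimizer1, criterion, dataloader, lr_scheduler = booster.boost(
    model1, optimizer1, criterion, dataloader, lr_scheduler)
model2, optimizer2, _, _, _ = booster.boost(model2, optimizer2)

for epoch in range(max_epochs):
    for input_ids, attention_mask in dataloader:
        loss1 = criterion(model1(input_ids.cuda(), attention_mask.cuda()).logits, input_ids)
        loss2 = criterion(model2(input_ids.cuda(), attention_mask.cuda()).logits, input_ids)
        booster.backward(loss1, optimizer1)
        booster.backward(loss2, optimizer2)
        optimizer1.step(); optimizer2.step()
        lr_scheduler.step()
        optimizer1.zero_grad(); optimizer2.zero_grad()
```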
### Environment
_No response_ | 1medium
|
Title: DOC: dtype member docstrings are not tested
Body: ### Issue with current documentation:
Over at #28001 we discovered that `np.dtype.kind` is not being tested via doctests. I think the problem is in doctests itself, where [it only checks certain items in `obj.__dict__`](https://github.com/python/cpython/blob/7900a85019457c14e8c6abac532846bc9f26760d/Lib/doctest.py#L1064):
- staticmethod, classmethod, property
- inspect.isroutine, inspect.isclass
In the case at hand, `np.dtype.kind` is a member, so it is not collected for testing.
### Idea or request for content:
We should find a work-around, as doctest is a part of the python stdlib, so we cannot simply upgrade the version. cc @ev-br | 1medium
|
Title: [PR] Speed up e2e tests and make them exiting gracefully
Body: > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2019-10-28 17:19:13+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/216
> Merged by [nolar](https://github.com/nolar) at _2019-11-06 14:31:06+00:00_
> Issue : #13 #59
## Description
Improve e2e tests to wait for the stop-words in the logs instead of just waiting for the time. It was quite common that the e2e tests do not fit into the empirically guessed timings, so the timings had to be increased far above what was normally needed — thus slowing the e2e tests.
This became even more important for the tests that contain the artificial delays, such as sleep, temporary errors with delays, or arbitrary exceptions with the default retry delay (even if mocked).
Now, the default delay is 10 seconds, but the tests continue as soon as they see the specially defined stop-words for each stage (creation, deletion; later: startup, cleanup).
In addition, the `KopfRunner` was improved to stop the background operator gracefully instead of the forced cancellation (which had no graceful period).
## Types of Changes
- Refactor/improvements
| 1medium
|
Title: Graphene Django is incompatible with django-filters 2.0.0
Body: When using graphene-django along with `django-filter` 2.0.0 I get an error trying to use the graphql endpoint.
A brief example:
```
class Query(object):
projects = DjangoFilterConnectionField(
MyProjectNode, filterset_class=MyProjectFilterSet)
```
This is the error:
```
`GrapheneMyProjectFilterSet.filter_for_reverse_field` has been removed. `GrapheneMyProjectFilterSet.filter_for_field` now generates filters for reverse fields. See: https://django-filter.readthedocs.io/en/master/guide/migration.html
``` | 1medium
|
Title: Server TLS handshake failed. Certificate verify failed: unable to get local issuer certificate
Body: #### Problem Description
IE and some apps encounts the "Server TLS handshake failed. Certificate verify failed: unable to get local issuer certificate" error. But it works in Chrome
#### Steps to reproduce the behavior:
1. Download the certificates through the mimt.it. Import the certificates to "Trused Root certificates".
2. Set the proxy in network. The proxy address is 127.0.0.1:8080.
3. Start the Mitmproxy by "mitmweb"
4. I can get the https record when I open the website in Chrome
5. But when I open IE or other app. The log shows "Server TLS handshake failed. Certificate verify failed: unable to get local issuer certificate"
#### System Information
Windows 10
Mitmproxy 10.1.3



| 1medium
|
Title: Headless Logout should return 200 instead of 401
Body: I find it a bit unusual that the Headless Logout endpoint returns 401 on a successful logout. Shouldn't it return 200 instead? I am not an expert on this topic by any means - so please feel free to enlighten me! :) | 1medium
|
Title: Adding intermediate information in a custom augmentation
Body: Apologies in advance if a version of this has been asked before, but I wasn't able to find any info. I have a custom augmentation that takes an image and a bounding box, expands the bbox randomly within limits and then crops. If i want to also access the expanded bbox that was used, how can I get that information from the output? For reference, here's the basic code skeleton. Assume `crop`, `expand` and `jitter_bbox` functions exist, and that cases where expansions protrude beyond image boundaries are handled:
```
class RandomExpansion(A.DualTransform):
def __init__(self,
expansion_limits=[0.0, 0.5],
always_apply=False,
p=0.5,
):
super(RandomExpansion, self).__init__(always_apply, p)
self.expansion_limits = expansion_limits
def apply(self, np_img, x_min, x_max, y_min, y_max, **params):
h, w = np_img.shape[:2]
exp_bbox = np.array([x_min, y_min, x_max, y_max])
return crop(np_img, exp_bbox)
def apply_to_bbox(self, bbox, **params):
x_min = np.clip(bbox[0], 0.0, 1.0)
y_min = np.clip(bbox[1], 0.0, 1.0)
x_max = np.clip(bbox[2], 0.0, 1.0)
y_max = np.clip(bbox[3], 0.0, 1.0)
return (x_min, y_min, x_max, y_max)
@property
def targets_as_params(self):
return ["image", "bboxes"]
def get_params_dependent_on_targets(self, params):
h, w = params["image"].shape[:2]
norm_bbox = params["bboxes"][0]
bbox = denormalize_bbox(norm_bbox, h, w)[:4]
# jitter the bbox randomly
bbox = jitter_bbox(bbox)
# expand randomly
exp_factor = np.random.uniform(
self.expansion_limits[0], self.expansion_limits[1]
)
exp_bbox = expand(bbox, exp_factor)
ex1, ey1, ex2, ey2 = exp_bbox
return {
"x_min": ex1,
"y_min": ey1,
"x_max": ex2,
"y_max": ey2,
}
```
Ideally, after wrapping this with `A.Compose`, I'd do something like `out = tfm(image=<np_img>, bboxes=<bbox>)` and would want `out` to also contain the `exp_bbox` being referenced above. Is there a way to do this?
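The hacky workaround I'm using for now is to stash the expanded box on the transform instance inside `get_params_dependent_on_targets` and read it back after the call (a sketch; fine for my single-threaded use, but it feels wrong):
```python
# inside RandomExpansion.get_params_dependent_on_targets, right before the return:
#     self.last_exp_bbox = exp_bbox  # stash the expanded box for later retrieval

tfm = A.Compose(
    [RandomExpansion(expansion_limits=[0.0, 0.5], p=1.0)],
    bbox_params=A.BboxParams(format="albumentations"),
)
out = tfm(image=np_img, bboxes=[bbox])  # bbox = (x_min, y_min, x_max, y_max, class_label), normalized
exp_bbox = tfm.transforms[0].last_exp_bbox  # the expanded bbox actually used for the crop
```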
| 1medium
|
Title: Project: UI Revamp
Body: ### Short Description
UI isn't up to date. Reworking UX is complex, it'll will take time and effort. If we're seeing value in modernizing UI so it's prettier, not getting deep in rethinking flows and interactions, it's a comparably low hanging fruit.
### Problem hypothesis
Old looking UI can scare off newcomers as it communicates a lag. User would feel better when working with modern UI.
### Value
1. Good [first impression](https://thestory.is/en/journal/good-first-impression-website/) is easier to achieve with modernized UI.
2. Customer satisfaction will grow as it's a pleasure to work with modern looking UI. ([Aesthetics role in user satisfaction](https://www.researchgate.net/publication/221325046_User_Satisfaction_Aesthetics_and_Usability_Beyond_Reductionism))
3. With nice looking UI we'd be able to present CKAN better to new users, maintainers.
### Desired outcomes
- Increase in customer satisfaction for data publishers
- Increase in conversion rate from Prospect to Customer
### User needs
TBD
### Technical needs & known limitations
😎 Need your input, guys on what designer should know before designing new screens. Or even better - what are prerequisites to design UI that will be easier to implement.
### Costs
TBD
### Validation
TBD
| 1medium
|
Title: chat frontend no longer active, fix readme
Body: From Readme:
> How To Try It Out
> Chatting with the AI
>
> The chat frontend is now live [here](https://open-assistant.io/chat). Log in and start chatting! Please try to react with a thumbs up or down for the assistant's responses when chatting.
but the link now leads to the `OpenAssistant has finished!` page, without allowing you to try the model | 0easy
|
Title: Parsing Dash initial_arguments broken in v1.6.1
Body: I've identified a regression in release 1.6.1 that causes a bug when parsing `initial_arguments` as a serialized string.
We have an app that uses `initial_arguments` as follows. The `dash_initial_arguments` is a *string* of serialized JSON that is generated via `views.py` and passed to the Django template via `context`.
```
{% plotly_app name="MyApp" ratio=1 initial_arguments=dash_initial_arguments %}
```
Prior to v1.6.1, this worked as intended. However, there was a [change to dash_wrapper.py](https://github.com/GibbsConsulting/django-plotly-dash/compare/v1.6.0...v1.6.1#diff-7b3d671859d84ee8816d9e86a0705b5e13e3f3b49dc12a2a6aa4caa7a290f89aR466) in v1.6.1 that removed the JSON deserialization logic. Specifically, this change occurred in https://github.com/GibbsConsulting/django-plotly-dash/commit/cddf57559a8dcd12d1cdbb42d95c48b29678ee11.

`initial_arguments` is still a string, but since the parsing has been removed, this now results in the following error:
```
ValueError: dictionary update sequence element #0 has length 1; 2 is required
```
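For reference, my current workaround is to pass the arguments to the template as a dict rather than a serialized string, roughly like this (a sketch of our view code with a made-up payload):
```python
# views.py (workaround): pass a dict instead of json.dumps(...), so no parsing is needed
from django.shortcuts import render

def dashboard(request):
    dash_initial_arguments = {"my-input": {"value": 42}}  # illustrative payload
    return render(
        request,
        "dashboard.html",
        {"dash_initial_arguments": dash_initial_arguments},
    )
```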
@sdementen or @sebastiendementen do you have any context for why this parsing logic was removed? | 1medium
|
Title: Move the old Jina/Docarray relation docs from DocArray documenation to the jina one
Body: # Context
DocArray is moving into its own organization and reference to the jina project are slowly beeing removed.
Therefore we will loose this documentation at some point in docarray : https://docarray.jina.ai/fundamentals/jina-support/
We need to move to content to the Jina documentation as it is still relevant.
It is more than copy pasting, work need to be done in the working because it is not jina info in docarray but docarray info in jina. The content is mostly the same though | 1medium
|
Title: [CG, Core] Illegal memory access with Ray 2.44 and vLLM v1 pipeline parallelism
Body: ### What happened + What you expected to happen
We got the following errors when running vLLM v1 PP>1 with Ray 2.44. It was working fine with Ray 2.43.
```
ERROR 03-21 10:34:30 [core.py:343] File "/home/ray/default/vllm/vllm/v1/worker/gpu_model_runner.py", line 1026, in execute_model
ERROR 03-21 10:34:30 [core.py:343] self.intermediate_tensors[k][:num_input_tokens].copy_(
ERROR 03-21 10:34:30 [core.py:343] RuntimeError: CUDA error: an illegal memory access was encountered
ERROR 03-21 10:34:30 [core.py:343] CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
ERROR 03-21 10:34:30 [core.py:343] For debugging consider passing CUDA_LAUNCH_BLOCKING=1
ERROR 03-21 10:34:30 [core.py:343] Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions / Dependencies
- Python 3.11
- CUDA 12.4
- NVIDIA L4 / L40S GPUs
- Ray 2.44
- vLLM 0.8.1 (or any newer commits)
### Reproduction script
```python
from vllm import LLM, SamplingParams
# Sample prompts.
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.0, max_tokens=50)
# Create an LLM.
llm = LLM(
model="Qwen/Qwen2.5-0.5B-Instruct",
distributed_executor_backend="ray",
pipeline_parallel_size=2,
enforce_eager=False,
)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
Run the script with
```
VLLM_USE_V1=1 python run.py
```
### Issue Severity
High: It blocks me from completing my task. | 2hard
|
Title: Code for VisionT5Model
Body: ### Feature request
So right now you can't use T5 as decoder block in VisionEncoderDecoderModel , I wrote a code here which almost does that trying to get some help if it covers everything I need and can use it directly I am planning to use it for a OCR code base
``` python
import copy
import torch
import torch.nn as nn
from torch.nn import CrossEntropyLoss
from typing import Optional, Tuple, Union
from transformers import (
PreTrainedModel,
GenerationMixin,
VisionEncoderDecoderConfig,
AutoModel,
T5Config,
ViTModel
)
from transformers.models.t5.modeling_t5 import T5Stack  # T5Stack is not exported from the top-level transformers namespace
from transformers.modeling_outputs import Seq2SeqLMOutput
class VisionT5Model(PreTrainedModel, GenerationMixin):
"""
A vision-text model using a ViT-like encoder and a T5 decoder stack.
It mimics the design of VisionEncoderDecoderModel but replaces the decoder
with a T5 decoder. Useful for tasks like OCR, image captioning, etc.
"""
config_class = VisionEncoderDecoderConfig
base_model_prefix = "vision_t5"
main_input_name = "pixel_values"
def __init__(self, config: VisionEncoderDecoderConfig):
"""
Args:
config (VisionEncoderDecoderConfig):
Configuration for the vision-encoder–text-decoder model.
- config.encoder should be a vision config (e.g. ViTConfig)
- config.decoder should be a T5Config
"""
super().__init__(config)
# ----------------------
# 1) Load the Vision Encoder
# ----------------------
self.encoder = ViTModel(config.encoder)
# Make sure it does NOT have a "head" for classification etc.
if self.encoder.get_output_embeddings() is not None:
raise ValueError("The encoder should not have a LM head; please use a bare vision backbone.")
# ----------------------
# 2) Build the T5 decoder stack (no encoder part!)
# ----------------------
# We copy the T5 config from config.decoder
# Then ensure is_decoder=True, is_encoder_decoder=False, etc.
t5_decoder_config = T5Config.from_dict(config.decoder.to_dict())
t5_decoder_config.is_decoder = True
t5_decoder_config.is_encoder_decoder = False
t5_decoder_config.num_layers = config.decoder.num_layers
# If you want cross-attention in T5, it must have `add_cross_attention=True`.
# Usually T5's is_decoder implies that anyway, but just to be safe:
t5_decoder_config.add_cross_attention = True
self.decoder = T5Stack(t5_decoder_config)
# Optionally, if the hidden sizes differ, we need a projection:
if self.encoder.config.hidden_size != t5_decoder_config.d_model:
self.enc_to_dec_proj = nn.Linear(
self.encoder.config.hidden_size, t5_decoder_config.d_model, bias=False
)
else:
self.enc_to_dec_proj = None
# ----------------------
# 3) Final LM head (same as T5's)
# ----------------------
self.lm_head = nn.Linear(t5_decoder_config.d_model, t5_decoder_config.vocab_size, bias=False)
if t5_decoder_config.tie_word_embeddings:
self.lm_head.weight = self.decoder.embed_tokens.weight
self.model_dim = t5_decoder_config.d_model # keep track if we want the T5 scaling
# Initialize weights, etc.
self.post_init()
def get_encoder(self):
return self.encoder
def get_decoder(self):
return self.decoder
def get_input_embeddings(self):
"""By convention, the 'input embeddings' come from the decoder if needed."""
return self.decoder.embed_tokens
def set_input_embeddings(self, new_embeddings):
self.decoder.set_input_embeddings(new_embeddings)
def get_output_embeddings(self):
return self.lm_head
def set_output_embeddings(self, new_embeddings):
self.lm_head = new_embeddings
def forward(
self,
pixel_values: torch.FloatTensor,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.BoolTensor] = None,
encoder_outputs: Optional[Tuple[torch.FloatTensor]] = None,
past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
labels: Optional[torch.LongTensor] = None,
use_cache: Optional[bool] = None,
return_dict: Optional[bool] = None,
**decoder_kwargs
) -> Union[Seq2SeqLMOutput, Tuple[torch.FloatTensor]]:
"""
pixel_values: (batch, channels, height, width)
The images to encode (e.g. from ViTFeatureExtractor).
decoder_input_ids: (batch, tgt_seq_len)
Input tokens to the T5 decoder.
labels: (batch, tgt_seq_len)
If given, we compute LM loss by teacher-forcing and produce CrossEntropyLoss.
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
use_cache = use_cache if use_cache is not None else self.config.decoder.use_cache
# 1) Run the vision encoder if needed
if encoder_outputs is None:
encoder_outputs = self.encoder(pixel_values=pixel_values, return_dict=True)
# encoder_outputs.last_hidden_state shape => (batch, seq_len, hidden_size)
hidden_states = encoder_outputs.last_hidden_state
# Possibly project to match T5 dimension
if self.enc_to_dec_proj is not None:
hidden_states = self.enc_to_dec_proj(hidden_states)
# 2) Prepare decoder inputs
# If we have labels but no decoder_input_ids, shift-right internally
if labels is not None and decoder_input_ids is None:
# Standard T5 shift-right:
decoder_input_ids = self._shift_right(labels)
# T5 decoder forward
decoder_outputs = self.decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=hidden_states,
encoder_attention_mask=None, # If you want to mask out padding in hidden_states, pass it here
past_key_values=past_key_values,
use_cache=use_cache,
return_dict=True,
**decoder_kwargs,
)
sequence_output = decoder_outputs[0] # (batch, tgt_len, d_model)
# 3) Final LM head
# T5 typically scales by d_model^-0.5 if tie_word_embeddings = True,
# but you can do that if needed.
if self.config.decoder.tie_word_embeddings:
sequence_output = sequence_output * (self.model_dim ** -0.5)
logits = self.lm_head(sequence_output)
loss = None
if labels is not None:
# Compute standard LM loss
loss_fct = CrossEntropyLoss(ignore_index=-100)
loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
if not return_dict:
# Return (loss, logits, past, decoder_outputs, encoder_outputs)
out = (logits,) + decoder_outputs[1:] + (encoder_outputs,)
return ((loss,) + out) if loss is not None else out
return Seq2SeqLMOutput(
loss=loss,
logits=logits,
past_key_values=decoder_outputs.past_key_values,
decoder_hidden_states=decoder_outputs.hidden_states,
decoder_attentions=decoder_outputs.attentions,
cross_attentions=decoder_outputs.cross_attentions,
encoder_last_hidden_state=hidden_states,
encoder_hidden_states=encoder_outputs.hidden_states,
encoder_attentions=encoder_outputs.attentions,
)
def prepare_inputs_for_generation(
self,
decoder_input_ids,
past_key_values=None,
encoder_outputs=None,
**kwargs,
):
"""
During generation, the `generate()` method calls this to assemble the inputs for each step.
"""
if past_key_values is not None:
# we only need to pass the last token of decoder_input_ids
decoder_input_ids = decoder_input_ids[:, -1:].clone()
return {
"pixel_values": None, # not needed if `encoder_outputs` is already computed
"decoder_input_ids": decoder_input_ids,
"past_key_values": past_key_values,
"encoder_outputs": encoder_outputs,
"use_cache": kwargs.get("use_cache"),
}
def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor) -> torch.Tensor:
return self._shift_right(labels)
def _reorder_cache(self, past_key_values, beam_idx):
# if decoder past is not included in output
# speedy decoding is disabled and no need to reorder
if past_key_values is None:
print("You might want to consider setting `use_cache=True` to speed up decoding")
return past_key_values
reordered_decoder_past = ()
for layer_past_states in past_key_values:
# get the correct batch idx from layer past batch dim
# batch dim of `past` is at 2nd position
reordered_layer_past_states = ()
for layer_past_state in layer_past_states:
# need to set correct `past` for each of the four key / value states
reordered_layer_past_states = reordered_layer_past_states + (
layer_past_state.index_select(0, beam_idx.to(layer_past_state.device)),
)
if reordered_layer_past_states[0].shape != layer_past_states[0].shape:
raise ValueError(
f"reordered_layer_past_states[0] shape {reordered_layer_past_states[0].shape} and layer_past_states[0] shape {layer_past_states[0].shape} mismatched"
)
if len(reordered_layer_past_states) != len(layer_past_states):
raise ValueError(
f"length of reordered_layer_past_states {len(reordered_layer_past_states)} and length of layer_past_states {len(layer_past_states)} mismatched"
)
reordered_decoder_past = reordered_decoder_past + (reordered_layer_past_states,)
return reordered_decoder_past
def _shift_right(self, labels: torch.LongTensor) -> torch.LongTensor:
"""
Same shifting that T5 does: pad -> start token -> ... -> y[0..-2]
"""
# In T5, the decoder_start_token_id is often the same as pad_token_id
# But check or override as needed.
decoder_start_token_id = self.config.decoder.decoder_start_token_id
if decoder_start_token_id is None:
# default fallback
decoder_start_token_id = self.config.decoder.pad_token_id
pad_token_id = self.config.decoder.pad_token_id
# create shifted ids
shifted = labels.new_zeros(labels.shape)
shifted[..., 1:] = labels[..., :-1].clone()
shifted[..., 0] = decoder_start_token_id
# replace -100 with pad_token_id
shifted.masked_fill_(shifted == -100, pad_token_id)
return shifted
```
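For reference, this is roughly how I intend to instantiate and smoke-test it (an untested sketch; the config values and tensor shapes are placeholders I picked for illustration):
```python
import torch
from transformers import ViTConfig, T5Config, VisionEncoderDecoderConfig

encoder_config = ViTConfig()                           # placeholder vision config
decoder_config = T5Config(num_layers=4, d_model=512)   # placeholder decoder config
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)

model = VisionT5Model(config)

pixel_values = torch.randn(1, 3, 224, 224)             # dummy image batch
labels = torch.randint(0, decoder_config.vocab_size, (1, 16))  # dummy target token ids
out = model(pixel_values=pixel_values, labels=labels)
print(out.loss, out.logits.shape)
```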
### Motivation
For OCR Project
### Your contribution
T5 can be used a decoder block for vision models | 2hard
|
Title: Please check this code: what is basestring supposed to be?
Body:
```
if isinstance(text, basestring):
```
It's just these few lines. Are you sure it should be basestring and not str?? | 1medium
|
Title: [BUG] Support React.memo() equal function in react functional component develope
Body: When I develop dash component in react functional component, if I set the `equal` function to prevent some unnecessary redraw, the component won't be generated after the build:


@T4rk1n | 1medium
|
Title: python Tabula : FileNotFoundError: [WinError 2] The system cannot find the file specified
Body: # Summary of your issue
I'm getting an error while reading a pdf file via tabula
# Environment
Write and check your environment.
- [ ] `python --version`:3 ?
- [ ] `java -version`: 8?
- [ ] OS and it's version: Win7 32bit ?
- [ ] Your PDF URL:
# What did you do when you faced the problem?
//write here
below is the code used
## Example code:
```
import tabula
df = tabula.read_pdf("D:/Users/rag/Documents/GE_Confidential/Projects/GE_Health_Care/pdf/test.pdf")
```
## Output:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-11-1c72e9de1c11> in <module>()
----> 1 df = tabula.read_pdf("D:/Users/rag/Documents/GE_Confidential/Projects/GE_Health_Care/pdf/test.pdf")
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\site-packages\tabula\wrapper.py in read_pdf(input_path, output_format, encoding, java_options, pandas_options, multiple_tables, **kwargs)
73
74 try:
---> 75 output = subprocess.check_output(args)
76 finally:
77 if is_url:
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\subprocess.py in check_output(timeout, *popenargs, **kwargs)
334
335 return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
--> 336 **kwargs).stdout
337
338
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\subprocess.py in run(input, timeout, check, *popenargs, **kwargs)
401 kwargs['stdin'] = PIPE
402
--> 403 with Popen(*popenargs, **kwargs) as process:
404 try:
405 stdout, stderr = process.communicate(input, timeout=timeout)
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors)
705 c2pread, c2pwrite,
706 errread, errwrite,
--> 707 restore_signals, start_new_session)
708 except:
709 # Cleanup if the child failed starting.
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session)
988 env,
989 cwd,
--> 990 startupinfo)
991 finally:
992 # Child is launched. Close the parent's copy of those pipe
FileNotFoundError: [WinError 2] The system cannot find the file specified
```
## What did you intend to be?
i want to read a pdf table and convert it to data-frame for further analysis...
if there is any other alternative please let me know how to do it..
Many thanks in advance...
| 1medium
|
Title: Why is undetected_chromedriver automatically updated?
Body: Why is undetected_chromedriver automatically updated? | 1medium
|
Title: FastCRUD seems to only be compatible with fastapi>=0.100.0,<0.112.0, is it intentional?
Body: When installing fastcrud via uv, I get the following:
```
$ uv add fastcrud==0.15.0
  × No solution found when resolving dependencies:
  ╰─▶ Because fastcrud==0.15.0 depends on fastapi>=0.100.0,<0.112.0 and your project depends on fastapi==0.114.0, we can conclude that your project and
      fastcrud==0.15.0 are incompatible.
      And because your project depends on fastcrud==0.15.0, we can conclude that your project's requirements are unsatisfiable.
```
| 1medium
|
Title: AttributeError: 'asyncpg.pgproto.pgproto.UUID' object has no attribute 'replace'
Body: * GINO version: 1.0.1
* Python version: 3.8.2
* asyncpg version: 0.20.1
* PostgreSQL version: 12.3 (Ubuntu 12.3-1.pgdg20.04+1)
### Description
I'm trying to use UUID value as unique Id in my model
```
from . import db
from uuid import uuid4
from sqlalchemy.dialects.postgresql import UUID
class User(db.Model):
__tablename__ = "users"
id = db.Column(UUID(as_uuid=True), primary_key=True, unique=True, index=True, nullable=False, default=uuid4)
login = db.Column(db.String(255), nullable=False, unique=True)
password = db.Column(db.String(255), nullable=True)
full_name = db.Column(db.String(255))
last_login = db.Column(db.DateTime, nullable=True)
is_superuser = db.Column(db.Boolean, nullable=False, default=False)
is_staff = db.Column(db.Boolean, nullable=False, default=True)
remark = db.Column(db.String)
```
My controller is
```
class UserModel(BaseModel):
login: str
password: str
full_name: str
is_superuser: bool = False
is_staff: bool = True
remark: str = None
@router.post("/users")
async def add_user(user: UserModel):
rv = await User.create(login=user.login,
password=user.password,
full_name=user.full_name,
is_superuser=user.is_superuser,
is_staff=user.is_staff,
remark=user.remark
)
return rv.to_dict()
```
### What I Did
When I'm trying to post a new user to db via swagger UI I got this error:
```
INFO: 127.0.0.1:38548 - "POST /users HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/petr/crm/.venv/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 386, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/fastapi/applications.py", line 181, in __call__
await super().__call__(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/gino_starlette.py", line 79, in __call__
await self.app(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
await route.handle(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
await self.app(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 196, in app
raw_response = await run_endpoint_function(
File "/home/petr/crm/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 147, in run_endpoint_function
return await dependant.call(**values)
File "./src/crm/views/users.py", line 30, in add_user
rv = await User.create(login=user.login,
File "/home/petr/crm/.venv/lib/python3.8/site-packages/gino/crud.py", line 444, in _create_without_instance
return await cls(**values)._create(bind=bind, timeout=timeout)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/gino/crud.py", line 478, in _create
for k, v in row.items():
File "/home/petr/crm/.venv/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 207, in items
return [(key, self[key]) for key in self.keys()]
File "/home/petr/crm/.venv/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 207, in <listcomp>
return [(key, self[key]) for key in self.keys()]
File "/home/petr/crm/.venv/lib/python3.8/site-packages/sqlalchemy/dialects/postgresql/base.py", line 1328, in process
value = _python_UUID(value)
File "/usr/lib/python3.8/uuid.py", line 166, in __init__
hex = hex.replace('urn:', '').replace('uuid:', '')
AttributeError: 'asyncpg.pgproto.pgproto.UUID' object has no attribute 'replace'
```
| 1medium
|
Title: Figure level plot BUG
Body: When I set the fontdict in g.set_yticklabels(fontdict={'fontsize': 16, 'fontweight': 'bold'}) , the labels of the y tick will loss while g.set_xticklabels(fontdict={'fontsize': 16, 'fontweight': 'bold'}) will not. Is this a BUG?

| 1medium
|
Title: JSON.parse error in examples/server
Body: I am trying out this example: https://github.com/jlaine/aiortc/tree/master/examples/server
I can get to the server. But when I try to start either audio/video I get the following error:
**SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data**
I am seeing this error on my system (Debian), I suspect it might have something to do with that:
**av.AVError: [Errno 1094995529] Invalid data found when processing input: 'demo-instruct.wav'**
Extra remarks:
- demo_instruct.wav is present
Can somebody help me further with this? | 1medium
|
Title: Bug Report: Kurtosis at constant columns values
Body: ### Current Behaviour
I am trying to generate a report but it but it throws an error.
```
187 descriptive_statistics = Table(
188 [
189 {
190 "name": "Standard deviation",
191 "value": fmt_numeric(summary["std"], precision=config.report.precision),
192 },
193 {
194 "name": "Coefficient of variation (CV)",
195 "value": fmt_numeric(summary["cv"], precision=config.report.precision),
196 },
197 {
198 "name": "Kurtosis",
--> 199 "value": fmt_numeric(
200 summary["kurtosis"], precision=config.report.precision
201 ),
202 },
File /opt/conda/lib/python3.10/site-packages/ydata_profiling/report/formatters.py:232, in fmt_numeric(value, precision)
221 @list_args
222 def fmt_numeric(value: float, precision: int = 10) -> str:
223 """Format any numeric value.
224
225 Args:
(...)
230 The numeric value with the given precision.
231 """
--> 232 fmtted = f"{{:.{precision}g}}".format(value)
233 for v in ["e+", "e-"]:
234 if v in fmtted:
TypeError: unsupported format string passed to NoneType.__format__
```
I think it is because pyspark.sql.functions.kurtosis function returns None for constant columns
```
df.select(kurtosis(df.column_name)).show()
+--------------+
|kurtosis(column_name)|
+--------------+
| null |
+--------------+
```
### Expected Behaviour
It was expected to generate the report.
### Data Description
My data has two columns that all the values are constants.
### Code that reproduces the bug
```Python
report_df = ProfileReport(df)
```
### pandas-profiling version
4.1.2
### Dependencies
```Text
pyspark==3.3.2
```
### OS
Linux
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | 1medium
|
Title: the link "https://www.jaided.ai/custom_model.md" is lost could you provide again?
Body: | 0easy
|
Title: [Tutorial 2.1 error] TypeError: where(): argument 'other' (position 3) must be Tensor, not int
Body: It happened in tutorial 2.1. Details are as follows:
Traceback (most recent call last):
File "condional_prompt.py", line 112, in <module>
loss = prompt_model(inputs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/openprompt/pipeline_base.py", line 449, in forward
return self._forward(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/openprompt/pipeline_base.py", line 467, in _forward
logits, labels = self.shift_logits_and_labels(logits, batch['loss_ids'], reference_ids)
File "/opt/conda/lib/python3.7/site-packages/openprompt/pipeline_base.py", line 434, in shift_logits_and_labels
shift_input_ids = torch.where(shift_loss_ids>0, shift_input_ids, -100)
TypeError: where(): argument 'other' (position 3) must be Tensor, not int | 1medium
|
Title: Adding spark to the latest dependency checker lowered the checker's reported pandas version
Body: Making an issue to track this for now. Possibly pyspark 3.3 will allow us to use pandas 1.4 with pyspark | 1medium
|
Title: New key management hard-crashes on a bad key
Body: Following on from #2994 discussions...
The new key management is v0.8.1 is handy, but it hard crashes if it encounters a bad key:
```
beat: Starting...
2025-02-14 14:18:13 da9ef9a76ffe augur[7] INFO Retrieved 16 github api keys for use
WARNING: The key 'redacted' is not a valid key. Hint: If valid in past it may have expired
WARNING: The key 'redacted' is not a valid key. Hint: If valid in past it may have expired
WARNING: The key 'redacted' is not a valid key. Hint: If valid in past it may have expired
Protocol Error: <class 'httpx.ProtocolError'>
augur backend start command setup failed
You are not connected to the internet.
Please connect to the internet to run Augur
Consider setting http_proxy variables for limited access installations.
```
Given a long-lived instance is pretty much *guaranteed* to hit a bad or expired key during it's lifetime, this should be handled and reported to the user, rather than causing a crash. | 1medium
|
Title: C-c (KeyboardInterrupt) hangs pudb when running event loop
Body: I am using a library (https://github.com/getsenic/gatt-python/blob/master/gatt/gatt_linux.py) that implements its own event loop. When I call the .run() method in this library, eventually I would like to interrupt it, but it seems that pudb does not either pass through the interrupt or gets hung up itself. This is printed to the console when running an app using this library after it has called .run(), when running with pudb:
```
^CTraceback (most recent call last):
File "/home/clayton/src/ride_track/venv/lib/python3.7/site-packages/gi/_ossighelper.py", line 107, in signal_notify
if condition & GLib.IO_IN:
File "/home/clayton/src/ride_track/venv/lib/python3.7/site-packages/gi/_ossighelper.py", line 107, in signal_notify
if condition & GLib.IO_IN:
File "/usr/lib64/python3.7/bdb.py", line 88, in trace_dispatch
return self.dispatch_line(frame)
File "/home/clayton/src/ride_track/venv/lib/python3.7/site-packages/pudb/debugger.py", line 189, in dispatch_line
raise bdb.BdbQuit
bdb.BdbQuit
```
I basically have to C-z to background the task and then send SIGKILL to get pudb to quit (which is obviously not ideal).
I'll try to poke around some more to figure out what might be going on here, but figured I would file this issue just in case I am doing something obviously incorrect. | 1medium
|
Title: Bash API description for Image component is wrong
Body: ### Describe the bug
The `Image` component creates a wrong description for the bash API documentation. Instead of using the `url` field, it uses the `path` field with a URL.
The provided gradio sketch produces the following example bash message:
```bash
curl -X POST http://127.0.0.1:7860/gradio_api/call/predict -s -H "Content-Type: application/json" -d '{
"data": [
{"path":"https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"}
]}' \
| awk -F'"' '{ print $4}' \
| read EVENT_ID; curl -N http://127.0.0.1:7860/gradio_api/call/predict/$EVENT_ID
```
First of all, the URL has to go in the `url` field. However, if we do that, the URL is not a base64 data URL and fails to be parsed, with this (misleading) error message:
```
Image path is None.
```
Either we have a better error message or we implement automatic image download (which would be possible using PIL). Is this not done due to security measures?
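As a side note, the Python client handles the remote file correctly, which is the workaround I currently use (a sketch; `handle_file` wraps the URL so the server receives it as a file reference):
```python
from gradio_client import Client, handle_file

client = Client("http://127.0.0.1:7860/")
result = client.predict(
    handle_file("https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"),
    api_name="/predict",
)
print(result)
```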
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import numpy as np
import gradio as gr
def greet(image: np.ndarray):
return f"Thanks for the image: {image.shape}"
demo = gr.Interface(fn=greet, inputs=gr.Image(), outputs="text")
demo.launch(share=False)
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.22.0
gradio_client version: 1.8.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.3.1
gradio-client==1.8.0 is not installed.
groovy: 0.1.2
httpx: 0.25.0
huggingface-hub: 0.29.3
jinja2: 3.1.2
markupsafe: 2.1.3
numpy: 1.26.0
orjson: 3.9.7
packaging: 23.1
pandas: 2.1.1
pillow: 10.0.1
pydantic: 2.4.2
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.1
ruff: 0.11.0
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.1
tomlkit: 0.13.2
typer: 0.15.2
typing-extensions: 4.8.0
urllib3: 2.0.5
uvicorn: 0.23.2
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2023.9.2
httpx: 0.25.0
huggingface-hub: 0.29.3
packaging: 23.1
typing-extensions: 4.8.0
websockets: 11.0.3
```
### Severity
I can work around it | 1medium
|
Title: Feature: add `pages` support in `project`
Body: ## Description of the problem, including code/CLI snippet
I was not able to determine how to access [`/api/v4/projects/:id/pages`](https://docs.gitlab.com/ee/api/pages.html) with python-gitlab. I'm not sure I searched well, but the [`Project` object](https://python-gitlab.readthedocs.io/en/stable/api/gitlab.v4.html#gitlab.v4.objects.Project) does not seem to provide such access.
Would it be possible to add it?
The use case behind this is that I want to provide a public script that adds a badge linking to the deployed Pages site, which requires such API access.
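For now I fall back to the raw endpoint via the low-level HTTP helper, roughly like this (a sketch; the server URL, token and project path are placeholders, and I would of course prefer a first-class `project.pages` manager):
```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="glpat-...")  # placeholders
project = gl.projects.get("group/my-project")

# Raw call to /projects/:id/pages until a dedicated manager exists
pages_info = gl.http_get(f"/projects/{project.id}/pages")
print(pages_info.get("url"))
```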
## Specifications
- python-gitlab version: 4.4.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): 16.9.2
Thanks for your great API wrapper, it's a real improvement over the standard GitLab API! | 1medium
|
Title: Error 500 upgrading from 8.2.0
Body: Error (Full Traceback):
Traceback (most recent call last):
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_restplus/api.py", line 584, in error_router
return original_handler(e)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_login/utils.py", line 261, in decorated_view
return func(*args, **kwargs)
File "/home/pi/Mycodo/mycodo/mycodo_flask/routes_admin.py", line 539, in admin_upgrade
is_internet=is_internet)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/templating.py", line 140, in render_template
ctx.app,
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/asyncsupport.py", line 76, in render
return original_render(self, *args, **kwargs)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/admin/upgrade.html", line 2, in top-level template code
{% set active_page = "upgrade" %}
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 1005, in render
return concat(self.root_render_func(self.new_context(vars)))
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/admin/upgrade.html", line 18, in root
objDiv.scrollTop = objDiv.scrollHeight;
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/layout.html", line 335, in root
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/admin/upgrade.html", line 119, in block_body
{{_('No upgrade is available. You are running the latest release, version')}} <a href="https://github.com/kizniche/Mycodo/releases/tag/v{{current_release}}" target="_blank">{{ current_release }}</a>
TypeError: '>' not supported between instances of 'NoneType' and 'int'
| 1medium
|
Title: Is there a way to send voice messages?
Body: Roughly, the requirement is to send a local audio file as a voice message? | 1medium
|
Title: Feature request: Include Tooltip in Input style widgets
Body: When one is creating a form and wants to include a `pn.widgets.TooltipIcon`, they are required to manually set a label for that input for it to be properly aligned with the input.
Consider the code below. A set of inputs is created, where some of them have tooltips. I have noticed that the label is also part of the widget, and therefore aligning the tooltip to a widget with a label results in it being vertically misaligned when compared to the input itself.
Code for creating a `pn.widgets.TooltipIcon` and the current workaround:
```python
import panel as pn
pn.extension()
my_label = pn.widgets.StaticText(
value="Input with a tooltip",
align=('start', 'end'),
height_policy="min",
margin=(0, 0, 0, 10)
)
my_input = pn.widgets.FloatInput(
step=1e-2,
start=0.0,
end=1.0,
value=0.3,
sizing_mode='scale_width',
height_policy="min",
margin=(0, 0, 0, 10)
)
my_tooltip = pn.widgets.TooltipIcon(
value="Very useful tooltip.",
align="center"
)
pn.WidgetBox(
pn.widgets.FloatInput(name="First input",sizing_mode='scale_width'),
pn.Column(my_label,pn.Row(my_input,my_tooltip)),
pn.Row(
pn.widgets.TextInput(name="Some other input",sizing_mode='scale_width'),
pn.widgets.TooltipIcon(
value="Another tooltip.",
align="center"
)
),
width=200
)
```

My proposal would be to take a tooltip parameter in the initializer of these widgets where it could display a properly aligned tooltip icon.
```python
my_input = pn.widgets.FloatInput(
step=1e-2,
start=0.0,
end=1.0,
value=0.3,
tooltip="super useful tooltip"
)
``` | 2hard
|
Title: Add a CONTRIBUTING file
Body: https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/setting-guidelines-for-repository-contributors | 0easy
|
Title: 'Venv' is not used for some reason?
Body: ### Description
For some reason, `watchfiles` does not use `python` from activated `venv`?
Is it expected or not?
### Steps to reproduce
Yes, I'm sure I have activated venv, and it's being used.
Steps to reproduce:
1. Create `venv`
2. Activate it
3. Install `rich` (or any other dependency)
4. Create the following file (`mre.py`):
```
import sys
print("Python executable:", sys.executable)
import rich
print(rich.__all__)
```
5. Run file from console to be sure everything's okay:
```
$ python mre.py
Python executable: D:\Code\project\.venv\Scripts\python.exe
['get_console', 'reconfigure', 'print', 'inspect', 'print_json']
```
6. Run same command via `watchfiles`:
```
$ watchfiles "python mre.py" .
[05:21:25] watchfiles v0.23.0 👀 path="D:\Code\project" target="python mre.py" (command) filter=DefaultFilter...
Python executable: C:\Program Files\Python312\python.exe
Traceback (most recent call last):
File "D:\Code\project\mre.py", line 4, in <module>
import rich
ModuleNotFoundError: No module named 'rich'
```
### Operating System & Architecture
Windows-10-10.0.19045-SP0
10.0.19045
### Environment
I tested with both `cmd` and `bash`, and have the same output everywhere. I use `venv`, created via `PyCharm`
### Python & Watchfiles Version
python: 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)], watchfiles: 0.23.0
### Rust & Cargo Version
_No response_
### Additional info:
When I use `watchfiles ".venv/Scripts/python mre.py" .`, everything works as expected, since the python from `venv` is getting used.
`where python` command result:
```
$ where python
D:\Code\project\.venv\Scripts\python.exe
C:\Program Files\Python312\python.exe
``` | 1medium
|
Title: Blank Screen when load ComfyUI on M2 Mac
Body: ### Expected Behavior
Start ComfyUI
### Actual Behavior
When I start ComfyUI, I get a blank screen
### Steps to Reproduce
Start comfyUI
### Debug Logs
```powershell
No errors
To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
In the console of the browser there is this error
Removing unpermitted intrinsics lockdown-install.js:1:52832
Same problem with all browsers
```
### Other
This arrive after trying updating comfyUI which give an error unable to update | 1medium
|
Title: [ActiveRecordMixin] model not persisted unless I call db.session.commit()
Body: When I call `model.create(...)`, it isn't persisted in the database unless I call `db.session.commit()` after. What am I doing wrong? | 1medium
|
Title: [Core] map_batches cannot guarantee a stable batch_size, but if drop_last=True is set, it can be guaranteed (although data will be lost). Can we consider adding this parameter to map_batches?
Body: ### Description
map_batches cannot guarantee a stable batch_size, but if drop_last=True is set, it can be guaranteed (although data will be lost). Can we consider adding this parameter to map_batches?
### Use case
```
import ray
import numpy as np
ray.init()
def square_root_batch(batch):
print("len", len(batch["value"]))
batch["sqrt_value"] = np.sqrt(batch["value"])
return batch
data = [{"value": float(np.random.randint(1, 100))} for _ in range(600004)]
ds = ray.data.from_items(data)
ds = ds.map_batches(
square_root_batch,
concurrency=4,
batch_size=16,
)
ds.take_all()
```
Many of the batches do not have length 16.
But when I change `blocks_to_batches` (data/_internal_block_batching/util.py) to use `drop_last=True`, each batch is 16.
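In the meantime, a rough per-UDF workaround sketch (my own, not Ray's `drop_last`) that only keeps full batches of 16 might look like this:
```python
# Rough workaround sketch: keep only full batches and return an empty batch
# for the ragged tail of a block, mirroring drop_last=True per block.
import numpy as np

BATCH = 16

def square_root_full_batches_only(batch):
    values = np.asarray(batch["value"])
    if len(values) != BATCH:
        empty = np.empty(0, dtype=values.dtype)
        return {"value": empty, "sqrt_value": empty}
    return {"value": values, "sqrt_value": np.sqrt(values)}
```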
Can we add `drop_last` to `map_batches`? | 1medium
|
Title: [OTHER] Taipy unable to assign a state variable, apparent leak between clients?
Body: ### What went wrong? 🤔
I've been stuck on a really critical bug for the past week. It's preventing pages from loading, and the only current workaround is restarting the server. It occurs (seemingly) random, even with automated tests.
The issue arises when a nested dict-like data structure object is saved to a state variable. This object is used to save datasets, figures, values, etc. The size of this data structure is dynamic and is different for each client session, defined by a query parameter in the URL called `client_handle` (important). The data structure is constructed during `on_init()` of the page without any issue, and then assigned to a state variable in a simple line: `state.risk_tree = tree`, which causes the (occasional) error.
The first client to connect to the server (since server startup) always initializes without a problem. Subsequent clients can have the issue, *if they use a different `client_handle`*. For example, assuming each client connects from an isolated browser and creates its own Taipy session:
1. `client_handle = vm_groningen_dev` (initialization OK)
2. `client_handle = vm_gelderland_dev` (initialization OK)
3. `client_handle = vm_limburg_dev` (ERROR)
The error raised by during the initialization of connection #3 (`vm_limburg_dev`) is:
```
File "taipy/gui/utils/_attributes.py", line 26, in _getscopeattr_drill
return attrgetter(name)(gui._get_data_scope())
AttributeError("'types.SimpleNamespace' object has no attribute 'tp_TpExPr_gui_get_adapted_lov_risk_tree_vm_gelderland_dev_root_0_thema_2_rp_0_ind_0_var_CHART_AGG_LOV_str_TPMDL_9_0'")
```
This long attribute id refers to the key which is used to retrieve an element from the data structure (in this case an LOV).
**Note:** in the middle of the attribute id is a part `vm_gelderland_dev` (the `client_handle`). The currently connected client is `vm_limburg_dev`. **This indicates Taipy is trying to bind a callback from another client session, which it obviously cannot find in this session.**
### Expected Behavior
The state variable `state.risk_tree` should be set. 9/10 times this works without a problem.
### Steps to Reproduce Issue
The biggest difficulty is that this bug is not consistent. 9/10 times it works fine, even when I reproduce client page-visit combinations that previously caused an error. So the only way to debug this is by inspecting the logs. I realize this is very little to go off, but since I can't even reliably reproduce the error, creating a minimum example is pretty much impossible.
See [isolated_error.log](https://github.com/user-attachments/files/18706548/bug951_isolated_error.log), which also contains the variables. The actions to produce this:
- The first 5 entries show how a client with `client_handle = vm_gelderland_dev` initializes without an issue.
- Browser is then closed.
- New browser is opened, client with `client_handle = vm_limburg_dev` fails to initialize.
- *Note that the error contains the `client_handle` of the previous client*
The issue occurs in the page `on_init()`. Constructing the object works without an issue. Saving the object to a state variable causes the issue.
```python
<imports>
# Declare page state variable
risk_tree = None
def on_init(state):
# Create the basic tree structure based on client settings. Data is added later in Long-Running Callback
state_var_name = "risk_tree"
tree = TreeRoot(
client_subniveau=state.client_settings["subniveau"],
client_alt_subniveaus=state.client_settings["subniveau_alt"],
state_var_name=state_var_name,
client_handle=state.client_handle,
children=[
ThemaNode(
id=idx,
thema_naam=thema,
risico_profielen=risicos,
state_var_name=state_var_name,
)
for idx, (thema, risicos) in enumerate(state.client_settings["risico_themas"].items())
],
client_settings=state.client_settings,
client_data_ops_fn=apply_client_allpages_data_operations,
)
state.risk_tree = tree # ERROR OCCURS HERE
tpgui.invoke_long_callback(
state=state,
user_function=LRCB_load_risicotree_data,
user_function_args=[
state.get_gui(),
tpgui.get_state_id(state),
tree,
state.client_handle,
state.pg_uri,
],
user_status_function=status_LRCB_tree_loading, # invoked at the end of and possibly during the runtime of user_function
period=10000, # Interval in milliseconds to check the status of the long callback
)
```
After the Long-Running Callback is complete, a Taipy Partial generates the content of the page. As seen below, the GUI elements reference attributes inside the data structure.
```python
def create_taipy_partial_content(tree_node, page_context=None):
"""Create the content for the current node in the Taipy Page format
"""
class_name = f"{tree_node.css_class_name} content-block"
# All other nodes (RisicoNode, IndicatorNode, VariabeleNode)
with tgb.part(class_name=class_name) as content:
if is_thema_node:
# Just title
tgb.text(
f"## {tree_node.get_label_text().upper()}",
mode="markdown",
class_name="card-title",
)
else:
# Title and link to docs
with tgb.layout(columns="2 1"):
tgb.text(
f"**{tree_node.get_label_text().title()}**",
mode="markdown",
class_name="card-title",
)
tgb.button(
"{docs_icon}", # docs_icon is defined in main.py
on_action="{navigate_to_docs}", # navigate_to_docs is defined in main.py
class_name="docs-button",
hover_text="Open documentatie",
)
# Description
tgb.text(f"{tree_node.help}", mode="markdown")
# Figure
if tree_node.figure is not None and tree_node.selected:
tgb.toggle(
value="{"
+ f"{tree_node.state_var_name}['{tree_node.id}']" #client_handle is part of the tree_node.id
+ ".chart_agg_toggle}",
lov="{"
+ f"{tree_node.state_var_name}['{tree_node.id}']"
+ ".CHART_AGG_LOV}",
on_change=callback_chart_agg_toggle,
)
tgb.chart(
figure="{"
+ f"{tree_node.state_var_name}['{tree_node.id}']"
+ ".figure}"
)
# Build children in nested content blocks
for child in tree_node.children:
if child.count_selected() == 0:
continue
create_taipy_partial_content(child)
return content
```
### Runtime Environment
Docker Container: python:3.12-slim-bookworm
### Browsers
Chrome, Firefox, Safari
### Version of Taipy
4.0.2
### Additional Context
```bash
```
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [x] I am willing to work on this issue (optional) | 2hard
|
Title: web.DataReader + "fred": Failed Downloads
Body: Running this in jupyter notebook:
```python
start = datetime.date.today() - datetime.timedelta(days=5 * 365)
end = datetime.date.today()
df = web.DataReader(["sp500", "NASDAQCOM", "CBBTCUSD"], "fred", start, end)
```
Gives me this error:
```
[*********************100%%**********************] 3 of 3 completed
3 Failed downloads:
['CBBTCUSD', 'NASDAQCOM', 'SP500']: Exception('%ticker%: No timezone found, symbol may be delisted')
```
What can I do to fix it? | 1medium
|
Title: IndexError: Cannot choose from an empty sequence
Body: ### Description
When I tried to train a deterministic model in a reinforcement learning project, I suddenly got this IndexError at 60000 steps. I didn't change any code in the T2T project. Now I can't continue training. It shows:
File "/home/guest/tensor2tensor/tensor2tensor/data_generators/gym_env.py", line 186, in start_new_epoch
self._load_epoch_data(load_data_dir)
File "/home/guest/tensor2tensor/tensor2tensor/data_generators/gym_env.py", line 531, in _load_epoch_data
raise ValueError("Some data is missing, the experiment might've been "
ValueError: Some data is missing, the experiment might've been interupted during generating data.
### Environment information
Linux version 4.18.0-18-generic (buildd@lcy01-amd64-006) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #19~18.04.1-Ubuntu SMP Fri Apr 5 10:22:13 UTC 2019
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.13.1
tensorboard==1.13.1
tensorflow==1.13.1
tensorflow-datasets==1.0.1
tensorflow-estimator==1.13.0
tensorflow-metadata==0.13.0
tensorflow-probability==0.6.0
$ python -V
Python 3.6.8 :: Anaconda, Inc.
INFO:tensorflow:Timing: 2:35:05.578154
INFO:tensorflow:Setting T2TModel mode to 'infer'
INFO:tensorflow:Setting hparams.dropout to 0.0
INFO:tensorflow:Setting hparams.label_smoothing to 0.0
INFO:tensorflow:Setting hparams.layer_prepostprocess_dropout to 0.0
INFO:tensorflow:Setting hparams.symbol_dropout to 0.0
INFO:tensorflow:Setting hparams.residual_dropout to 0.0
INFO:tensorflow:Using variable initializer: uniform_unit_scaling
INFO:tensorflow:Transforming feature 'input_action' with symbol_modality_6_64.bottom
INFO:tensorflow:Transforming feature 'input_reward' with symbol_modality_3_64.bottom
INFO:tensorflow:Transforming feature 'inputs' with video_modality.bottom
INFO:tensorflow:Transforming feature 'target_action' with symbol_modality_6_64.targets_bottom
INFO:tensorflow:Transforming feature 'target_reward' with symbol_modality_3_64.targets_bottom
INFO:tensorflow:Transforming feature 'targets' with video_modality.targets_bottom
INFO:tensorflow:Building model body
INFO:tensorflow:Transforming body output with video_modality.top
INFO:tensorflow:Transforming body output with symbol_modality_3_64.top
INFO:tensorflow:Restoring checkpoint /home/guest/t2t_train/mb_det_pong_random/world_model/model.ckpt-60000
INFO:tensorflow:Restoring parameters from /home/guest/t2t_train/mb_det_pong_random/world_model/model.ckpt-60000
Traceback (most recent call last):
File "/home/guest/miniconda3/envs/tensor2tensor/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/guest/miniconda3/envs/tensor2tensor/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/guest/tensor2tensor/tensor2tensor/rl/trainer_model_based.py", line 389, in <module>
tf.app.run()
File "/home/guest/miniconda3/envs/tensor2tensor/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/guest/tensor2tensor/tensor2tensor/rl/trainer_model_based.py", line 384, in main
training_loop(hp, FLAGS.output_dir)
File "/home/guest/tensor2tensor/tensor2tensor/rl/trainer_model_based.py", line 356, in training_loop
env, hparams, directories["world_model"], debug_video_path
File "/home/guest/tensor2tensor/tensor2tensor/rl/rl_utils.py", line 158, in evaluate_world_model
subsequence_length + frame_stack_size
File "/home/guest/tensor2tensor/tensor2tensor/rl/rl_utils.py", line 336, in random_rollout_subsequences
return [choose_subsequence() for _ in range(num_subsequences)]
File "/home/guest/tensor2tensor/tensor2tensor/rl/rl_utils.py", line 336, in <listcomp>
return [choose_subsequence() for _ in range(num_subsequences)]
File "/home/guest/tensor2tensor/tensor2tensor/rl/rl_utils.py", line 328, in choose_subsequence
rollout = random.choice(rollouts)
File "/home/guest/miniconda3/envs/tensor2tensor/lib/python3.6/random.py", line 260, in choice
raise IndexError('Cannot choose from an empty sequence') from None
IndexError: Cannot choose from an empty sequence
| 1medium
|
Title: State of the library
Body: Hello,
I notice that there have been only 2 commits merged in the past 2 years. There is nothing wrong with this, especially if nobody is being paid to maintain the library. However, if this library is not likely to see much future development for whatever reason, it might be good to document this prominently, with the following outcomes in mind:
* Allowing potential users to properly set their expectations, since most of the other projects under this organization are very actively maintained
* Attracting attention of anyone who might be willing to financially sponsor work on the library
* Attracting attention of anyone who might be willing to help maintain this library
* Promoting forks
Thank you @asvetlov for all of your work on this library, and elsewhere! You have had an incredible impact on the Python ecosystem. | 3misc
|
Title: Running Bolt for Python apps on IBM Cloud Functions (FaaS)
Body: I am trying to use IBM cloud faas. I tried some steps, but failed :( . Any help with an example will be appreciated.
IBM cloud faas documentation https://cloud.ibm.com/docs/openwhisk?topic=openwhisk-prep#prep_python_local_virtenv | 1medium
|
Title: [docs] Pyinstaller with Dynaconf raising `UnicodeDecodeError`
Body: Pyinstaller Compiles the Dynaconf Modules and loaders so when _dynaconf.loader.py_loader_ tries to load files from the inspect stack trace it tries to read compiled pyc files and fails on UnicodeDecodeError. The fix is to pacakage dynaconf and python-dotenv[cli] as a package without compiling it by using the **--collect-all** argument of pyinstaller
_Originally posted by @OmmarShaikh01 in https://github.com/dynaconf/dynaconf/issues/770#issuecomment-1193254565_ | 1medium
|
Title: Support for Free-Form Query Parameters
Body: **Is your feature request related to a problem? Please describe.**
Given an API that accepts arbitrarily-named query parameters like:
/my-endpoint/?dynamic_param1=value&dynamic_param2=value2
We'd like to be able to append arbitrary key/values to the query string search.
Given a current YAML snippet like:
```yaml
parameters:
- in: query
name: dynamicFields
schema:
type: object
additionalProperties: true
```
The parameter generated is `schema_field: Union[Unset, ModelNameSchemaField] = UNSET`, and it's also sent as the parameter named `schema_field` instead of using the arbitrary keys.
**Describe the solution you'd like**
Generate the parameter above with `additionalProperties` as `schema_field: Union[Unset, None, Dict[str, Any]] = UNSET`. When `schema_field` is a `dict`, it will then send values for all keys in the query parameters instead of as `schema_field`.
If multiple parameters are defined having `additionalProperties`, it will treat all of them as arbitrary keys. If two parameters were to define the same dynamically named key, we make no guarantees about which one is sent. I imagine it would be the last parameter encountered with `additionalProperties`. Alternatively, we could raise an exception instead of making silent assumptions about the collision.
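As an illustration only (not actual generated code), the generated request helper could splat such a dict straight into the query parameters, roughly like this:
```python
# Illustrative sketch of splatting an additionalProperties dict into query params.
from typing import Any, Dict

UNSET: Any = object()


def build_query_params(dynamic_fields: Any = UNSET) -> Dict[str, Any]:
    params: Dict[str, Any] = {}
    if isinstance(dynamic_fields, dict):
        # each arbitrary key becomes its own query parameter
        params.update(dynamic_fields)
    return {k: v for k, v in params.items() if v is not None}


# build_query_params({"dynamic_param1": "value", "dynamic_param2": "value2"})
# -> {"dynamic_param1": "value", "dynamic_param2": "value2"}
```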
**Describe alternatives you've considered**
Rather than modeling the field in Open API, allow every GET method to accept an `additional_properties: Dict[str, str]` parameter which would append all the keys as query parameters. The name `additional_properties` might need to be configurable to avoid collision with APIs using that parameter name already.
| 1medium
|
Title: [BUG] possible bug with trash and filename templates
Body: ### Description
With the latest 2.13.0, I've encountered a few issues after configuring a custom storage path with an expression referencing a custom field. I'm running Paperless on Proxmox/bare metal, installed via tteck's script. There are two issues I've encountered.
1. files aren't being properly deleted from media directory after deleting document and emptying trash
2. export/import ends with error documents.models.Document.DoesNotExist: Problem installing fixture '/root/export/manifest.json': Document matching query does not exist.
### Steps to reproduce
In order to reproduce the issue
1. import any pdf document
2. create new document type magazines
3. create custom field path
4. add new storage path with an expression {{ document_type }} / {{ custom_fields|get_cf_value('path')|replace("-", "/", 2) }} / {{ title }}
5. finally assign the document with
- document type
- add custom field path and set it to aaa/bbb
- configure storage path
So far everything works fine. In the media folder, document is renamed as expected in both archive and originals subdirectory.
Now perform export using
python3 manage.py document_export /root/export
Then delete the document and empty trash
-> document is not deleted, but it gets moved from aaa/bbb subdirectory to None subdirectory,
Also import ends with above mentioned error message
-> documents.models.Document.DoesNotExist: Problem installing fixture '/root/export/manifest.json': Document matching query does not exist
### Webserver logs
```bash
--
```
### Browser logs
```bash
--
```
### Paperless-ngx version
2.13.0
### Host OS
Debian 12
### Installation method
Bare metal
### System status
```json
{
"pngx_version": "2.13.0",
"server_os": "Linux-6.8.12-2-pve-x86_64-with-glibc2.36",
"install_type": "bare-metal",
"storage": {
"total": 10464022528,
"available": 2987577344
},
"database": {
"type": "postgresql",
"url": "paperlessdb",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0011_remove_mailrule_assign_tag_squashed_0024_alter_mailrule_name_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://localhost:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-10-27T21:14:47.000542Z",
"index_error": null,
"classifier_status": "WARNING",
"classifier_last_trained": null,
"classifier_error": "Classifier file does not exist (yet). Re-training may be pending."
}
}
```
### Browser
Firefox
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | 2hard
|
Title: ValueError: Unknown split "validation". Should be one of ['train'].
Body: I only run:
python train.py --config config/train_cord.yaml --pretrained_model_name_or_path "naver-clova-ix/donut-base" --dataset_name_or_paths '["naver-clova-ix/cord-v2"]' --exp_version "test_experiment"
It raises the error below. Why? Thanks for your help!
Traceback (most recent call last):
File "train.py", line 150, in <module>
train(config)
File "train.py", line 87, in train
sort_json_key=config.sort_json_key,
File "/home/donut/donut/util.py", line 64, in __init__
self.dataset = load_dataset(dataset_name_or_path, split=self.split)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 1644, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 793, in as_dataset
disable_tqdm=False,
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 206, in map_nested
return function(data_struct)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 817, in _build_single_dataset
in_memory=in_memory,
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 889, in _as_dataset
in_memory=in_memory,
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 213, in read
files = self.get_file_instructions(name, instructions, split_infos)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 187, in get_file_instructions
name, split_infos, instruction, filetype_suffix=self._filetype_suffix
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 110, in make_file_instructions
absolute_instructions = instruction.to_absolute(name2len)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 618, in to_absolute
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 618, in <listcomp>
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr
raise ValueError('Unknown split "{}". Should be one of {}.'.format(split, list(name2len)))
ValueError: Unknown split "validation". Should be one of ['train']. | 1medium
|
Title: Can I reuse a single client object for each event?
Body: Hi, I'm trying to make a slack bot in OOP-fashioned way.
```pyhotn
@app.event('reaction_added')
def handle_reaction_added_event(ack, say, event, client):
ack()
eventHandler.run(client, say, event)
class EventHandler:
def run(self, web_client: WebClient, say: Say, event: Dict[str, Any]):
obj_a = A(web_client)
obj_b = B(say, obj_a)
obj_c = C(event, obj_b)
# work with those objects
```
As seen above, every time I get an event from Slack I inject the `client`, `say`, and `event` objects into my handler.
In the handler, I build some objects from them. Every time.
So my question is: can I reuse my client from the first event and use it for later events?
In short, **I want some singleton objects that hold the slack_bolt arguments.**
Below is what I want to make:
```python
@app.event('reaction_added')
def handle_reaction_added_event(ack, say, event, client):
ack()
eventHandler.run(event) # client, say are already injected somehow
class EventHandler:
def run(self, event: Dict[str, Any]):
obj_c = C(event, obj_b) # obj_b is singleton
# ...
```
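One way to get close to that (a sketch of my own, not an official Bolt pattern) is to lazily bind the first event's `client`/`say` into a module-level singleton and reuse it afterwards:
```python
# Sketch: capture client/say from the first event into a singleton handler.
import os
from typing import Any, Dict, Optional

from slack_bolt import App
from slack_sdk import WebClient

app = App(
    token=os.environ.get("SLACK_BOT_TOKEN"),
    signing_secret=os.environ.get("SLACK_SIGNING_SECRET"),
)


class EventHandler:
    def __init__(self) -> None:
        self._client: Optional[WebClient] = None
        self._say = None

    def bind(self, client: WebClient, say) -> None:
        if self._client is None:  # only the first event wires things up
            self._client = client
            self._say = say

    def run(self, event: Dict[str, Any]) -> None:
        assert self._client is not None and self._say is not None
        # ... build obj_c from event and reuse self._client / self._say ...


event_handler = EventHandler()


@app.event("reaction_added")
def handle_reaction_added_event(ack, say, event, client):
    ack()
    event_handler.bind(client, say)
    event_handler.run(event)
```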
#### The `slack_bolt` version
slack-bolt==1.14.3
#### Python runtime version
3.9.13
#### OS info
ProductName: macOS
ProductVersion: 12.5
BuildVersion: 21G72
Darwin Kernel Version 21.6.0: Sat Jun 18 17:07:22 PDT 2022; root:xnu-8020.140.41~1/RELEASE_ARM64_T6000
| 1medium
|
Title: Loading of saved model returns Error: "This BERTopic instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator."
Body: **Edit:** Never mind, I just forgot to actually assign the loaded model to a variable...
Previously saved several models using the following code:
```python3
from sklearn.feature_extraction.text import CountVectorizer
from bertopic.representation import KeyBERTInspired, PartOfSpeech, MaximalMarginalRelevance
main_representation_model = KeyBERTInspired()
aspect_representation_model1 = PartOfSpeech("en_core_web_sm")
aspect_representation_model2 = [KeyBERTInspired(top_n_words=30), MaximalMarginalRelevance(diversity=.5)]
representation_model = {
"Main": main_representation_model,
"Aspect1": aspect_representation_model1,
"Aspect2": aspect_representation_model2
}
vectorizer_model = CountVectorizer(min_df=5, stop_words = 'english')
topic_mdl = BERTopic(nr_topics = 'auto', vectorizer_model = vectorizer_model,
representation_model = representation_model, verbose=True)
apps = ['Assetto Corsa', 'Assetto Corsa Competizione', 'Beat Saber', 'CarX Drift Racing Online', 'DCS World Steam Edition', 'DEVOUR', 'Golf It!', 'Gorilla Tag', 'Hand Simulator', 'Microsoft Flight Simulator 40th Anniversary Edition',
'No_Mans_Sky', 'Paint the Town Red', 'Pavlov VR', 'Phasmophobia', 'Rec Room', 'STAR WARS™: Squadrons', 'Tabletop Simulator', 'VRChat', 'VTOL VR', 'War Thunder']
app = apps[5]
docs = dfs_reviews[app]
topic, ini_probs = topic_mdl.fit_transform(docs)
topics_info = get_topic_stats(topic_mdl)
# Saving model
topic_mdl.save(f'./topic_models/{app}', serialization='safetensors', save_ctfidf=True)
```
Then when I tried loading it and visualise it using the barplot:
```python3
topic_mdl.load(f'./topic_models/{app[3]}')
topic_mdl.visualize_barchart(top_n_topics = 16, n_words = 10)
```
it gives the following error:
```
{
"name": "ValueError",
"message": "This BERTopic instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[25], line 5
1 app = ['Assetto Corsa', 'Assetto Corsa Competizione', 'Beat Saber', 'CarX Drift Racing Online', 'DCS World Steam Edition', 'DEVOUR', 'Golf It!', 'Gorilla Tag', 'Hand Simulator', 'Microsoft Flight Simulator 40th Anniversary Edition',
2 'No_Mans_Sky', 'Paint the Town Red', 'Pavlov VR', 'Phasmophobia', 'Rec Room', 'STAR WARS™: Squadrons', 'Tabletop Simulator', 'VRChat', 'VTOL VR', 'War Thunder']
4 topic_mdl.load(f'./topic_models/{app[3]}')
----> 5 topic_mdl.get_topic_info()
File ~/Uni Codes/Thesis/Web-Scraper/env/lib/python3.10/site-packages/bertopic/_bertopic.py:1514, in BERTopic.get_topic_info(self, topic)
1499 def get_topic_info(self, topic: int = None) -> pd.DataFrame:
1500 \"\"\" Get information about each topic including its ID, frequency, and name.
1501
1502 Arguments:
(...)
1512 ```
1513 \"\"\"
-> 1514 check_is_fitted(self)
1516 info = pd.DataFrame(self.topic_sizes_.items(), columns=[\"Topic\", \"Count\"]).sort_values(\"Topic\")
1517 info[\"Name\"] = info.Topic.map(self.topic_labels_)
File ~/Uni Codes/Thesis/Web-Scraper/env/lib/python3.10/site-packages/bertopic/_utils.py:76, in check_is_fitted(topic_model)
72 msg = (\"This %(name)s instance is not fitted yet. Call 'fit' with \"
73 \"appropriate arguments before using this estimator.\")
75 if topic_model.topics_ is None:
---> 76 raise ValueError(msg % {'name': type(topic_model).__name__})
ValueError: This BERTopic instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator."
}
```
I can't seem to find anything related to this online. I read that I shouldn't be fitting it again as that would defeat the whole point of saving it in the first place from this issue #1584 .
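For reference, the fix mentioned in the edit above is just assigning the return value, since `load` returns the restored model rather than mutating the instance it is called on (a sketch, reusing the `app` list from the snippet above):
```python
from bertopic import BERTopic

# load() returns the restored model; assign its result instead of discarding it
topic_mdl = BERTopic.load(f"./topic_models/{app[3]}")
topic_mdl.visualize_barchart(top_n_topics=16, n_words=10)
```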
Any help would be greatly appreciated.
Thank you | 0easy
|
Title: use move_mean and set window&min_count =1, diff result
Body: ```
import bottleneck as bn
a = [0.008196721311475436, -0.01626016260162607, 0.012396694214876205, -0.016326530612245076, 0.008298755186722151, 0.004115226337448442, 0.0, -0.008196721311475436, -0.008264462809917252, -0.00416666666666668, -0.012552301255230165, -0.012711864406779568, 0.017167381974248982, 0.008438818565400736, 0.004184100418410055, -0.00833333333333336, 0.008403361344537843, -0.01666666666666672, 0.0, 0.016949152542372937, -0.00416666666666668, 0.0, 0.004184100418410055, -0.012499999999999907, -0.004219409282700569, 0.004237288135593369, -0.004219409282700569, -0.025423728813559268, -0.02173913043478254, 0.013333333333333234, 0.0, 0.039473684210526445, 0.05063291139240508, 0.012048192771084246, -0.007936507936507962, -0.003999999999999886, -0.00803212851405625, 0.01619433198380546, -0.01593625498007948, 0.0, -0.016194331983805717, 0.008230452674897144, 0.008163265306122474, -0.012145748987854418, -0.012295081967213154, -0.024896265560165793, -0.012765957446808685, 0.004310344827586358, -0.008583690987124627, 0.0, 0.05194805194805212, 0.008230452674897144, -0.016326530612245076, -0.008298755186721888, 0.0, -0.01673640167364009, 0.02127659574468078, -0.012499999999999907, 0.0, -0.004219409282700569, 0.004237288135593369, -0.012658227848101439, 0.004273504273504423, 0.008510638297872367, -0.00843881856540087, -0.012765957446808685, 0.0, -0.008620689655172306, 0.017391304347826004, -0.004273504273504152, 0.008583690987124491, 0.004255319148936049, 0.004237288135593369, 0.0, 0.004219409282700301, 0.004201680672268921, -0.004184100418410055, 0.008403361344537843, -0.020833333333333266, -0.012765957446808685, 0.004310344827586358, 0.008583690987124491, -0.008510638297872367, 0.0042918454935621094, -0.004273504273504152, -0.004291845493562381, 0.004310344827586358, -0.004291845493562381, 0.017241379310344883, 0.012711864406779702, -0.008368200836819977, 0.021097046413501908, 0.012396694214876205, 0.012244897959183583, 0.0, 0.0, -0.008064516129032284, -0.004065040650406388, -0.012244897959183841, -0.004132231404958691, 0.008298755186722151, -0.01646090534979429, 0.004184100418410055, 0.004166666666666548, 0.0, 0.008298755186722151, 0.041152263374485465, -0.02371541501976267, -0.016194331983805717, -0.024691358024691305, -0.02109704641350231, 0.0, 0.0129310344827588, 0.017021276595744598, -0.012552301255230165, 0.0, 0.012711864406779702, 0.03347280334728044, 0.0]
b = bn.move_mean(a, window=1, min_count=1)
for item1, item2 in zip(a[-60:], b[-60:]):
print(item1, item2)
``` | 1medium
|
Title: Introduce @xt.method Decorator for AI Code Generation Compatibility
Body:
We could introduce an `@xt.method` decorator in Nextpy for defining event handlers within state classes. This feature is intended to enhance code readability, standardize the declaration of methods handling state changes, and align with AI code generation practices.
## Current Behavior
Currently, Nextpy requires methods within state classes to be defined directly, without specific decorators. This approach is functional but does not distinguish between regular methods and event handlers explicitly designed to modify the state.
## Proposed Behavior
The introduction of the `@xt.method` decorator would allow developers to clearly mark methods in the state class as event handlers. This not only improves code readability but also aligns with AI code generation patterns, where such decorators are often included by default. It could also facilitate additional framework optimizations or checks.
For example:
```python
@xt.method(ToDoState)
def delete_todo(state, todo):
state.todos.remove(todo)
```
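For illustration, a minimal way such a decorator could be implemented (purely a sketch, not Nextpy internals) is to register the decorated function on the state class:
```python
# Illustrative sketch only: attach the decorated function to the state class
# so the framework can discover it as an event handler.
def method(state_cls):
    def decorator(fn):
        setattr(state_cls, fn.__name__, fn)
        return fn
    return decorator
```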
## Benefits
- **Improved Code Readability and Maintainability**: Clearly distinguishes state-modifying methods from regular class methods.
- **Alignment with AI Code Generation**: Aligns with default practices of AI code generation tools, which often include method decorators in their outputs.
| 1medium
|
Title: `ak.flatten` raises `np.AxisError` for `unknown[unknown]`, but not for `unknown`
Body: ### Version of Awkward Array
2.1.1
### Description and code to reproduce
To reproduce the problem;
```python3
[ins] In [1]: import awkward as ak
...: ak.__version__
Out[1]: '2.1.1'
[ins] In [3]: empty = ak.Array([])
...: ak.flatten(empty[empty])
---------------------------------------------------------------------------
AxisError Traceback (most recent call last)
Cell In[3], line 1
----> 1 ak.flatten(empty[empty])
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/operations/ak_flatten.py:164, in flatten(array, axis, highlevel, behavior)
12 """
13 Args:
14 array: Array-like data (anything #ak.to_layout recognizes).
(...)
158 999]
159 """
160 with ak._errors.OperationErrorContext(
161 "ak.flatten",
162 {"array": array, "axis": axis, "highlevel": highlevel, "behavior": behavior},
163 ):
--> 164 return _impl(array, axis, highlevel, behavior)
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/operations/ak_flatten.py:232, in _impl(array, axis, highlevel, behavior)
229 return wrap_layout(out, behavior, highlevel, like=array)
231 else:
--> 232 out = ak._do.flatten(layout, axis)
233 return wrap_layout(out, behavior, highlevel, like=array)
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/_do.py:253, in flatten(layout, axis)
252 def flatten(layout: Content, axis: int = 1) -> Content:
--> 253 offsets, flattened = layout._offsets_and_flattened(axis, 1)
254 return flattened
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/contents/numpyarray.py:415, in NumpyArray._offsets_and_flattened(self, axis, depth)
412 return self.to_RegularArray()._offsets_and_flattened(axis, depth)
414 else:
--> 415 raise ak._errors.wrap_error(
416 np.AxisError(f"axis={axis} exceeds the depth of this array ({depth})")
417 )
AxisError: while calling
ak.flatten(
array = <Array [] type='0 * int64'>
axis = 1
highlevel = True
behavior = None
)
Error details: axis=1 exceeds the depth of this array (1)
```
where as by contrast, in the older versions;
```python3
[ins] In [1]: import awkward as ak
...: print(ak.__version__)
1.8.0
[nav] In [2]: empty = ak.Array([])
...: ak.flatten(empty[empty])
Out[2]: <Array [] type='0 * unknown'>
```
This does seem like a bug to me; we can't guarantee that every list we call flatten on will have items in it.
It's not impossible that it's related to this bug: https://github.com/scikit-hep/awkward/issues/2207
I will check out the repo some time and see if that fix solves it.
| 1medium
|
Title: Can no longer install versions 1.5.10-1.6.5
Body: ### Bug description
Hey everyone,
I have been working on the same server for the past few months ( w/ RTX6000) without issue
Recently, I tried to re-install lightning 1.5.10 (new virtual environment, python 3.9.18), and got the error below
I tried versions up to 1.6.5 with the same error
I can't use the newest version, as that will require a torch upgrade (currently using 1.13.1 due to specific versioning issues)
This popped up in the last month, I'm wondering if anyone else is seeing this problem or if it is to be expected for some reason?
Thanks,
Jonathan
### What version are you seeing the problem on?
v1.x
### How to reproduce the bug
```python
Create a virtual environment with python 3.9.18
Activate
pip install pytorch-lightning==1.5.10
```
### Error messages and logs
```
ERROR: Could not find a version that satisfies the requirement pytorch-lightning==1.5.10 (from versions: 0.0.2, 0.2, 0.2.2, 0.2.3, 0.2.4, 0.2.4.1, 0.2.5, 0.2.5.1, 0.2.5.2, 0.2.6, 0.3, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.4.1, 0.3.5, 0.3.6, 0.3.6.1, 0.3.6.3, 0.3.6.4, 0.3.6.5, 0.3.6.6, 0.3.6.7, 0.3.6.8, 0.3.6.9, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.5.0, 0.5.1, 0.5.1.2, 0.5.1.3, 0.5.2, 0.5.2.1, 0.5.3, 0.5.3.1, 0.5.3.2, 0.5.3.3, 0.6.0, 0.7.1, 0.7.3, 0.7.5, 0.7.6, 0.8.1, 0.8.3, 0.8.4, 0.8.5, 0.9.0, 0.10.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.1.8, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.3.0rc1, 1.3.0rc2, 1.3.0rc3, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.3.7.post0, 1.3.8, 1.4.0rc0, 1.4.0rc1, 1.4.0rc2, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.8.0rc0, 1.8.0rc1, 1.8.0rc2, 1.8.0, 1.8.0.post1, 1.8.1, 1.8.2, 1.8.3, 1.8.3.post0, 1.8.3.post1, 1.8.3.post2, 1.8.4, 1.8.4.post0, 1.8.5, 1.8.5.post0, 1.8.6, 1.9.0rc0, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 2.0.0rc0, 2.0.0, 2.0.1, 2.0.1.post0, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.9.post0, 2.1.0rc0, 2.1.0rc1, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.2.0rc0, 2.2.0, 2.2.0.post0, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.4.0)
ERROR: No matching distribution found for pytorch-lightning==1.5.10
```
### Environment
<details>
<summary>Current environment</summary>
```
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA RTX 6000 Ada Generation
- available: True
- version: 11.6
* Lightning:
- pytorch-tabnet: 3.0.0
- torch: 1.13.1+cu116
- torchaudio: 0.13.1+cu116
- torchmetrics: 0.11.0
- torchvision: 0.14.1+cu116
* Packages:
- absl-py: 1.3.0
- aiohttp: 3.8.3
- aiosignal: 1.3.1
- alembic: 1.13.2
- aniso8601: 9.0.1
- antlr4-python3-runtime: 4.9.3
- association-metrics: 0.0.1
- asttokens: 2.2.1
- async-timeout: 4.0.2
- attrs: 22.2.0
- autocommand: 2.2.2
- backcall: 0.2.0
- backports.tarfile: 1.2.0
- brotlipy: 0.7.0
- cachetools: 5.2.0
- category-encoders: 2.2.2
- certifi: 2020.6.20
- cffi: 1.17.0
- charset-normalizer: 2.1.1
- click: 8.1.3
- cloudpickle: 3.0.0
- comm: 0.1.2
- configparser: 5.3.0
- contourpy: 1.0.6
- cycler: 0.11.0
- databricks-cli: 0.17.4
- databricks-sdk: 0.30.0
- datasets: 2.10.1
- debugpy: 1.6.5
- decorator: 5.1.1
- deprecated: 1.2.14
- dill: 0.3.6
- docker: 7.1.0
- docker-pycreds: 0.4.0
- einops: 0.3.0
- entrypoints: 0.4
- executing: 1.2.0
- filelock: 3.9.0
- flask: 2.2.3
- fonttools: 4.38.0
- frozenlist: 1.3.3
- fsspec: 2022.11.0
- future: 0.18.2
- gitdb: 4.0.10
- gitpython: 3.1.30
- google-auth: 2.15.0
- google-auth-oauthlib: 0.4.6
- gputil: 1.4.0
- graphene: 3.3
- graphql-core: 3.2.3
- graphql-relay: 3.2.0
- greenlet: 3.0.3
- grpcio: 1.51.1
- gunicorn: 22.0.0
- huggingface-hub: 0.13.0
- idna: 3.4
- importlib-metadata: 6.0.0
- importlib-resources: 6.4.0
- inflect: 7.3.1
- ipykernel: 6.19.4
- ipython: 8.8.0
- ipywidgets: 8.0.4
- itsdangerous: 2.1.2
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.1
- jaraco.text: 3.12.1
- jedi: 0.18.2
- jinja2: 3.1.4
- joblib: 1.2.0
- jupyter-client: 7.4.8
- jupyter-core: 5.1.2
- jupyterlab-widgets: 3.0.5
- kiwisolver: 1.4.4
- kornia: 0.7.3
- kornia-rs: 0.1.5
- llvmlite: 0.43.0
- mako: 1.3.5
- markdown: 3.4.1
- markupsafe: 2.1.1
- matplotlib: 3.6.2
- matplotlib-inline: 0.1.6
- mlflow: 2.15.1
- mlflow-skinny: 2.15.1
- more-itertools: 10.3.0
- multidict: 6.0.4
- multiprocess: 0.70.14
- nest-asyncio: 1.5.6
- numba: 0.60.0
- numpy: 1.24.2
- oauthlib: 3.2.2
- omegaconf: 2.3.0
- opentelemetry-api: 1.26.0
- opentelemetry-sdk: 1.26.0
- opentelemetry-semantic-conventions: 0.47b0
- ordered-set: 4.1.0
- packaging: 22.0
- pandas: 1.1.5
- parso: 0.8.3
- patsy: 0.5.3
- pexpect: 4.8.0
- pickleshare: 0.7.5
- pillow: 9.4.0
- pip: 24.2
- platformdirs: 2.6.2
- plotly: 4.14.3
- ply: 3.11
- promise: 2.3
- prompt-toolkit: 3.0.36
- protobuf: 3.20.3
- psutil: 5.9.4
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- pyarrow: 11.0.0
- pyasn1: 0.4.8
- pyasn1-modules: 0.2.8
- pycparser: 2.22
- pydeprecate: 0.3.1
- pygments: 2.14.0
- pyjwt: 2.6.0
- pyparsing: 3.0.9
- pyqt5-sip: 12.11.0
- python-dateutil: 2.8.2
- pytorch-tabnet: 3.0.0
- pytz: 2022.7
- pyyaml: 5.4.1
- pyzmq: 24.0.1
- querystring-parser: 1.2.4
- regex: 2022.10.31
- requests: 2.28.1
- requests-oauthlib: 1.3.1
- responses: 0.18.0
- retrying: 1.3.4
- rsa: 4.9
- scikit-learn: 1.2.0
- scipy: 1.10.0
- seaborn: 0.12.2
- sentry-sdk: 1.12.1
- setuptools: 72.1.0
- shap: 0.45.0
- shortuuid: 1.0.11
- six: 1.16.0
- slicer: 0.0.7
- smmap: 5.0.0
- sqlalchemy: 2.0.32
- sqlparse: 0.5.1
- stack-data: 0.6.2
- statsmodels: 0.13.5
- subprocess32: 3.5.4
- tabulate: 0.9.0
- tensorboard: 2.11.0
- tensorboard-data-server: 0.6.1
- tensorboard-plugin-wit: 1.8.1
- threadpoolctl: 3.1.0
- tokenizers: 0.13.2
- tomli: 2.0.1
- torch: 1.13.1+cu116
- torchaudio: 0.13.1+cu116
- torchmetrics: 0.11.0
- torchvision: 0.14.1+cu116
- tornado: 6.2
- tqdm: 4.64.1
- traitlets: 5.8.0
- transformers: 4.26.1
- typeguard: 4.3.0
- typing-extensions: 4.12.2
- urllib3: 1.26.13
- wandb: 0.10.11
- watchdog: 2.2.1
- wcwidth: 0.2.5
- webencodings: 0.5.1
- werkzeug: 2.2.2
- wheel: 0.43.0
- widgetsnbextension: 4.0.5
- wrapt: 1.16.0
- xxhash: 3.2.0
- yarl: 1.8.2
- zipp: 3.11.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.9.18
- release: 6.2.0-37-generic
- version: #38~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 2 18:01:13 UTC 2
```
</details>
### More info
_No response_ | 1medium
|
Title: [Bug]: optimizer state not saved
Body: ### Describe the bug
Thank you for developing and maintaining this invaluable module!
We would like to save the state of the optimizer at the end of each epoch.
The `save_optimizer_state` parameter of the `fine_tune` function seems to be designed for this purpose.
However, the state of the optimizer is not saved even if we set `save_optimizer_state=True`.
Thank you!
### To Reproduce
```python
%pip install scipy==1.10.1 datasets transformers torch==2.0 flair==0.13.1
import torch
import flair
from flair.data import Corpus
from flair.datasets import TREC_6
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer
# 1. get the corpus
corpus: Corpus = TREC_6()
# 2. what label do we want to predict?
label_type = 'question_class'
# 3. create the label dictionary
label_dict = corpus.make_label_dictionary(label_type=label_type)
# 4. initialize transformer document embeddings (many models are available)
document_embeddings = TransformerDocumentEmbeddings('distilbert-base-uncased', fine_tune=True)
# 5. create the text classifier
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict, label_type=label_type)
# 6. initialize trainer
trainer = ModelTrainer(classifier, corpus)
# 7. run training with fine-tuning
trainer.fine_tune('resources/taggers/question-classification-with-transformer',
learning_rate=5.0e-5,
mini_batch_size=4,
max_epochs=10,
save_optimizer_state=True,
save_model_each_k_epochs=1
)
checkpoint = torch.load('resources/taggers/question-classification-with-transformer/model_epoch_1.pt', map_location=flair.device)
```
### Expected behavior
When `save_optimizer_state` is `true`, the checkpoint contains the state_dict of the optimizer.
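As a rough illustration of the check we expected to pass (the exact key name is an assumption on our side):
```python
# Sketch of the expected check; "optimizer_state_dict" is an assumed key name.
import torch
import flair

checkpoint = torch.load(
    "resources/taggers/question-classification-with-transformer/model_epoch_1.pt",
    map_location=flair.device,
)
print("optimizer_state_dict" in checkpoint)  # currently missing
```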
### Logs and Stack traces
_No response_
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.13.1
##### Pytorch
2.0.0+cu117
##### Transformers
4.40.0
#### GPU
True | 1medium
|
Title: TODO: 3D heat map
Body: Hey, one more small suggestion: could this chart be made 3D, with the height of each bar defined by the distance of each run, like the example below?

I generated this kind of 3D chart with that author's script. My skills are limited and I don't really understand how it works internally; it's just a small suggestion 🙈
——————> https://github.com/yoshi389111/yoshi389111 <———————— | 1medium
|
Title: When the web page is accessed from multiple clients, some files conflict; is running multiple tasks in parallel currently unsupported?
Body: I opened two pages and ran two tasks at the same time; one succeeded and the other reported:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'final-1.mp4.tempTEMP_MPY_wvf_snd.mp3'
Traceback:
So running multiple tasks in parallel isn't supported at the moment, right? | 1medium
|