Title: Timezone in log.html same as the browser and not the one from the device under test
Body: Hello,
The logs generated in log.html doesn't contain a timezone specific, and when opening the logs they also are the same as the browser, and not the same as the device under test.
I saw this was commented in another thread in google groups, but I couldn't see more recent info:
https://groups.google.com/g/robotframework-users/c/Wtg8EYwNVJ8
Could you please check? Is this expected?
Many thanks | 1medium
|
Title: Add notebook info for Boto, the official AWS SDK for Python.
Body: | 1medium
|
Title: Cog push error “Failed to get last layer digest for cog base image”
Body: I have pushed the model successfully once.
But it failed after that; the error message is like the following:
```
Building Docker image from environment in cog.yaml as r8.im/ultimatech-cn/instant-id-basic...
⚠ Stripping patch version from Python version 3.10.6 to 3.10
⚠ Stripping patch version from Python version 3.10.6 to 3.10
[+] Building 4.0s (15/15) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 833B 0.0s
=> resolve image config for docker-image://docker.io/docker/dockerfile:1.4 0.0s
=> CACHED docker-image://docker.io/docker/dockerfile:1.4 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 787B 0.0s
=> [internal] load metadata for r8.im/cog-base:cuda12.1-python3.10 2.5s
=> [stage-0 1/8] FROM r8.im/cog-base:cuda12.1-python3.10@sha256:ab0faae83dd6f205e62ff3ef44cd91f47d86 0.0s
=> [internal] load build context 0.2s
=> => transferring context: 375.87kB 0.2s
=> CACHED [stage-0 2/8] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked apt-get update - 0.0s
=> CACHED [stage-0 3/8] COPY .cog/tmp/build20240923193006.8539553181594056/requirements.txt /tmp/req 0.0s
=> CACHED [stage-0 4/8] RUN pip install --no-cache-dir -r /tmp/requirements.txt 0.0s
=> CACHED [stage-0 5/8] RUN curl -o /usr/local/bin/pget -L "https://github.com/replicate/pget/releas 0.0s
=> CACHED [stage-0 6/8] RUN pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visua 0.0s
=> CACHED [stage-0 7/8] WORKDIR /src 0.0s
=> [stage-0 8/8] COPY . /src 0.7s
=> exporting to image 0.4s
=> => exporting layers 0.4s
=> => preparing layers for inline cache 0.0s
=> => writing image sha256:92c95bb5fc6a03683d9dedcf18df08d8470b01722612ed7ebca28a4113e55a1d 0.0s
=> => naming to r8.im/ultimatech-cn/instant-id-basic 0.0s
Validating model schema...
Adding labels to image...
✗ Failed to get last layer digest for cog base image: Get "https://us-docker.pkg.dev/artifacts-downloads/namespaces/replicate-production/repositories/replicate-us/downloads/ALpRbLCfrWK_tufbyhxnC-........aMiUdTBCrLMaN5VJ_XssKaOIv8uNyVD9Ll24PrjxChAjFFYjjHTLJR0Gp_DA2s28ysxmEc1aszSaDLBMMH78O85KU7i4-xH16ohNh1D_LwJc72qzz8c3X2uU5ub0rjsnnZffm8MldtMAFjjJfirVfg_S39SlFxvybtlC76VZWn1Ka-QKX2wOrQ70jtqCzEHL1mh_GCg58TEvyJn08BJeLOO5TvOplu1QDDGqTq0baGtfkvcWVZNUnbiv6MgMg94wgCb_hZv2K7YybKljzwmscv3AbdSQwOICR763scZ2frRSNKicS6egBc9KK-MafGbSu-qSyTrJ": dial tcp 74.125.20.82:443: i/o timeout
```
Any solution for this? | 1medium
|
Title: Title of checkbox disappears when unchecked
Body:


Regression was introduced in 972058bb. Reverting changes for main.js returned previous behaviour.
| 1medium
|
Title: Dummy Camera Rendering
Body: I am trying to run render.py using only cameras.json from a trained GS model, without the original dataset. I created a dummy camera and applied all parameters, including R and T, as well as camera-related settings from cameras.json. I also confirmed that the T values are identical in the SIBR viewer, but the rendered result is almost invisible with render.py. It seems like there might be an issue with the rotation property. Shouldn't R|T be directly applied as is? | 1medium
|
Title: Pandas v2 adoption
Body: ### ๐ The feature
I have complex projects and pandas-ai is holding me back because of 1.5 requirement. Other major packages seem to have made the migration. I'm wondering what's holding it here.
Thanks.
### Motivation, pitch
NA
### Alternatives
_No response_
### Additional context
_No response_ | 1medium
|
Title: [BUG] Would it be possible to have Python 3.12 wheels?
Body: **Describe the bug**
Wish for a Windows Python 3.12 wheel
| 1medium
|
Title: On import error: unable to open shape_predictor_68_face_landmarks.dat
Body: * face_recognition version: 1.3.0-py2.py3-none-any
* Python version: 3.8.5
* Operating System: Windows 7 Ultimate SP1 x64 (version 6.1, build 7601)
### Description
I wanted to install Python for a single user, but Python didn't want to do this, so I had to give it elevated permissions. opencv installed without elevated rights works fine on this PC (installed only for one user). On another PC, face_recognition also works correctly. But here, for some reason, it refuses - it reports that it can't access a file that EXISTS in the user's folder, which means the file is available not only for reading, but also for writing.

### What I Did
*install python 3.8.5*
```
pip install opencv-python
pip install dlib
pip install face_recognition
```
Everything was installed successfully. Everything was installed in the user's folder because I run ps without elevated rights. Python itself was installed with elevated rights because it refused to be installed without them.
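To narrow it down, here is a small diagnostic sketch (assuming the bundled `face_recognition_models` package is what resolves the .dat path) that prints where the library looks for the landmarks file and whether the current user can read it:
```python
import os

import face_recognition_models  # installed alongside face_recognition

# Where the library expects shape_predictor_68_face_landmarks.dat,
# and whether this user account can actually read it.
path = face_recognition_models.pose_predictor_model_location()
print(path)
print("exists:", os.path.exists(path))
print("readable:", os.access(path, os.R_OK))
```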
### P.S.
If you set face_recognition with elevated rights (for all users), it works. However, it should work without it. | 1medium
|
Title: Error restarting training and inability to enable checkpoint activations after compiling Fairseq model using torch.compile()
Body: ## โ Questions and Help
#### compiled fairseq model using torch.compile() and observed significant training speed . but observed few issues .
1) Error occurred while restarting the training from checkpoint_last .
2) not able to enable --checkpoint-activations while using torch.compile
#### Code
```
if cfg.distributed_training.ddp_backend == "fully_sharded":
    with fsdp_enable_wrap(cfg.distributed_training):
        model = fsdp_wrap(task.build_model(cfg.model))
else:
    model = task.build_model(cfg.model)
model = torch.compile(model)
```
#### Compiled the model using torch.compile(), enabled checkpoint activations using --checkpoint-activations, and observed the error below:
```
KeyError: packed_non_tensor_outputs

from user code:
  File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/modules/checkpoint_activations.py", line 67, in <graph break in _checkpointed_forward>
    packed_non_tensor_outputs = parent_ctx_dict["packed_non_tensor_outputs"]
```
Also when I tried to restart training from the last checkpoint, the below error occurred.
```
2023-05-02 18:08:17 | INFO | fairseq.trainer | Preparing to load checkpoint checkpoint/checkpoint_last.pt
Traceback (most recent call last):
File "/home/santha-11585/miniconda3/envs/ddp/bin/fairseq-train", line 8, in <module>
sys.exit(cli_main())
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq_cli/train.py", line 632, in cli_main
distributed_utils.call_main(cfg, main)
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/distributed/utils.py", line 344, in call_main
torch.multiprocessing.spawn(
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 239, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
while not context.join():
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 6 terminated with the following error:
Traceback (most recent call last):
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/trainer.py", line 567, in load_checkpoint
self.model.load_state_dict(
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/distributed/module_proxy_wrapper.py", line 53, in load_state_dict
return self.module.module.load_state_dict(*args, **kwargs)
TypeError: Module.load_state_dict() got an unexpected keyword argument 'model_cfg'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/distributed/utils.py", line 328, in distributed_main
main(cfg, **kwargs)
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq_cli/train.py", line 170, in main
extra_state, epoch_itr = checkpoint_utils.load_checkpoint(
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/checkpoint_utils.py", line 248, in load_checkpoint
extra_state = trainer.load_checkpoint(
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/trainer.py", line 579, in load_checkpoint
raise Exception(
Exception: Cannot load model parameters from checkpoint checkpoint/checkpoint_last.pt; please ensure that the architectures match.
```
#### What's your environment?
- fairseq Version (e.g., 1.0 or main): 0.12.2
- PyTorch Version (e.g., 1.0) : 2.0.0+cu117
- OS (e.g., Linux): ubuntu
- How you installed fairseq (`pip`, source): yes
- Build command you used (if compiling from source):
- Python version:3.10
- CUDA/cuDNN version: 11.7
- GPU models and configuration: A6000
- Any other relevant information:
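For the checkpoint issue, one workaround sketch is to save and load against the uncompiled module rather than the compiled wrapper (note that `_orig_mod` is an internal attribute set by `torch.compile` in PyTorch 2.x, so treat this as a hedged suggestion, not official API):
```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for the fairseq model
compiled_model = torch.compile(model)

# Checkpoint against the original module so parameter names and the
# load_state_dict() signature match what the uncompiled model expects.
original = compiled_model._orig_mod  # internal attribute set by torch.compile
torch.save(original.state_dict(), "checkpoint_last_params.pt")
original.load_state_dict(torch.load("checkpoint_last_params.pt"))
```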
| 1medium
|
Title: Looks like Cloudflare found out about SeleniumBase UC Mode
Body: The makers of the **Turnstile** have found out about **SeleniumBase UC Mode**:
<img width="480" alt="Screenshot 2023-11-08 at 5 47 30 PM" src="https://github.com/seleniumbase/SeleniumBase/assets/6788579/08fa67af-262e-48e4-8699-33e04c15ab54">
**To quote Dr. Emmett Brown from Back to the Future:**
> **"They found me. I don't how, but they found me."**

I guess that means they watched the **SeleniumBase UC Mode** video: https://www.youtube.com/watch?v=5dMFI3e85ig
--------
In other news, I'm working on more updates and demo pages for running tests.
Once the next release is shipped, I'll start going through the notification queue. | 1medium
|
Title: Replace keyword not working properly
Body: '''from flashtext import KeywordProcessor
kp = KeywordProcessor()
kp.add_keyword('+', 'plus')
kp.replace_keywords('c++')''' | 1medium
|
Title: Field/Type descriptions from schema could be added to the generated code.
Body: This will presumably result in a similar discussion to this. https://github.com/jhnnsrs/turms/pull/54
I've implemented this as docstrings on a local branch using similar logic to that used in turms. | 1medium
|
Title: Allow passing string values for ssl query string in PostgreSQL URL
Body: When using the PostgreSQL backend, you can pass the `ssl` key in the query string, with the values `true` or `false`. These are converted to boolean and passed as arguments to [asyncpg.create_pool](https://magicstack.github.io/asyncpg/current/api/index.html#asyncpg.pool.create_pool)
The asyncpg library accepts other values than only `True` and `False`, which can be used to choose how certificate validation is done (or not done).
~~For the record, when setting `ssl=true`, the ssl mode used is `prefer`, which will fallback to plain if SSL connection fails, so it is not a secure default.~~ (Edit: This is not true, certificate is checked with `ssl=true`, the documentation is not clear on that topic).
I'm going to send a PR that permits sending string values, but it will not change the default settings.
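A minimal sketch of the parsing I have in mind (`parse_ssl_option` is a hypothetical helper name):
```python
from urllib.parse import parse_qs, urlparse

def parse_ssl_option(url: str):
    """Booleans stay booleans, anything else is passed through as a
    string for asyncpg.create_pool(ssl=...), e.g. "require"."""
    value = parse_qs(urlparse(url).query).get("ssl", [None])[0]
    if value is None:
        return None
    if value.lower() in ("true", "false"):
        return value.lower() == "true"
    return value  # e.g. "require", "verify-full"

print(parse_ssl_option("postgresql://host/db?ssl=verify-full"))  # verify-full
```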
|
Title: PyCharm: No autocomplete when creating new instances
Body: ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from datetime import datetime
from sqlmodel import Field, SQLModel
from typing import Optional
class MyModel(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    dev_email: str = Field(index=True)
    account_id: str = Field(index=True)
    other_id: Optional[str] = None
    failed: Optional[bool] = False
    created: datetime = Field(sa_column_kwargs={'default': datetime.utcnow})
    updated: datetime = Field(sa_column_kwargs={'default': datetime.utcnow, 'onupdate': datetime.utcnow})
```
### Description
- Create Model
- Create new instance of model
- I expected to see autocompletion for model attributes as shown in the features section of docs
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.8
### Additional Context
No autocompletion in PyCharm when creating new instance. (only when fetching an instance from db)

| 1medium
|
Title: Is documentation correct about OIDC_ISS_ENDPOINT?
Body: The [OIDC_ISS_ENDPOINT](https://django-oauth-toolkit.readthedocs.io/en/latest/settings.html#oidc-iss-endpoint) documentation states that discovery is at `OIDC_ISS_ENDPOINT + /.well-known/openid-configuration/`. That would indicate that one should include the mount point of the `oauth2_provider.urls` right?
But `ConnectDiscoveryInfoView` uses `reverse` to get the path of the `oauth2_provider:authorize`, `oauth2_provider:token`, `oauth2_provider:user-info`, `oauth2_provider:jwks-info` which results in the doubling of the mount point info.
So if `oauth2_provider.urls` is mounted at `/some-initial-path/o`, all the endpoints, except `issuer`, included in the response have doubled mount point information. So if the `OIDC_ISS_ENDPOINT` is `http://localhost:8001/some-initial-path/o`, the issuer will be `http://localhost:8001/some-initial-path/o` but `authorization_endpoint` will be `http://localhost:8001/some-initial-path/o/some-initial-path/o/authorize/`. Same pattern for `token_endpoint`, `userinfo_endpoint`, and `jwks_uri`.
Looking at the tests, there seems to be a little ambivalence about the topic. See below
https://github.com/jazzband/django-oauth-toolkit/blob/9d2aac2480b2a1875eb52612661992f73606bade/tests/test_oidc_views.py#L15
`test_get_connect_discovery_info` expects a url without a path and `test_get_connect_discovery_info_without_issuer_url` expects a url with a path `/o` (the default `oauth2` path?).
Anyways - I'm confused. Can anyone clarify if `OIDC_ISS_ENDPOINT` should just be the root url of the Django app or if it should include the mount point of the `oauth2_provider.urls`?
EDIT:
It looks as if the OIDC specification mentions [this](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig). The correct pattern should follow the `django_oauth_toolkit` documentation. So `OIDC_ISS_ENDPOINT ` + `/.well-known/openid-configuration` should resolve.
If this is true, then the test `test_get_connect_discovery_info` should expect `http://localhost/o` instead of `http://localhost` as `issuer` - I think.
EDIT2:
If `OIDC_ISS_ENDPOINT ` is defined, couldn't it be located somewhere else (another domain) than where `ConnectDiscoveryInfoView` is located? If yes, isn't it then a mistake to base the location of `authorization_endpoint`, `token_endpoint`, `userinfo_endpoint`, and `jwks_uri` on the use of `reverse` for the url patterns on the same host where `ConnectDiscoveryInfoView` is located.
Why not just hardcode the endpoints to `OIDC_ISS_ENDPOINT` + `{/authorize/, /token/, /userinfo/, o/.well-known/jwks.json}`?
Or urlparse `OIDC_ISS_ENDPOINT` and use the scheme + netloc + reverse of all the endpoints to fill the output of `ConnectDiscoveryInfoView`. | 1medium
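For illustration, a rough sketch of that last option, assuming the standard URL names this package registers (the ones already mentioned above):
```python
from urllib.parse import urlparse

from django.urls import reverse

# Derive scheme + host from OIDC_ISS_ENDPOINT and append the reversed
# URL patterns, so the mount point is never doubled.
OIDC_ISS_ENDPOINT = "http://localhost:8001/some-initial-path/o"

parsed = urlparse(OIDC_ISS_ENDPOINT)
base = f"{parsed.scheme}://{parsed.netloc}"
authorization_endpoint = base + reverse("oauth2_provider:authorize")
token_endpoint = base + reverse("oauth2_provider:token")
userinfo_endpoint = base + reverse("oauth2_provider:user-info")
jwks_uri = base + reverse("oauth2_provider:jwks-info")
```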
|
Title: Azure ML integration
Body: We need to create a similar integration + tutorial to this one we have with AWS Batch: https://soopervisor.readthedocs.io/en/latest/tutorials/aws-batch.html
The story is about:
Integrating the client with Azure ML batch functionality.
Creating a tutorial similar to the one above.
If possible create a short video to guide through the tutorial | 1medium
|
Title: Open Files
Body: ### Describe the bug
I tried to open a screenshot with this command and Open Interpreter crashed:
`> open /Users/maxpetrusenko/Desktop/photo_2023-11-23_17-58-50.jpg please`
Python Version: 3.10.12
Pip Version: 23.2.1
Open-interpreter Version: cmd:A, pkg: 0.1.15
OS Version and Architecture: macOS-14.1-arm64-arm-64bit
CPU Info: arm
RAM Info: 16.00 GB, used: 6.80, free: 0.19
Interpreter Info
Vision: False
Model: openai/gpt-4
Function calling: False
Context window: 3000
Max tokens: 1000
Auto run: False
API base: http://localhost:1234/v1
Local: True
Curl output: [Errno 2] No such file or directory: 'curl http://localhost:1234/v1'
```
Traceback (most recent call last):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/respond.py", line 49, in respond
for chunk in interpreter._llm(messages_for_llm):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/llm/convert_to_coding_llm.py", line 65, in coding_llm
for chunk in text_llm(messages):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/llm/setup_text_llm.py", line 32, in base_llm
messages = tt.trim(
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/tokentrim/tokentrim.py", line 189, in trim
shorten_message_to_fit_limit(message, tokens_remaining, model)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/tokentrim/tokentrim.py", line 95, in shorten_message_to_fit_limit
new_length = int(len(encoding.encode(content)) * ratio)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/tiktoken/core.py", line 116, in encode
if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/maxpetrusenko/miniforge3/bin/interpreter", line 8, in <module>
sys.exit(cli())
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 24, in cli
cli(self)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/cli/cli.py", line 268, in cli
interpreter.chat()
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 86, in chat
for _ in self._streaming_chat(message=message, display=display):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 106, in _streaming_chat
yield from terminal_interface(self, message)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/terminal_interface/terminal_interface.py", line 115, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 127, in _streaming_chat
yield from self._respond()
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 162, in _respond
yield from respond(self)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/respond.py", line 97, in respond
raise Exception(
Exception: expected string or buffer
```
Please make sure LM Studio's local server is running by following the steps above.
If LM Studio's local server is running, please try a language model with a different architecture.
### Reproduce
1. run interpreter --local
2. start server (LM Studio, tried with Mistral Instruct v0.1 GGUF)
3. open "/path/to/screenshot" please
### Expected behavior
no crash
### Screenshots
_No response_
### Open Interpreter version
0.1.15
### Python version
3.11
### Operating System name and version
mac m2
### Additional context
_No response_ | 1medium
|
Title: show_element_id, when set to true, breaks the code because FreeTypeFont has no getsize attribute
Body: **Describe the bug**
I was trying to use show_element_id as True in draw_box but suddenly got an AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version, see the [Layout Parser Releases](https://github.com/Layout-Parser/layout-parser/releases/)
**To Reproduce**
Steps to reproduce the behavior:
1. What command or script did you run?
```
lp.draw_box(pdf_images[4], text_blocks, box_width=3, show_element_id=True) # Use the default font provided by the library
```
**Environment**
1. Used on windows with jupyter lab on conda
2. Using layoutparser version 0.3.4
3. All other libraries has been installed
**Error traceback**
```
AttributeError Traceback (most recent call last)
Cell In[17], line 1
----> 1 lp.draw_box(pdf_images[4], text_blocks,
2 box_width=3, show_element_id=True) # Use the default font provided by the library
File ~\miniconda3\Lib\site-packages\layoutparser\visualization.py:194, in image_loader.<locals>.wrap(canvas, layout, *args, **kwargs)
192 elif isinstance(canvas, np.ndarray):
193 canvas = Image.fromarray(canvas)
--> 194 out = func(canvas, layout, *args, **kwargs)
195 return out
File ~\miniconda3\Lib\site-packages\layoutparser\visualization.py:392, in draw_box(canvas, layout, box_width, box_alpha, box_color, color_map, show_element_id, show_element_type, id_font_size, id_font_path, id_text_color, id_text_background_color, id_text_background_alpha)
389 text = str(ele.type) if not text else text + ": " + str(ele.type)
391 start_x, start_y = ele.coordinates[:2]
--> 392 text_w, text_h = font_obj.getsize(text)
394 text_box_object = Rectangle(
395 start_x, start_y, start_x + text_w, start_y + text_h
396 )
397 # Add a small background for the text
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
```
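For reference, Pillow 10 removed `FreeTypeFont.getsize()`; a sketch of the equivalent measurement with `getbbox()` (available since roughly Pillow 9.2):
```python
from PIL import ImageFont

font_obj = ImageFont.load_default()
text = "1"
# getbbox() returns (left, top, right, bottom) instead of (width, height)
left, top, right, bottom = font_obj.getbbox(text)
text_w, text_h = right - left, bottom - top
```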
**Screenshots**
<img width="595" alt="image" src="https://github.com/Layout-Parser/layout-parser/assets/56075784/a0c616e8-ad07-4b2d-bbbf-d15f4e30a222">
| 1medium
|
Title: How can I get the structure graph of the network? Is there any tool for this?
Body: | 1medium
|
Title: why failed to allocate GPU's memory
Body: i write a function:
```python
def updatepkl(business):
    file_list = os.listdir("FR/dataset/" + business)
    for file in file_list:
        file_path = 'FR/dataset/' + business + '/' + file
        if os.path.isdir(file_path):
            for i in os.listdir(file_path):
                DeepFace.find(img_path=file_path + '/' + i, db_path="FR/dataset/" + business + '/',
                              model_name='ArcFace', detector_backend='dlib',
                              distance_metric="euclidean_l2", enforce_detection=False)  # update pkl file
```
When I execute this method, most of the GPU's memory gets occupied (14 GB used of 16 GB total).
When I call it again, the following error occurs:
```
tensorflow.python.framework.errors_impl.ResourceExhaustedError: {{function_node __wrapped__AddV2_device_/job:localhost/replica:0/task:0/device:GPU:0}} failed to allocate memory [Op:AddV2]
```
Please explain why this situation occurs.
thanks
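Edit: in the meantime, a possible mitigation sketch. TensorFlow reserves nearly all VRAM by default; enabling memory growth (a real TF API, but whether it fully resolves this case is an assumption) lets allocations grow on demand:
```python
import tensorflow as tf

# Must run before any GPU work is done in the process.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```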
| 1medium
|
Title: Filter by Video/Photos on main screen
Body: **Describe the enhancement you'd like**
A clear and concise description of what you want to happen.
Add the hability to display only videos/photos (filtering).
**Describe why this will benefit the LibrePhotos**
A clear and concise explanation on why this will make LibrePhotos better.
Better organisation.
**Additional context**
Add any other context or screenshots about the enhancement request here.

| 1medium
|
Title: List of features in readme is out of date [DOCS]
Body: | 0easy
|
Title: `y_min` and `y_max` not correctly honored in `*ChartColumn`. Input Values not clamped, if value range outside y limits defined in `column_config`
Body: ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
The scale in `LineChartColumn` and `BarChartColumn` changes, if the values provided are outside of the defined `y_min` and `y_max`.
### Reproducible Code Example
https://issues.streamlitapp.com/?issue=gh-9944
```Python
import pandas as pd
import streamlit as st

data_df = pd.DataFrame(
    {
        "sales": [
            [0, 50, 100],
            [0, 50, 200],
        ],
    }
)

st.data_editor(
    data_df,
    column_config={
        "sales": st.column_config.BarChartColumn(
            "Sales (last 6 months)",
            help="The sales volume in the last 6 months",
            y_min=0,
            y_max=100,
        ),
    },
    hide_index=True,
)
```
### Steps To Reproduce
_No response_
### Expected Behavior
It is my understanding that the two input lists should generate the exact same charts, with `y_min=0` and `y_max=100`: i.e. the 200 value will just get clamped to 100.
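Until that holds, a hedged workaround is to clamp the values in pandas before handing them to the column:
```python
import pandas as pd

data_df = pd.DataFrame({"sales": [[0, 50, 100], [0, 50, 200]]})
# Clamp each series to [0, 100] so both rows render on the same scale.
data_df["sales"] = data_df["sales"].apply(
    lambda values: [min(max(v, 0), 100) for v in values]
)
```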
### Current Behavior

The y limits change depending on the max and min value of the input list.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.40.01
- Python version: 3.12.7
- Operating System: Ubuntu 20.04
- Browser: Firefox
### Additional Information
_No response_ | 1medium
|
Title: [Usage]: Does vllm support inflight batch?
Body: ### Your current environment
### How would you like to use vllm
Does vLLM support in-flight batching?
TensorRT-LLM supports it, but I can't find any information in the vLLM documentation.
Could some kind person explain it?
Thank you so much in advance
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 1medium
|
Title: [tabular] Add logging of inference throughput of best model at end of fit
Body: [From user](https://www.kaggle.com/competitions/playground-series-s4e5/discussion/499495#2789917): "It wasn't really clear that predict was going to be going for a long time"
I think we can make this a bit better by mentioning at the end of training the estimated inference throughput of the selected best model, which the user can refer to when gauging how long it will take to do inference on X rows. We have the number already calculated, we just haven't put it as part of the user-visible logging yet.
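A rough sketch of the log line I have in mind (all names here are illustrative, not AutoGluon's actual internals):
```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical inputs: these numbers would come from the validation
# scoring TabularPredictor already performs during fit.
num_val_rows = 10_000
predict_time = 2.5  # seconds, illustrative
rows_per_second = num_val_rows / predict_time
logger.info(
    "Estimated inference throughput of best model: %.0f rows/s "
    "(measured on %d validation rows)", rows_per_second, num_val_rows
)
```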
| 1medium
|
Title: ChromeDriver is up to date, but it still says the chromedriver version is 114
Body: I have downloaded the latest chromedriver but when I run the script, it still says:
```
selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at 127.0.0.1:59122
from session not created: This version of ChromeDriver only supports Chrome version 114
Current browser version is 131.0.6778.86
```
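For what it's worth, a hedged sketch that avoids pointing `binary_location` at the driver (it should point at the Chrome browser binary, not chromedriver.exe) and instead lets undetected_chromedriver fetch a matching driver for a pinned browser major version:
```python
import undetected_chromedriver as uc

# Sketch: let uc download a driver matching the installed Chrome 131;
# binary_location is deliberately left untouched.
options = uc.ChromeOptions()
options.add_argument("--start-maximized")
driver = uc.Chrome(options=options, version_main=131)
```
The code I'm currently running is below: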
```
import time

import undetected_chromedriver as uc
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities


def get_driver():
    print('Opening webdriver...')
    path = "chromedriver.exe"
    options = uc.ChromeOptions()
    options.add_argument("--start-maximized")
    options.binary_location = path
    options.headless = False
    caps = DesiredCapabilities.CHROME
    caps["acceptInsecureCerts"] = True
    caps['goog:loggingPrefs'] = {'performance': 'ALL'}
    options.set_capability(
        "goog:loggingPrefs", {"performance": "ALL"}
    )
    # try:
    driver = uc.Chrome(executable_path=path, options=options, desired_capabilities=caps)
    print('Webdriver Opened.')
    time.sleep(2)
    return driver
``` | 1medium
|
Title: [Feature request] Adding output_dtype attribute to QuantizeLinear
Body: ### System information
Main top-of-tree.
### What is the problem that this feature solves?
QuantizeLinear supports output types UINT8, INT8, UINT16, INT16, UINT4, INT4, FLOAT8*.
In order to specify any type other than the default UINT8, the user should provide a zero-point tensor. The output dtype is derived from the zero-point tensor. This leads to defining the zero-point tensor just to signal the output datatype.
Using the zero-point solely to specify the output data type poses several problems:
1. Increased Model Size: The need to include a zero-point tensor, especially when dealing with block quantization, can lead to unnecessary inflation of the model size. This is because additional data_size/block_size zeros must be included, which do not contribute to the model's functionality but occupy storage and memory resources.
2. Computational Overhead: For backends processing the QuantizeLinear operation, the presence of large zero-point tensors (filled with zeros) requires either checking the values of the zero-point tensor are all zeros, or performing the addition operation.
3. Difficulty in Generating Non-standard Data Types: When exporting models from frameworks such as PyTorch, generating tensors for non-standard data types (e.g., FLOAT8) to serve as zero points is a challenge, limiting the accessibility of model quantization.
### Alternatives considered
_No response_
### Describe the feature
Add an optional output_dtype attribute to QuantizeLinear.
The output_dtype attribute will allow users to directly specify the desired output data type for the QuantizeLinear operation without the need to provide a zero_point tensor.
Supported data types will include UINT8, INT8, UINT16, INT16, UINT4, INT4, and FLOAT8, aligning with the current supported output types.
In case output_dtype is not supplied and zero_point is supplied - data type will be derived from zero_point.
In case neither output_dtype or zero_point are supplied, the default data type will be UINT8.
In case output_dtype and zero_point show conflicting data types - the model is invalid.
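For illustration, a sketch of what a model using the proposed attribute could look like (`output_dtype` does not exist in any current opset; this is exactly the attribute being proposed here):
```python
from onnx import TensorProto, helper

# Sketch only: output_dtype is the proposed attribute.
node = helper.make_node(
    "QuantizeLinear",
    inputs=["x", "x_scale"],         # no zero-point tensor needed
    outputs=["y"],
    output_dtype=TensorProto.INT16,  # proposed attribute
)
```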
### Will this influence the current api (Y/N)?
Yes
Adding an attribute to QuantizeLinear
### Feature Area
Operators
### Are you willing to contribute it (Y/N)
Yes
### Notes
@xadupre | 1medium
|
Title: jsonify does not support integer keys
Body: The snippets below are self-descriptive and reflect the problem mentioned in the title.
Expected behavior: jsonify builds a response irrespective of the key/value data types (at least for basic types like int and str)
Actual behavior: keys of type `int` break `jsonify`
Personal suggestion: just typecast to str, but issue a warning
Minimal code to reproduce the issue:
```
from flask import Flask, jsonify
import json

d = {32: "aa", "something": "else"}
print(json.dumps(d))  # works # <-------

app = Flask('app')
# app.config['JSON_SORT_KEYS'] = False  # <-- makes no difference
with app.app_context():
    print(jsonify(d))  # b0rks # <-------
```
Error log:
```
TypeError Traceback (most recent call last)
<ipython-input-12-d8fbf48063d9> in <module>
1 with app.app_context():
----> 2 jsonify(d)
3
~/.local/lib/python3.10/site-packages/flask/json/__init__.py in jsonify(*args, **kwargs)
168 .. versionadded:: 0.2
169 """
--> 170 return current_app.json.response(*args, **kwargs)
~/.local/lib/python3.10/site-packages/flask/json/provider.py in response(self, *args, **kwargs)
213
214 return self._app.response_class(
--> 215 f"{self.dumps(obj, **dump_args)}\n", mimetype=self.mimetype
216 )
~/.local/lib/python3.10/site-packages/flask/json/provider.py in dumps(self, obj, **kwargs)
178 kwargs.setdefault("ensure_ascii", self.ensure_ascii)
179 kwargs.setdefault("sort_keys", self.sort_keys)
--> 180 return json.dumps(obj, **kwargs)
181
182 def loads(self, s: str | bytes, **kwargs: t.Any) -> t.Any:
/usr/lib/python3.10/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
237 separators=separators, default=default, sort_keys=sort_keys,
--> 238 **kw).encode(obj)
239
240
/usr/lib/python3.10/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
/usr/lib/python3.10/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
TypeError: '<' not supported between instances of 'str' and 'int'
```
- Python version: Python 3.10.12
- Flask version: Flask 2.3.3
- Werkzeug 2.3.7
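For anyone else hitting this, a possible workaround sketch: Flask 2.2+ exposes the JSON provider's `sort_keys` flag on `app.json`, and skipping the sort avoids the int-vs-str key comparison (the keys are still coerced to strings in the output, as `json.dumps` does):
```python
from flask import Flask, jsonify

app = Flask('app')
app.json.sort_keys = False  # skip the int-vs-str key comparison

with app.app_context():
    print(jsonify({32: "aa", "something": "else"}))  # works; 32 becomes "32"
```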
| 1medium
|
Title: Move UFOGen Pipeline and Scheduler to Research Projects
Body: **Is your feature request related to a problem? Please describe.**
Reviewing #6133 and a bit of searching led me to the implementation of the UFOGen paper by the co-authors. From the repo README, it seems there are no model checkpoints provided by the authors, only the training code. Does it make sense to move the pipeline to examples/research_projects?
Repo link: https://github.com/xuyanwu/SIDDMs-UFOGen
Please let me know.
cc: @yiyixuxu , @sayakpaul | 1medium
|
Title: Add 2 more special Automation Policies
Body: Like Default workstation and default server.
Have a "Post Installation" Automation Policy
Have a "Pre-Uninstall" Automation Policy
Then you can have onboarding, and offboarding scripts. I'm sure people will want Pre-Uninstall per Client/Site/Agent... | 1medium
|
Title: Maximum & minimum syntax in filter_query for conditional formatting
Body: | 1medium
|
Title: Bug: changelog docs page refers to 2023 for releases made during 2024
Body: **Describe the bug**
Was just getting acquainted with this project and noticed that latest release is shown as being from 2023-02-04 on the changelog page:
https://github.com/jowilf/starlette-admin/blob/7465db977d748baa43c3f39a20a307c3636bd7be/docs/changelog/index.md?plain=1#L19
All releases made during early 2024 have this typo in the changelog. | 0easy
|
Title: Clustergram row labels change heatmap values/colors
Body: Passing a list of labels to `row_labels` seem to change the values that are plotted in the heatmap rather than just adding a textual label on the side.
```python
import dash_bio as db
import plotly.express as px
iris = px.data.iris()
db.Clustergram(iris.select_dtypes('number').to_numpy())
```

In the plot above there is one value per row in each color. Below, rows are grouped together so that there are only three values/colors per column.
```python
db.Clustergram(iris.select_dtypes('number').to_numpy(), row_labels=iris['species'].to_list())
```

The font size of the labels also doesn't adjust to the size of the plot. I have to change the height to 2000 before I can read what they say:

```
-----
dash 1.6.1
dash_bio 0.4.4
dash_core_components 1.5.1
dash_html_components 1.0.2
numpy 1.17.3
pandas 0.25.3
plotly 4.3.0
-----
IPython 7.9.0
jupyter_client 5.3.3
jupyter_core 4.6.1
notebook 6.0.1
-----
Python 3.8.0 | packaged by conda-forge | (default, Nov 22 2019, 19:11:38) [GCC 7.3.0]
``` | 1medium
|
Title: Gaussian splatting: assertion error (coeffs.shape[-2] == num_sh_bases(degree))
Body: **Describe the bug**
Running ns-train gaussian-splatting crashed with assertion failure
**To Reproduce**
I built a docker image using the gaussian-splatting branch.
1. `git clone https://github.com/nerfstudio-project/nerfstudio.git -b gaussian-splatting --recurse-submodules`
2. `docker build --build-arg CUDA_VERSION=11.8.0 --build-arg CUDA_ARCHITECTURES=86 --build-arg OS_VERSION=22.04 --tag nerfstudio-gs --file Dockerfile .`
3. `docker run --gpus all --privileged --network host --rm -it -v /home/user/workspace/:/workspace -v /mnt/data:/data --shm-size=32G --name nerfstudio-gs nerfstudio-gs`
4. Inside container: `ns-train gaussian-splatting --data data/posters_v3/`
**Expected behavior**
The training should not fail.
**Screenshots**
Logs here
```
user@uscnsl-exxact-server:/workspace/nerfstudio/nerfstudio_ws$ ns-train gaussian-splatting --data data/posters_v3/
[22:49:38] Using --data alias for --data.pipeline.datamanager.data train.py:230
──────────────────────────────────────────────────────── Config ────────────────────────────────────────────────────────
TrainerConfig(
_target=<class 'nerfstudio.engine.trainer.Trainer'>,
output_dir=PosixPath('outputs'),
method_name='gaussian-splatting',
experiment_name=None,
project_name='nerfstudio-project',
timestamp='2023-12-08_224938',
machine=MachineConfig(seed=42, num_devices=1, num_machines=1, machine_rank=0, dist_url='auto', device_type='cuda'),
logging=LoggingConfig(
relative_log_dir=PosixPath('.'),
steps_per_log=10,
max_buffer_size=20,
local_writer=LocalWriterConfig(
_target=<class 'nerfstudio.utils.writer.LocalWriter'>,
enable=True,
stats_to_track=(
<EventName.ITER_TRAIN_TIME: 'Train Iter (time)'>,
<EventName.TRAIN_RAYS_PER_SEC: 'Train Rays / Sec'>,
<EventName.CURR_TEST_PSNR: 'Test PSNR'>,
<EventName.VIS_RAYS_PER_SEC: 'Vis Rays / Sec'>,
<EventName.TEST_RAYS_PER_SEC: 'Test Rays / Sec'>,
<EventName.ETA: 'ETA (time)'>,
<EventName.GAUSSIAN_NUM: 'Number of Gaussians'>
),
max_log_size=10
),
profiler='basic'
),
viewer=ViewerConfig(
relative_log_filename='viewer_log_filename.txt',
websocket_port=None,
websocket_port_default=7007,
websocket_host='0.0.0.0',
num_rays_per_chunk=32768,
max_num_display_images=512,
quit_on_train_completion=False,
image_format='jpeg',
jpeg_quality=70,
make_share_url=False
),
pipeline=VanillaPipelineConfig(
_target=<class 'nerfstudio.pipelines.base_pipeline.VanillaPipeline'>,
datamanager=FullImageDatamanagerConfig(
_target=<class 'nerfstudio.data.datamanagers.full_images_datamanager.FullImageDatamanager'>,
data=PosixPath('data/posters_v3'),
masks_on_gpu=False,
images_on_gpu=False,
dataparser=ColmapDataParserConfig(
_target=<class 'nerfstudio.data.dataparsers.colmap_dataparser.ColmapDataParser'>,
data=PosixPath('.'),
scale_factor=1.0,
downscale_factor=None,
scene_scale=1.0,
orientation_method='up',
center_method='poses',
auto_scale_poses=True,
train_split_fraction=0.9,
depth_unit_scale_factor=0.001,
images_path=PosixPath('images'),
masks_path=None,
depths_path=None,
colmap_path=PosixPath('colmap/sparse/0'),
load_3D_points=True,
max_2D_matches_per_3D_point=-1
),
camera_res_scale_factor=1.0,
eval_num_images_to_sample_from=-1,
eval_num_times_to_repeat_images=-1,
eval_image_indices=(0,),
cache_images='cpu'
),
model=GaussianSplattingModelConfig(
_target=<class 'nerfstudio.models.gaussian_splatting.GaussianSplattingModel'>,
enable_collider=True,
collider_params={'near_plane': 2.0, 'far_plane': 6.0},
loss_coefficients={'rgb_loss_coarse': 1.0, 'rgb_loss_fine': 1.0},
eval_num_rays_per_chunk=4096,
prompt=None,
warmup_length=500,
refine_every=100,
resolution_schedule=250,
num_downscales=2,
cull_alpha_thresh=0.1,
cull_scale_thresh=0.5,
reset_alpha_every=30,
densify_grad_thresh=0.0002,
densify_size_thresh=0.01,
n_split_samples=2,
sh_degree_interval=1000,
cull_screen_size=0.15,
split_screen_size=0.05,
stop_screen_size_at=4000,
random_init=False,
extra_points=0,
ssim_lambda=0.2,
stop_split_at=15000,
sh_degree=4,
camera_optimizer=CameraOptimizerConfig(
_target=<class 'nerfstudio.cameras.camera_optimizers.CameraOptimizer'>,
mode='off',
trans_l2_penalty=0.0001,
rot_l2_penalty=0.0001,
optimizer=None,
scheduler=None
)
)
),
optimizers={
'xyz': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.00016,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=1.6e-06,
warmup_steps=0,
max_steps=30000,
ramp='cosine'
)
},
'color': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.0005,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=0.0001,
warmup_steps=0,
max_steps=30000,
ramp='cosine'
)
},
'opacity': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.05,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': None
},
'scaling': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.005,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=0.001,
warmup_steps=0,
max_steps=30000,
ramp='cosine'
)
},
'rotation': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.001,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': None
},
'camera_opt': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.001,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=5e-05,
warmup_steps=0,
max_steps=30000,
ramp='cosine'
)
}
},
vis='viewer_beta',
data=PosixPath('data/posters_v3'),
prompt=None,
relative_model_dir=PosixPath('nerfstudio_models'),
load_scheduler=True,
steps_per_save=2000,
steps_per_eval_batch=100,
steps_per_eval_image=100,
steps_per_eval_all_images=100000,
max_num_iterations=30000,
mixed_precision=False,
use_grad_scaler=False,
save_only_latest_checkpoint=True,
load_dir=None,
load_step=None,
load_config=None,
load_checkpoint=None,
log_gradients=False,
gradient_accumulation_steps={'camera_opt': 100, 'color': 10, 'shs': 10}
)
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Saving config to: outputs/posters_v3/gaussian-splatting/2023-12-08_224938/config.yml experiment_config.py:141
Saving checkpoints to: trainer.py:135
outputs/posters_v3/gaussian-splatting/2023-12-08_224938/nerfstudio_models
Using image downscale factor of 2 colmap_dataparser.py:471
[22:49:40] Caching / undistorting train images full_images_datamanager.py:128
[22:49:45] Caching / undistorting eval images full_images_datamanager.py:199
╭───────────────── viser ─────────────────╮
│           ╷                             │
│ HTTP      │ http://0.0.0.0:7007         │
│ Websocket │ ws://0.0.0.0:7007           │
│           ╵                             │
╰─────────────────────────────────────────╯
[NOTE] Not running eval iterations since only viewer is enabled.
Use --vis {wandb, tensorboard, viewer+wandb, viewer+tensorboard} to run with eval.
No Nerfstudio checkpoint to load, so training from scratch.
Disabled comet/tensorboard/wandb event writers
/home/user/.local/lib/python3.10/site-packages/torchvision/transforms/functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
warnings.warn(
Printing profiling stats, from longest to shortest duration in seconds
Trainer.train_iteration: 0.9369
VanillaPipeline.get_train_loss_dict: 0.9365
Traceback (most recent call last):
File "/home/user/.local/bin/ns-train", line 8, in <module>
sys.exit(entrypoint())
File "/home/user/nerfstudio/nerfstudio/scripts/train.py", line 262, in entrypoint
main(
File "/home/user/nerfstudio/nerfstudio/scripts/train.py", line 247, in main
launch(
File "/home/user/nerfstudio/nerfstudio/scripts/train.py", line 189, in launch
main_func(local_rank=0, world_size=world_size, config=config)
File "/home/user/nerfstudio/nerfstudio/scripts/train.py", line 100, in train_loop
trainer.train()
File "/home/user/nerfstudio/nerfstudio/engine/trainer.py", line 253, in train
loss, loss_dict, metrics_dict = self.train_iteration(step)
File "/home/user/nerfstudio/nerfstudio/utils/profiler.py", line 127, in inner
out = func(*args, **kwargs)
File "/home/user/nerfstudio/nerfstudio/engine/trainer.py", line 471, in train_iteration
_, loss_dict, metrics_dict = self.pipeline.get_train_loss_dict(step=step)
File "/home/user/nerfstudio/nerfstudio/utils/profiler.py", line 127, in inner
out = func(*args, **kwargs)
File "/home/user/nerfstudio/nerfstudio/pipelines/base_pipeline.py", line 306, in get_train_loss_dict
model_outputs = self._model(ray_bundle) # train distributed data parallel model if world_size > 1
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/nerfstudio/nerfstudio/models/base_model.py", line 143, in forward
return self.get_outputs(ray_bundle)
File "/home/user/nerfstudio/nerfstudio/models/gaussian_splatting.py", line 588, in get_outputs
rgbs = SphericalHarmonics.apply(n, viewdirs, colors_crop)
File "/home/user/.local/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/user/.local/lib/python3.10/site-packages/gsplat/sh.py", line 39, in forward
assert coeffs.shape[-2] == num_sh_bases(degree)
AssertionError
```
Any help would be much appreciated. | 1medium
|
Title: Add mlflow tracking
Body: First of all thanks for the project, it's an interesting way to take a stab at reducing the amount of boilerplate needed even for fairly simple models. Secondly, it would be interesting to implement experiment/run tracking using [MLflow][1].
Have a working example on the `Image classification_PyTorch/` template, happy to submit a PR if you consider this of any interest.
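For context, a minimal sketch of what that tracking looks like (`lr`, `batch_size`, `num_epochs`, and `train_one_epoch` are placeholder names for pieces of the template's training loop):
```python
import mlflow

with mlflow.start_run():
    mlflow.log_params({"lr": lr, "batch_size": batch_size})
    for epoch in range(num_epochs):
        train_loss = train_one_epoch(model, train_loader)  # placeholder helper
        mlflow.log_metric("train_loss", train_loss, step=epoch)
    mlflow.pytorch.log_model(model, "model")
```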
[1]: https://www.mlflow.org/docs/latest/index.html | 1medium
|
Title: PowerTransformer overflow warnings
Body: ### Describe the bug
I'm running into overflow warnings using PowerTransformer in some not-very-extreme scenarios. I've been able to find at least one boundary of the problem, where a vector of `[[1]] * 354 + [[0]] * 1` works fine, while `[[1]] * 355 + [[0]] * 1` throws up ("overflow encountered in multiply"). Also, an additional warning starts happening at `[[1]] * 359 + [[0]] * 1` ("overflow encountered in reduce").
Admittedly, I haven't looked into the underlying math of Yeo-Johnson, so an overflow might make sense in that light. (If that's the case, though, perhaps this is an opportunity for a clearer warning?)
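One way to probe this is to inspect the fitted lambda; a large-magnitude lambda raises values to a big power inside the transform, which would plausibly explain the overflow:
```python
from sklearn.preprocessing import PowerTransformer

# The fitted Yeo-Johnson lambda for the near-constant column.
pt = PowerTransformer().fit([[1]] * 355 + [[0]] * 1)
print(pt.lambdas_)
```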
### Steps/Code to Reproduce
```python
import sys
from sklearn.preprocessing import PowerTransformer
for n in range(350, 360):
print(f"[[1]] * {n}, [[0]] * 1", file=sys.stderr)
_ = PowerTransformer().fit_transform([[1]] * n + [[0]] * 1)
print(file=sys.stderr)
```
### Expected Results
```
[[1]] * 350, [[0]] * 1
[[1]] * 351, [[0]] * 1
[[1]] * 352, [[0]] * 1
[[1]] * 353, [[0]] * 1
[[1]] * 354, [[0]] * 1
[[1]] * 355, [[0]] * 1
[[1]] * 356, [[0]] * 1
[[1]] * 357, [[0]] * 1
[[1]] * 358, [[0]] * 1
[[1]] * 359, [[0]] * 1
```
### Actual Results
```
[[1]] * 350, [[0]] * 1
[[1]] * 351, [[0]] * 1
[[1]] * 352, [[0]] * 1
[[1]] * 353, [[0]] * 1
[[1]] * 354, [[0]] * 1
[[1]] * 355, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 356, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 357, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 358, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 359, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:205: RuntimeWarning: overflow encountered in reduce
ret = umr_sum(x, axis, dtype, out, keepdims=keepdims, where=where)
```
### Versions
```shell
System:
python: 3.11.9 (main, May 16 2024, 15:17:37) [Clang 14.0.3 (clang-1403.0.22.14.1)]
executable: /Users/*****/.pyenv/versions/3.11.9/envs/disposable/bin/python
machine: macOS-15.2-arm64-arm-64bit
Python dependencies:
sklearn: 1.6.1
pip: 24.0
setuptools: 65.5.0
numpy: 2.2.3
scipy: 1.15.2
Cython: None
pandas: None
matplotlib: None
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libomp
filepath: /Users/*****/.pyenv/versions/3.11.9/envs/disposable/lib/python3.11/site-packages/sklearn/.dylibs/libomp.dylib
version: None
``` | 1medium
|
Title: Facing an issue while running this code: 'utf-8' codec can't decode byte 0xa4 in position 14: invalid start byte
Body: 'utf-8' codec can't decode byte 0xa4 in position 14: invalid start byte
The full code is attached in a Word file, along with the csv file.
[mcqs.csv](https://github.com/minimaxir/textgenrnn/files/7950821/mcqs.csv)
[quiz.docx](https://github.com/minimaxir/textgenrnn/files/7950823/quiz.docx)
| 1medium
|
Title: TimeSeries not working
Body: This code does not work:
```py
ts = TimeSeries(key=ALPHAVANTAGE_API_KEY, output_format='pandas')
data, meta_data = ts.get_intraday(symbol='MSFT',interval='1min', outputsize='full')
```
It creates the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-49784970f022> in <module>()
3
4 ts = TimeSeries(key=ALPHAVANTAGE_API_KEY, output_format='pandas')
----> 5 data, meta_data = ts.get_intraday(symbol='MSFT',interval='1min', outputsize='full')
6
7 data['close'].plot()
D:\Programs\Anaconda3\lib\site-packages\alpha_vantage\alphavantage.py in _format_wrapper(self, *args, **kwargs)
171 def _format_wrapper(self, *args, **kwargs):
172 call_response, data_key, meta_data_key = func(
--> 173 self, *args, **kwargs)
174 if 'json' in self.output_format.lower() or 'pandas' \
175 in self.output_format.lower():
D:\Programs\Anaconda3\lib\site-packages\alpha_vantage\alphavantage.py in _call_wrapper(self, *args, **kwargs)
156 else:
157 url = '{}&apikey={}'.format(url, self.key)
--> 158 return self._handle_api_call(url), data_key, meta_data_key
159 return _call_wrapper
160
D:\Programs\Anaconda3\lib\site-packages\alpha_vantage\alphavantage.py in _retry_wrapper(self, *args, **kwargs)
75 except ValueError as err:
76 error_message = str(err)
---> 77 raise ValueError(str(error_message))
78 return _retry_wrapper
79
ValueError: Invalid API call. Please retry or visit the documentation (https://www.alphavantage.co/documentation/) for TIME_SERIES_INTRADAY.
```
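To debug this, it can help to hit the documented endpoint directly and inspect the raw JSON the wrapper is choking on; rate-limit and error notes show up there as plain messages (replace `YOUR_KEY` with a real key):
```python
import requests

url = (
    "https://www.alphavantage.co/query"
    "?function=TIME_SERIES_INTRADAY&symbol=MSFT&interval=1min"
    "&outputsize=full&apikey=YOUR_KEY"
)
print(requests.get(url).json())
```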
I am using Python 3.6.4 on Jupyter Notebook. | 1medium
|
Title: Deserializing Error when loading models from '.keras' files in Keras 3, issue with dense layers
Body: I am using Google Colab with the Tensorflow v2.17 and Keras v 3.4.1 libraries.
I need to save and load my models, but I haven't been able to make the '.keras' file format load correctly.
Here is the line for saving the model:
```model.save(os.path.join(model_path, 'model_' + model_name + '.keras'))```
Here is the line for loading the model:
```model = keras.models.load_model(os.path.join(model_path, 'model_' + model_name + '.keras'), custom_objects=custom_objects)```
This is my error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-9-882590e77519>](https://localhost:8080/#) in <cell line: 10>()
8
9 # Load the model
---> 10 model = keras.models.load_model(os.path.join(model_path, 'model_' + model_name + '.keras'), custom_objects=custom_objects)
11
12
3 frames
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _raise_loading_failure(error_msgs, warn_only)
454 warnings.warn(msg)
455 else:
--> 456 raise ValueError(msg)
457
458
ValueError: A total of 2 objects could not be loaded. Example error message for object <Dense name=z_mean, built=True>:
Layer 'z_mean' expected 2 variables, but received 0 variables during loading. Expected: ['kernel', 'bias']
List of objects that could not be loaded:
[<Dense name=z_mean, built=True>, <Dense name=z_log_var, built=True>]
```
This is the model that I trained:
```
latent_dim = 32

# Encoder
encoder_input = Input(shape=(height, width, channels), name='encoder_input')
x = Conv2D(64, (3, 3), activation='relu', padding='same')(encoder_input)

# Flatten layer
shape_before_flattening = K.int_shape(x)[1:]
x = Flatten()(x)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)

# Reparameterization trick
@keras.saving.register_keras_serializable()
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0., stddev=1.0)
    return z_mean + K.exp(z_log_var / 2) * epsilon

z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])

# Decoder
decoder_input = Input(K.int_shape(z)[1:])
x = Dense(np.prod(shape_before_flattening))(decoder_input)
x = Reshape(shape_before_flattening)(x)
decoder_output = Conv2D(channels, (3, 3), activation='sigmoid', padding='same')(x)

@register_keras_serializable('CustomLayer')
class CustomLayer(keras.layers.Layer):
    def __init__(self, beta=1.0, **kwargs):
        self.is_placeholder = True
        super(CustomLayer, self).__init__(**kwargs)
        self.beta = beta
        self.recon_loss_metric = tf.keras.metrics.Mean(name='recon_loss')
        self.kl_loss_metric = tf.keras.metrics.Mean(name='kl_loss')

    def vae_loss(self, x, z_decoded, z_mean, z_log_var):
        recon_loss = keras.losses.binary_crossentropy(K.flatten(x), K.flatten(z_decoded))
        kl_loss = -0.5 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
        return recon_loss, self.beta * kl_loss

    def call(self, inputs):
        x = inputs[0]
        z_decoded = inputs[1]
        z_mean = inputs[2]
        z_log_var = inputs[3]
        recon_loss, kl_loss = self.vae_loss(x, z_decoded, z_mean, z_log_var)
        self.add_loss(K.mean(recon_loss + kl_loss))
        self.recon_loss_metric.update_state(recon_loss)
        self.kl_loss_metric.update_state(kl_loss)
        return x

    def compute_output_shape(self, input_shape):
        return input_shape[0]

    def get_metrics(self):
        return {'recon_loss': self.recon_loss_metric.result().numpy(),
                'kl_loss': self.kl_loss_metric.result().numpy()}

# Models
encoder = Model(encoder_input, [z_mean, z_log_var, z], name='encoder')
decoder = Model(decoder_input, decoder_output, name='decoder')
vae_output = decoder(encoder(encoder_input)[2])
y = CustomLayer()([encoder_input, vae_output, z_mean, z_log_var])
model = Model(encoder_input, y, name='vae')
```
This model was just used for testing the bug. I have used `tf.keras` as an alternative for loading the model, but I received the same error. Interestingly, when I run the code for the first time, this line is included in the error output. When the same code is run again, the line is no longer included:
```
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py:576: UserWarning: Skipping variable loading for optimizer 'adam', because it has 30 variables whereas the saved optimizer has 22 variables.
saveable.load_own_variables(weights_store.get(inner_path))
```
I have tested the code on the latest Keras v3.5 and have gotten similar results:
```
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py:713: UserWarning: Skipping variable loading for optimizer 'adam', because it has 30 variables whereas the saved optimizer has 22 variables.
saveable.load_own_variables(weights_store.get(inner_path))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-9-00610835a4a5>](https://localhost:8080/#) in <cell line: 10>()
8
9 # Load the model
---> 10 model = keras.models.load_model(os.path.join(model_path, 'model_' + model_name + '.keras'), custom_objects=custom_objects)
11
12
3 frames
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _raise_loading_failure(error_msgs, warn_only)
591 warnings.warn(msg)
592 else:
--> 593 raise ValueError(msg)
594
595
ValueError: A total of 2 objects could not be loaded. Example error message for object <Dense name=z_mean, built=True>:
Layer 'z_mean' expected 2 variables, but received 0 variables during loading. Expected: ['kernel', 'bias']
List of objects that could not be loaded:
[<Dense name=z_mean, built=True>, <Dense name=z_log_var, built=True>]
```
I have tested the bug again by saving and loading the model into separate weights and json files:
```
# saving
with open(os.path.join(model_path, 'model_' + model_name + '.json'), 'w') as json_file:
json_file.write(model.to_json())
model.save_weights(os.path.join(model_path, 'model_' + model_name + '.weights.h5'))
# loading
with open(os.path.join(model_path, 'model_' + model_name + '.json'), 'r') as json_file:
model_json = json_file.read()
model = model_from_json(model_json, custom_objects=custom_objects)
model.load_weights(os.path.join(model_path, 'model_' + model_name + '.weights.h5'))
```
The error is at least slightly different:
```
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py:713: UserWarning: Skipping variable loading for optimizer 'adam', because it has 34 variables whereas the saved optimizer has 22 variables.
saveable.load_own_variables(weights_store.get(inner_path))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-14-52bd158e3e0f>](https://localhost:8080/#) in <cell line: 11>()
9 model_json = json_file.read()
10 model = model_from_json(model_json, custom_objects=custom_objects)
---> 11 model.load_weights(os.path.join(model_path, 'model_' + model_name + '.weights.h5'))
12
13 # Load the encoder architecture and weights
1 frames
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _raise_loading_failure(error_msgs, warn_only)
591 warnings.warn(msg)
592 else:
--> 593 raise ValueError(msg)
594
595
ValueError: A total of 3 objects could not be loaded. Example error message for object <Conv2D name=conv2d, built=True>:
Layer 'conv2d' expected 2 variables, but received 0 variables during loading. Expected: ['kernel', 'bias']
List of objects that could not be loaded:
[<Conv2D name=conv2d, built=True>, <Dense name=z_mean, built=True>, <Dense name=z_log_var, built=True>]
```
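One thing I still plan to rule out (an assumption on my part, not a confirmed cause) is the custom layer's config not round-tripping, since `beta` is never serialized:
```python
# Sketch: give CustomLayer a get_config so the .keras loader can
# re-create it with the same constructor arguments.
@register_keras_serializable('CustomLayer')
class CustomLayer(keras.layers.Layer):
    def __init__(self, beta=1.0, **kwargs):
        super().__init__(**kwargs)
        self.beta = beta

    def get_config(self):
        config = super().get_config()
        config.update({'beta': self.beta})
        return config
```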
Ultimately it would be a lot better to find out that I've been doing something wrong and can fix this problem myself. I've been hung up on this for a while, and I have a thesis to write. | 2hard
|
Title: ArcLayer example fails
Body: ## Context
The ArcLayer example fails due to a lack of geometry data in the `pyarrow` table when running the `U.S. County-to-County Migration` Python example.
## Resulting behaviour, error message or logs
The example works perfectly all the way up to the creation of the `ArcLayer` object layer for the map generation.
Log:
```python
---------------------------------------------------------------------------
TraitError Traceback (most recent call last)
Cell In[13], line 6
1 # value = np.array([arc["value"] for arc in arcs])
2 # get_source_position = np.array([arc["source"] for arc in arcs])
3 # get_target_position = np.array([arc["target"] for arc in arcs])
4 # table = pa.table({"value": value})
----> 6 arc_layer = ArcLayer(
7 table=table,
8 get_source_position=get_source_position,
9 get_target_position=get_target_position,
10 get_source_color=SOURCE_COLOR,
11 get_target_color=TARGET_COLOR,
12 get_width=1,
13 opacity=0.4,
14 pickable=False,
15 extensions=[brushing_extension],
16 brushing_radius=brushing_radius,
17 )
File [~/python3.10/site-packages/lonboard/_layer.py:359](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/_layer.py#line=358), in BaseArrowLayer.__init__(self, table, _rows_per_chunk, **kwargs)
355 self._rows_per_chunk = rows_per_chunk
357 table_o3 = table_o3.rechunk(max_chunksize=rows_per_chunk)
--> 359 super().__init__(table=table_o3, **kwargs)
File [~/python3.10/site-packages/lonboard/_layer.py:95](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/_layer.py#line=94), in BaseLayer.__init__(self, extensions, **kwargs)
88 def __init__(self, *, extensions: Sequence[BaseExtension] = (), **kwargs):
89 # We allow layer extensions to dynamically inject properties onto the layer
90 # widgets where the layer is defined. We wish to allow extensions and their
91 # properties to be passed in the layer constructor. _However_, if
93 extension_kwargs = remove_extension_kwargs(extensions, kwargs)
---> 95 super().__init__(extensions=extensions, **kwargs)
97 # Dynamically set layer traits from extensions after calling __init__
98 self._add_extension_traits(extensions)
File [~/python3.10/site-packages/lonboard/_base.py:25](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/_base.py#line=24), in BaseWidget.__init__(self, **kwargs)
22 if provided_trait_name not in layer_trait_names:
23 raise TypeError(msg.format(provided_trait_name=provided_trait_name))
---> 25 super().__init__(**kwargs)
File [~/python3.10/site-packages/ipywidgets/widgets/widget.py:503](http://localhost:8888/lab/tree/~/python3.10/site-packages/ipywidgets/widgets/widget.py#line=502), in Widget.__init__(self, **kwargs)
501 """Public constructor"""
502 self._model_id = kwargs.pop('model_id', None)
--> 503 super().__init__(**kwargs)
505 Widget._call_widget_constructed(self)
506 self.open()
File [~/python3.10/site-packages/traitlets/traitlets.py:1355](http://localhost:8888/lab/tree/~/python3.10/site-packages/traitlets/traitlets.py#line=1354), in HasTraits.__init__(self, *args, **kwargs)
1353 for key, value in kwargs.items():
1354 if self.has_trait(key):
-> 1355 setattr(self, key, value)
1356 changes[key] = Bunch(
1357 name=key,
1358 old=None,
(...)
1361 type="change",
1362 )
1363 else:
1364 # passthrough args that don't set traits to super
File [~/python3.10/site-packages/traitlets/traitlets.py:716](http://localhost:8888/lab/tree/~/python3.10/site-packages/traitlets/traitlets.py#line=715), in TraitType.__set__(self, obj, value)
714 if self.read_only:
715 raise TraitError('The "%s" trait is read-only.' % self.name)
--> 716 self.set(obj, value)
File [~/python3.10/site-packages/traitlets/traitlets.py:690](http://localhost:8888/lab/tree/~/python3.10/site-packages/traitlets/traitlets.py#line=689), in TraitType.set(self, obj, value)
689 def set(self, obj: HasTraits, value: S) -> None:
--> 690 new_value = self._validate(obj, value)
691 assert self.name is not None
692 try:
File [~/python3.10/site-packages/traitlets/traitlets.py:722](http://localhost:8888/lab/tree/~/python3.10/site-packages/traitlets/traitlets.py#line=721), in TraitType._validate(self, obj, value)
720 return value
721 if hasattr(self, "validate"):
--> 722 value = self.validate(obj, value)
723 if obj._cross_validation_lock is False:
724 value = self._cross_validate(obj, value)
File [~/python3.10/site-packages/lonboard/traits.py:204](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/traits.py#line=203), in ArrowTableTrait.validate(self, obj, value)
201 geom_col_idx = get_geometry_column_index(value.schema)
203 if geom_col_idx is None:
--> 204 return self.error(obj, value, info="geometry column in table")
206 # No restriction on the allowed geometry types in this table
207 if allowed_geometry_types:
File [~/python3.10/site-packages/lonboard/traits.py:153](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/traits.py#line=152), in FixedErrorTraitType.error(self, obj, value, error, info)
145 else:
146 e = "The '{}' trait expected {}, not {}.".format(
147 self.name,
148 # CHANGED:
(...)
151 describe("the", value),
152 )
--> 153 raise TraitError(e)
TraitError: The 'table' trait of an ArcLayer instance expected geometry column in table, not the Table arro3.core.Table
-----------
value: Int64
```
## Environment
- OS: mac os 14.5, python venv Python 3.10.14
- Browser:Chrome
- Lonboard Version: 0.10.4
- geoarrow-c==0.1.2
- geoarrow-pandas==0.1.1
- geoarrow-pyarrow==0.1.2
- geoarrow-rust==0.1.0
- geoarrow-rust-compute==0.3.0
- geoarrow-rust-core==0.4.0b3
- geoarrow-rust-io==0.3.0
- pyarrow==19.0.0
## Steps to reproduce the bug
1. Run the `U.S. County-to-County Migration` Jupyter notebook example (https://github.com/developmentseed/lonboard/blob/main/examples/migration.ipynb)
Thank you for all the great work and effort put into this library and the geo* libraries too!
Have a good day. | 1medium
|
Title: Is it possible to run this project on an Intel GPU with OpenCL ?
Body: Would a from-scratch implementation of this project (highly optimized for Intel GPUs) be any faster than the CUDA implementation?
Or would it be any faster than the Intel "cpu-only" implementation when rewritten to work with PlaidML? | 3misc
|
Title: Admin backend image upload returns the wrong address
Body: When uploading an image from the admin backend, the returned address points at `media`, but the media URL routes are only configured under debug mode, so the image is not served outside of it. nginx is not configured for it either. Does this count as a bug?
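For reference, the standard dev-mode serving hook (a sketch assuming default `MEDIA_URL`/`MEDIA_ROOT` settings; in production the equivalent would be an nginx `location /media/ { alias ...; }` block):
```python
# urls.py sketch: serve MEDIA_ROOT only while DEBUG is on.
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... existing routes ...
]
if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```
| 1medium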
|
Title: Convert invokes FFmpeg with redundant & conflicting arguments
Body: **Crash reports MUST be included when reporting bugs.**
**Describe the bug**
FaceSwap convert invokes FFmpeg on the writer side with 2 sets of conflicting output codec options. The first set is generated by write_frames in imageio-ffmpeg, the second by output_params in convert's ffmpeg module.
/mnt/data/homedir/miniconda3/envs/faceswap/bin/ffmpeg -y -f rawvideo -vcodec rawvideo -s 3840x2160 -pix_fmt rgb24 -r 29.97 -i - -an **-vcodec libx264 -pix_fmt yuv420p -crf 25** -v error -vf scale=3840:2160 **-c:v libx264 -crf 23 -preset medium** /mnt/data/workspace/18/output.mp4
https://github.com/deepfakes/faceswap/blob/183aee37e93708c0ae73845face5b4469319ebd3/plugins/convert/writer/ffmpeg.py#L95
**To Reproduce**
Steps to reproduce the behavior:
1. Run a convert
2. Inspect ffmpeg arguments with `ps aux | grep ffmpeg`
**Expected behavior**
FFmpeg invocation should not have redundant/conflicting arguments.
**Desktop (please complete the following information):**
- OS: CentOS 8
- Python Version 3.6.8
- Conda Version [e.g. 4.5.12]
- Commit ID 09c7d8aca3c608d1afad941ea78e9fd9b64d9219
| 1medium
|
Title: freetype font and pillow
Body: It seems the `getsize` method has been removed in Pillow versions above 9.5 (it was deprecated and then removed in Pillow 10). So either change the method or pin the library to 9.5.
Here is the log I get for now.
```
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/vahidajalluian/yolov5-7.0/utils/plots.py", line 305, in plot_images
annotator.box_label(box, label, color=color)
File "/home/vahidajalluian/yolov5-7.0/utils/plots.py", line 91, in box_label
w, h = self.font.getsize(label) # text width, height
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
Exception in thread Thread-6 (plot_images):
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/vahidajalluian/yolov5-7.0/utils/plots.py", line 305, in plot_images
annotator.box_label(box, label, color=color)
File "/home/vahidajalluian/yolov5-7.0/utils/plots.py", line 91, in box_label
w, h = self.font.getsize(label) # text width, height
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
```
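The usual migration, as a sketch for `utils/plots.py` (Pillow 10 removed `getsize` in favor of `getbbox`):
```python
# drop-in replacement for: w, h = self.font.getsize(label)
left, top, right, bottom = self.font.getbbox(label)
w, h = right - left, bottom - top
```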
| 1medium
|
Title: Installation in Kaggle give error
Body: !pip install autogluon
This gives me an error in a Kaggle notebook. How do I resolve this?
File /opt/conda/lib/python3.10/site-packages/sklearn/feature_selection/_base.py:14
11 from scipy.sparse import csc_matrix, issparse
13 from ..base import TransformerMixin
---> 14 from ..utils import (
15 _is_pandas_df,
16 _safe_indexing,
17 check_array,
18 safe_sqr,
19 )
20 from ..utils._set_output import _get_output_config
21 from ..utils._tags import _safe_tags
ImportError: cannot import name '_is_pandas_df' from 'sklearn.utils' (/opt/conda/lib/python3.10/site-packages/sklearn/utils/__init__.py) | 1medium
|
Title: Error unknown option
Body: Hello, I get this error every time I use fuck: `history: Unknown option '--exact'`
How to reproduce: use the fuck
The Fuck 3.14 using Python 2.7.10
fish, version 2.3.1
mac OS Sierra 10.12.2 | 1medium
|
Title: ERROR: Exception in ASGI application
Body: I am installing Kohya on Kaggle following a guide.
It gives the error below, which I can't make any sense of.
It looks gradio-related, since the entire traceback is about gradio.
Both the gradio live share and a local run on Kaggle give the error, so I doubt it is related to the gradio live share.
```
gradio==5.4.0
gradio_client==1.4.2
fastapi==0.115.8
uvicorn==0.34.0
starlette==0.45.3
anyio==3.7.1
python 3.10.12
```
```
* Running on local URL: http://127.0.0.1:7860/
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 790, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 214, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 37, in run_in_threadpool
return await anyio.to_thread.run_sync(func)
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 549, in main
gradio_api_info = api_info(request)
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 579, in api_info
api_info = utils.safe_deepcopy(app.get_blocks().get_api_info())
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2982, in get_api_info
python_type = client_utils.json_schema_to_python_type(info)
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 911, in json_schema_to_python_type
type_ = _json_schema_to_python_type(schema, schema.get("$defs"))
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 965, in _json_schema_to_python_type
des = [
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 966, in <listcomp>
f"{n}: {_json_schema_to_python_type(v, defs)}{get_desc(v)}"
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 973, in _json_schema_to_python_type
f"str, {_json_schema_to_python_type(schema['additionalProperties'], defs)}"
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 919, in _json_schema_to_python_type
type_ = get_type(schema)
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 880, in get_type
if "const" in schema:
TypeError: argument of type 'bool' is not iterable
ERROR: Exception in ASGI application
```

| 2hard
|
Title: [BUG] Right Enviroment for custom Qwen2-VL quantization using AutoGPTQ
Body: Hi,
For the last couple of weeks I have been struggling to quantize my custom Qwen2-VL model using GPTQ.
There is a lot of confusion regarding the correct versions of CUDA, PyTorch, Auto-GPTQ, transformers and tokenizers required to successfully quantize the model.
If anyone can help me out with this, that would be great.
For now my environment is:
CUDA : 12.1
Python : 3.12
Pytorch : 2.4
auto_gptq : 0.5.0 (also tried 0.6.0 and 0.7.0 but not working)
transformers : 4.46.3
tokenizers : 0.20.3
My quantization code :
```
from transformers import AutoTokenizer, TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import logging
import ast
logging.basicConfig(
format="%(asctime)s %(levelname)s [%(name)s] %(message)s", level=logging.INFO, datefmt="%Y-%m-%d %H:%M:%S"
)
pretrained_model_dir = "/home/bhavya/Desktop/bhavya/llm/LLaMA-Factory/models/qwen2_vl_lora_sft"
quantized_model_dir = "/home/bhavya/Desktop/bhavya/llm/LLaMA-Factory/models/qwen2_vl_7b_4bit_gptq"
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
# opening the file in read mode
my_file = open("/home/bhavya/Desktop/bhavya/llm/quantize_code/dataset_caliber.txt", "r")
# reading the file
data = my_file.read()
data_into_list = data.split("\n")
datasetlist = data_into_list[:-1]
# printing the data
print(len(datasetlist))
print(type(datasetlist[0]))
dataset = []
for x in datasetlist:
print('x')
print(x)
x1 = ast.literal_eval(x)
dataset.append(x1)
print('dataset')
print(dataset[0])
print(type(dataset[0]))
quantize_config = BaseQuantizeConfig(
bits=4, # quantize model to 4-bit
group_size=128, # it is recommended to set the value to 128
desc_act=False, # set to False can significantly speed up inference but the perplexity may slightly bad
)
# load un-quantized model, by default, the model will always be loaded into CPU memory
model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
# quantize model, the examples should be list of dict whose keys can only be "input_ids" and "attention_mask"
model.quantize(dataset)
# save quantized model
# model.save_quantized(quantized_model_dir)
# save quantized model using safetensors
model.save_quantized(quantized_model_dir, use_safetensors=True)
# push quantized model to Hugging Face Hub.
# to use use_auth_token=True, Login first via huggingface-cli login.
# or pass explcit token with: use_auth_token="hf_xxxxxxx"
# (uncomment the following three lines to enable this feature)
# repo_id = f"YourUserName/{quantized_model_dir}"
# commit_message = f"AutoGPTQ model for {pretrained_model_dir}: {quantize_config.bits}bits, gr{quantize_config.group_size}, desc_act={quantize_config.desc_act}"
# model.push_to_hub(repo_id, commit_message=commit_message, use_auth_token=True)
# alternatively you can save and push at the same time
# (uncomment the following three lines to enable this feature)
# repo_id = f"YourUserName/{quantized_model_dir}"
# commit_message = f"AutoGPTQ model for {pretrained_model_dir}: {quantize_config.bits}bits, gr{quantize_config.group_size}, desc_act={quantize_config.desc_act}"
# model.push_to_hub(repo_id, save_dir=quantized_model_dir, use_safetensors=True, commit_message=commit_message, use_auth_token=True)
# load quantized model to the first GPU
# model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0")
# download quantized model from Hugging Face Hub and load to the first GPU
# model = AutoGPTQForCausalLM.from_quantized(repo_id, device="cuda:0", use_safetensors=True, use_triton=False)
# inference with model.generate
# print(tokenizer.decode(model.generate(**tokenizer("auto_gptq is", return_tensors="pt").to(model.device))[0]))
# or you can also use pipeline
# pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)
# print(pipeline("auto-gptq is")[0]["generated_text"])
```
Error:
```
Traceback (most recent call last):
File "/home/bhavya/Desktop/bhavya/llm/quantize_code/qwen2_quantize_gptq.py", line 45, in <module>
model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
File "/home/bhavya/anaconda3/envs/autogptq-env/lib/python3.10/site-packages/auto_gptq/modeling/auto.py", line 75, in from_pretrained
model_type = check_and_get_model_type(pretrained_model_name_or_path, trust_remote_code)
File "/home/bhavya/anaconda3/envs/autogptq-env/lib/python3.10/site-packages/auto_gptq/modeling/_utils.py", line 305, in check_and_get_model_type
raise TypeError(f"{config.model_type} isn't supported yet.")
TypeError: qwen2_vl isn't supported yet.
```
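For what it's worth, one route I am considering is the actively maintained GPTQModel fork, which lists Qwen2-VL support. A sketch based on my reading of its README; the names (`GPTQModel.load`, `QuantizeConfig`) are assumptions and may differ between releases:
```python
# Untested sketch; verify against the GPTQModel release you install.
from gptqmodel import GPTQModel, QuantizeConfig

quant_config = QuantizeConfig(bits=4, group_size=128)
model = GPTQModel.load(pretrained_model_dir, quant_config)
model.quantize(dataset)            # same list-of-dicts calibration data
model.save(quantized_model_dir)
```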
Please help me out. | 2hard
|
Title: pull: produces empty directory
Body: # Bug Report
## Description
I have DVC set up with S3 remotes, probably misconfigured. When I do `dvc pull` it creates empty directories for the data and claims to have succeeded even though the `file.dvc` file lists many files taking up much space. Subsequent use of `dvc pull` claims everything is up to date.
### Reproduce
1. Clone the git repository of a project successfully using dvc
2. `dvc remote add myremote s3://something-i-probably-dont-have-access-to` (`myremote` is a placeholder name)
3. `dvc pull`
4. Confirm that no data has been obtained and no error message has been emitted.
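A hedged triage aside: these read-only checks should show whether the remote is even reachable (they assume a default remote is configured):
```console
$ dvc remote list      # confirm which remote is default and its URL
$ dvc status --cloud   # compare workspace hashes against the remote
$ dvc pull -v          # verbose pull should surface S3 auth errors
```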
### Expected
Either data is downloaded from the remote or an error message is emitted.
The size, number of files, and md5sum in the `.dvc` file match what is present after a `dvc pull`. Mismatches lead to error messages with commands like `dvc pull` and `dvc update`.
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.1.0 (pip)
------------------------
Platform: Python 3.8.10 on Linux-5.4.0-147-generic-x86_64-with-glibc2.29
Subprojects:
dvc_data = 2.0.2
dvc_objects = 0.23.0
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.0.4
Supports:
http (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.6.0, boto3 = 1.26.76)
Config:
Global: /home/anne/.config/dvc
System: /etc/xdg/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: s3, s3
Workspace directory: ext4 on /dev/mapper/ubuntu--vg-ubuntu--lv
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/af052bc392ee89f0efbc7a8ac0aa350b
```
**Additional Information (if any):**
```console
$ dvc pull --verbose
2023-06-22 13:59:55,569 DEBUG: v3.1.0 (pip), CPython 3.8.10 on Linux-5.4.0-147-generic-x86_64-with-glibc2.29
2023-06-22 13:59:55,569 DEBUG: command: /home/anne/.cache/pypoetry/virtualenvs/explore-dvc-lhhzaVKj-py3.8/bin/dvc pull --verbose
Everything is up to date.
2023-06-22 13:59:55,803 DEBUG: Analytics is enabled.
2023-06-22 13:59:55,827 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', '/tmp/tmpqg__r3dm']'
2023-06-22 13:59:55,828 DEBUG: Spawned '['daemon', '-q', 'analytics', '/tmp/tmpqg__r3dm']'
``` | 2hard
|
Title: ๐๐Real-time Face Recognition in TensorLayer
Body: ## A discussion for:
- Face recognition algorithm
- Face recognition history
- Face recognition implementation using TensorLayer and TensorFlow.
**Feel free to add more papers and discuss here or in the [Slack channel](https://join.slack.com/t/tensorlayer/shared_invite/enQtMjUyMjczMzU2Njg4LWI0MWU0MDFkOWY2YjQ4YjVhMzI5M2VlZmE4YTNhNGY1NjZhMzUwMmQ2MTc0YWRjMjQzMjdjMTg2MWQ2ZWJhYzc).**
### Background
SphereFace : Face recognition (FR) can be categorized as face identification and face verification. The **identification** classifies a face to a specific identity, while the **verification** determines whether a pair of faces belongs to the same identity.
For **closed-set protocol**, all testing identities are predefined in training set. Therefore, closed- set FR can be well addressed as a classification problem.
For **open-set protocol**, the testing identities are usually not in the training set, so we need to map faces to a discriminative feature space. Then face identification can be viewed as performing face verification between the probe face and every identity in the gallery (given some faces of the identities). **<--- industry usually use this one.**
### Paper History
- [Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.]()
- triplet loss
- [Deep learning face representation from predicting 10,000 classes. In CVPR, 2014.]()
- softmax loss, treats open-set FR as a multi-class classification problem
- open-set
- [Deepface: Closing the gap to human-level performance in face verification. In CVPR, 2014.]()
- softmax loss, treats open-set FR as a multi-class classification problem
- open-set
- [Deep learning face representation by joint identification-verification. In NIPS, 2014]()
- softmax loss + contrastive loss (Euclidean margin based loss)
- greatly boosting the performance.
- [Facenet: A unified embedding for face recognition and clustering. In CVPR, 2015]()
- triplet loss
- [**code** davidsandberg](https://github.com/davidsandberg/facenet)
- learn a unified face embedding, 200 million face images, current state-of-the-art FR accuracy
- [A discriminative feature learning approach for deep face recognition. In ECCV, 2016]()
- softmax loss + centre loss (Euclidean margin based loss)
SphereFace : One could notice that state-of-the-art FR methods usually adopt ideas (e.g. contrastive loss, triplet loss) from metric learning, showing open-set FR could be well addressed by discriminative metric learning.
- [Sparsifying neural network connections for face recognition. In CVPR, 2016]()
- softmax loss + contrastive loss (Euclidean margin based loss)
- [Targeting ultimate accuracy: Face recognition via deep embedding. arXiv preprint:1506.07310, 2015.]()
- ? loss
- [Large-margin softmax loss for convolutional neural networks. In ICML, 2016. 2,]()
- L-Softmax loss, also **implicitly** involves the concept of angles like SphereFace. Differently, SphereFace A-Softmax loss is developed to **explicitly** learn discriminative face embedding.
- it shows great improvement on closed-set classification problems.
SphereFace : Center loss only explicitly encourages intra-class compactness. Both contrastive loss and triplet loss can not constrain on each individual sample, and thus require carefully designed pair/triplet mining procedure, which is both time-consuming and performance-sensitive.
- [SphereFace: Deep Hypersphere Embedding for Face Recognition. In CVPR, 2017]()
- angular softmax (A-Softmax) loss
- open-set
- haijun : 100x100 still works fine
- [**code** wy1iu](https://github.com/wy1iu/sphereface)
- We extract the deep features (SphereFace) from the output of the FC1 layer. For all experiments, the final representation of a testing face is obtained by **concatenating its original face features and its horizontally flipped features**. The score (metric) is computed by the **cosine distance** of two features (see the scoring sketch after this list).
- [CosFace: Large Margin Cosine Loss for Deep Face Recognition In ArXiv, 2018]()
- [ArcFace/InsightFace: Additive Angular Margin Loss for Deep Face Recognition. ArXiv, 2018](https://arxiv.org/abs/1801.07698)
- jiankang : follows sphereface and cosface
- [**code** insightface](https://github.com/deepinsight/insightface) (ArcFace) from Imperial College and DeepInsight using MXNET
- [**code** InsightFace_TF](https://github.com/auroua/InsightFace_TF) (ArcFace) using **TensorLayer**
- [MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices. ArXiv, 2018](https://arxiv.org/abs/1804.07573)
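A minimal sketch of the verification score described in the SphereFace notes above:
```python
import numpy as np

def cosine_score(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Cosine similarity between two (flip-concatenated) face embeddings."""
    return float(feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))
```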
### Implementation Hints
- https://github.com/auroua/InsightFace_TF/blob/master/losses/face_losses.py
- http://tensorlayer.readthedocs.io/en/latest/modules/cost.html#cosine-similarity
- https://github.com/sirius-ai/MobileFaceNet_TF | 3misc
|
Title: A more universal function to replace subgraph in OptimizationRule
Body: <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
Currently in OptimizationRule, there are only functions for adding/removing collapsable_predecessors and for replacing single nodes to handle graph mutation. However, some new rules may need to replace a whole piece of subgraph by adding/removing nodes and edges. Thus we need a more universal function to do this work.
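For illustration, a hypothetical interface sketch for such a helper (all names are made up):
```python
def replace_subgraph(graph, removed_nodes, new_nodes, new_edges):
    """Remove a set of nodes, splice in replacements, and rewire edges."""
    for node in removed_nodes:
        graph.remove_node(node)      # assumed to drop incident edges too
    for node in new_nodes:
        graph.add_node(node)
    for src, dst in new_edges:
        graph.add_edge(src, dst)
```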
| 1medium
|
Title: How to generate HQ dataset?
Body: thanks for this great job!
The resolution of the original FFHQ dataset is 1024x1024, but in your paper the resolution of the HQ data is 512x512. So how was the 512x512 HQ dataset generated, e.g. with resize_bilinear or resize_bicubic?
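For reference, a hedged guess at the preprocessing, assuming plain downsampling (the authors' actual pipeline may differ):
```python
from PIL import Image

img = Image.open("ffhq_0001.png")              # 1024x1024 source
img.resize((512, 512), Image.BICUBIC).save("ffhq_0001_512.png")
```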
| 1medium
|
Title: gradio.State is always null in JavaScript callbacks
Body: ### Describe the bug
I'm trying to extend `gr.Gallery` component with custom logic. The idea is to scroll it to the position selected by the user.
I wrote a custom JS handler which does the job and it works if I provide `gr.Number` component as input. However, I can not
do the same with `gr.State`. The debugger shows that the passed value is always `null`.
**Expected behavior**: should be able to pass either gr.Number or gr.State as inputs to JS handlers
### Have you searched existing issues? ๐
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
extra_js_scripts = (
r"""
<script>
const scrollToIndex = (index) => {
const thumbnails = document.querySelectorAll('.thumbnail-item');
if (index !== null && index >= 0 && index < thumbnails.length) {
thumbnails[index].scrollIntoView({ behavior: 'smooth', block: 'nearest', inline: 'center' });
} else {
console.error('Index out of bounds');
}
};
</script>
"""
)
with gr.Blocks(head=extra_js_scripts) as demo:
current_index = gr.State(0)
gallery = gr.Gallery(columns=1,
object_fit='contain',
allow_preview=False,
show_download_button=False,
show_fullscreen_button=False)
index_input = gr.Number(label="Image Index", value=0)
scroll_button = gr.Button("Scroll to Image")
index_input.change(fn=lambda x: x, inputs=index_input, outputs=current_index)
# This does not work
scroll_button.click(fn=None, js="(index) => scrollToIndex(index)", inputs=current_index)
# This works
# scroll_button.click(fn=None, js="(index) => scrollToIndex(index)", inputs=index_input)
demo.load(lambda: ["https://placebear.com/200/200" for _ in range(6)], outputs=gallery)
demo.launch()
```
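For now my workaround is to mirror the state into an invisible component (a sketch; it assumes `gr.State` values simply never reach JS handlers):
```python
# The JS handler receives component values, so keep a hidden gr.Number
# in sync with the server-side state and pass that as the input.
hidden_index = gr.Number(value=0, visible=False)
index_input.change(fn=lambda x: (x, x), inputs=index_input,
                   outputs=[current_index, hidden_index])
scroll_button.click(fn=None, js="(index) => scrollToIndex(index)",
                    inputs=hidden_index)
```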
### Screenshot
<img width="1673" alt="Image" src="https://github.com/user-attachments/assets/f1c212c1-2b81-4de6-b2a3-6e8d02173d5a" />
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.4 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.5
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.2.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
I can work around it | 1medium
|
Title: Interfaces lead to circular imports when splitting type definitions across files
Body: I'm not sure if this issue has been raised before. Did a quick search through issues and didn't find anything. But I was wondering if anyone had run into this issue before and there's an existing Python solution I'm not thinking about or if this is an existing issue.
Basically, when defining a type that implements an interface, we specify the interface class that is being implemented by this new type in the Meta interface option.
```
class Character(graphene.Interface):
id = graphene.ID(required=True)
name = graphene.String(required=True)
friends = graphene.List(lambda: Character)
class Human(graphene.ObjectType):
class Meta:
interfaces = (Character, ) # <----- here
starships = graphene.List(Starship)
home_planet = graphene.String()
```
But if we would like to split the class definitions across two files:
```
#schema.py
import graphene
from human import Human
from droid import Droid
class Character(graphene.Interface):
id = graphene.ID(required=True)
name = graphene.String(required=True)
friends = graphene.List(lambda: Character)
def resolve_type(cls, instance, info):
if instance.type == 'DROID':
return Droid
return Human # <----- here we need to import Human
....
schema = graphene.Schema(query=Query, mutation=Mutation, types=[Human])
```
and
```
#human.py
class Human(graphene.ObjectType):
class Meta:
interfaces = (Character, ) # <--- would require us to import Character
starships = graphene.List(Starship)
home_planet = graphene.String()
```
then how would the `Human` class be able to import the `Character` class it needs to specify in the interfaces it implements without introducing circular imports?
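One workaround I've considered is plain Python module layout rather than anything graphene-specific: give the interface its own module that both files import (a sketch; `interfaces.py` would be a new file):
```python
# interfaces.py: holds Character only, so human.py and schema.py can
# both import it without importing each other.
import graphene

class Character(graphene.Interface):
    id = graphene.ID(required=True)
    name = graphene.String(required=True)
    friends = graphene.List(lambda: Character)
```
With that, `schema.py` can keep `resolve_type` and import `Human` at module level without a cycle, since `human.py` now only depends on `interfaces.py`. But is there a more idiomatic graphene answer?
| 1medium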
|
Title: XLNET Base for Malay and Indonesian languages (not an issue)
Body: Hi! This is not an issue, I just want to say XLNET is really great and I successfully pretrained XLNET from scratch for Malay and Indonesian languages. You can read comparison and download pretrained from here, https://github.com/huseinzol05/Malaya/tree/master/xlnet
I am planning to release XLNET Large for these languages! | 3misc
|
Title: [Quantization] enable multi-backend `bitsandbytes`
Body: Similar to https://github.com/huggingface/transformers/pull/31098/ | 1medium
|
Title: When I run trape I get this error
Body: ```
Loading trape...
[x] ERROR: cannot import name base_manager
```
How can I fix this? | 1medium
|
Title: Linux, GTX 3060 - MDXNet does not use GPU
Body: MDX-Net is not using my GPU, despite UVR recognising my GPU and having "GPU Conversion" checked. Whenever I try to use MDX-Net, processing is extremely slow and CPU usage skyrockets.
Here is my hardware:
OS: Pop!_OS 22.04 LTS x86_64
Host: B660M DS3H AX DDR4
Kernel: 6.6.6-76060606-generic
Shell: bash 5.1.16
Resolution: 1920x1080
DE: GNOME 42.5
WM: Mutter
WM Theme: Pop
Theme: Pop-dark [GTK2/3]
Icons: Pop [GTK2/3]
Terminal: gnome-terminal
CPU: 13th Gen Intel i5-13500 (20) @ 4
GPU: Intel AlderLake-S GT1
GPU: NVIDIA GeForce RTX 3060 Lite Has | 1medium
|
Title: regression in graph construction time for `Array.ravel()`
Body: **Minimal Complete Verifiable Example**:
```python
import dask.array
import numpy as np
shape=(28, 30, 8, 1, 21, 3)
dtype=np.int64
chunksize=(1, 1, 1, 1, 1, 3)
array = dask.array.from_array(np.arange(np.prod(shape)).reshape(shape), chunks=chunksize)
%timeit array.ravel()
```
This times at 60ms on 2024.6.0, and 275ms on 2024.12.0
Snakeviz blames `_task_spec.py`
<img width="776" alt="image" src="https://github.com/user-attachments/assets/3b231308-dbcc-4314-addc-487715c28826">
**Environment**:
- Dask version: 2024.12.0
- Python version: 3.12
- Operating System: macos
- Install method (conda, pip, source): pip
| 1medium
|
Title: dask.array buggy with pandas multiindex.values / object dtype arrays
Body: <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
**Minimal Complete Verifiable Example**:
```python
import dask.array
import pandas as pd
import numpy as np
idx = pd.MultiIndex.from_product([list("abc"), [0, 1]])
dask.array.from_array(idx.values, chunks=-1)[0].compute()
```
Interestingly
```
array = np.array([('a', 0), ('a', 1), ('b', 0), ('b', 1), ('c', 0), ('c', 1)], dtype=object)
dask.array.from_array(array, chunks=-1)[0].compute()
```
succeeds :/ even though that should be identical to `idx.values`
**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.12.0
- Python version:
- Operating System:
- Install method (conda, pip, source):
| 1medium
|
Title: Are special tokens wrong?
Body: In the vocab of llama, eos_token is "\</s\>", bos_token is "\<s\>", unk_token is "\<unk\>", and the corresponding token ids are 0, 1, 2.
So I think in train.py, [line 214-221](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L214) should be removed.
And are [DEFAULT_BOS_TOKEN](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L30) and [DEFAULT_UNK_TOKEN](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L31) wrong?
And for [line 151](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L151), should we add a space between example['output'] and tokenizer.eos_token? | 1medium
|
Title: UrlRetrieve does not accept key argument context
Body: So I was running the [example](https://github.com/tflearn/tflearn/blob/master/examples/nlp/lstm_generator_cityname.py) using Python 3.5.2 on Anaconda 4.2.0 and happened to receive this particular error: `TypeError: urlretrieve() got an unexpected keyword argument 'context'`.
One stackoverflow [post](http://stackoverflow.com/questions/28575070/urllib-not-taking-context-as-a-parameter) suggested that the problem would be resolved in Python 3.4, though it has yet to be resolved.
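In the meantime I am considering a manual fetch (a sketch; `url` and `path` stand in for tflearn's actual arguments):
```python
import ssl
import urllib.request

ctx = ssl.create_default_context()
with urllib.request.urlopen(url, context=ctx) as resp, open(path, "wb") as f:
    f.write(resp.read())   # urlopen accepts `context` on Python 3.5
```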
Has anyone managed to resolve this issue. Thanks. | 1medium
|
Title: Wrong example of usage when config name is missing for community script-datasets
Body: As reported by @Wauplin, when loading a community dataset with script, there is a bug in the example of usage of the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name is missing.
Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']
Example of usage:
`load_dataset('fleurs', 'af_za')`
```
Note the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs". | 0easy
|
Title: ๆพไธๅฐgpt-4-turbo-preview๏ผNotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4-turbo-preview` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Body: NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4-turbo-preview` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Traceback:
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "E:\aivideo\MoneyPrinterTurbo\webui\Main.py", line 378, in <module>
result = tm.start(task_id=task_id, params=params)
File "E:\aivideo\MoneyPrinterTurbo\app\services\task.py", line 42, in start
video_script = llm.generate_script(video_subject=video_subject, language=params.video_language,
File "E:\aivideo\MoneyPrinterTurbo\app\services\llm.py", line 167, in generate_script
response = _generate_response(prompt=prompt)
File "E:\aivideo\MoneyPrinterTurbo\app\services\llm.py", line 130, in _generate_response
response = client.chat.completions.create(
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\resources\chat\completions.py", line 667, in create
return self._post(
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\_base_client.py", line 1208, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\_base_client.py", line 897, in request
return self._request(
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\_base_client.py", line 988, in _request
raise self._make_status_error_from_response(err.response) from None | 1medium
|
Title: This version of ChromeDriver only supports Chrome version 114 Current browser version is 103.0.5060.53
Body: I tried below and it still fails
My code:
import undetected_chromedriver as uc
driver = uc.Chrome(version_main=103)
driver.get("https://example.com") | 1medium
|
Title: BUG: set_index with pyarrow timestamp type does not produce DatetimeIndex
Body: ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import io
import pandas as pd
buf = io.StringIO("date,value\n2024-01-01 00:00:00,1\n2024-02-01 00:00:00,2")
df = pd.read_csv(buf, parse_dates=["date"])
df.set_index("date").loc["2024-01"] # works
buf = io.StringIO("date,value\n2024-01-01 00:00:00,1\n2024-02-01 00:00:00,2")
df = pd.read_csv(buf, parse_dates=["date"], dtype_backend="pyarrow", engine="pyarrow")
df.set_index("date").loc["2024-01"] # KeyError
```
### Issue Description
The pyarrow timestamp type gets put into a generic `Index` when assigned via set_index, so the datetime overloads do not work correctly
### Expected Behavior
The pyarrow timestamp type should be wrapped by a DatetimeIndex
### Installed Versions
3.0.0.dev0+1696.gfae3e8034f | 1medium
|
Title: release instructions location
Body: Where are the current release instructions?
I am building a release locally so if we have instructions somewhere that would be beneficial. Also I could work on the latest release. | 0easy
|
Title: Failed to export onnx model when I use kornia.geometry.transform.imgwarp.warp_perspective in the model forward function
Body: ### Discussed in https://github.com/kornia/kornia/discussions/2992
<div type='discussions-op-text'>
<sup>Originally posted by **knavezl** August 23, 2024</sup>
My export code is:
```python
input_img = torch.randn(1, 4, 3, 864, 1536).cuda()
num_cameras = 4
num_classes = 3
resolution = [360, 4, 360]
Y, Z, X = resolution
encoder_name = 'res50'
model_params = torch.load(pt_path)
state_dict = {}
for key in model_params["state_dict"].keys():
    state_dict_key = key.replace('model.', '')
    state_dict[state_dict_key] = model_params["state_dict"][key]
model = MVDet(Y, Z, X, encoder_type=encoder_name, num_cameras=num_cameras, num_classes=num_classes)
model.load_state_dict(state_dict, strict=True)
model.cuda()
torch.onnx.export(model, input_img, onnx_path, verbose=False, opset_version=13)
```
The error is:
```
File "/home/user/BEV/TrackTacular/WorldTrack/models/mvdet.py", line 172, in forward
    feat_mems_ = warp_perspective(feat_cams_, proj_mats, (self.Y, self.X), align_corners=False)
File "/home/user/anaconda3/envs/pytorch2.1/lib/python3.9/site-packages/kornia/geometry/transform/imgwarp.py", line 126, in warp_perspective
    grid = transform_points(src_norm_trans_dst_norm[:, None, None], grid)
File "/home/user/anaconda3/envs/pytorch2.1/lib/python3.9/site-packages/kornia/geometry/linalg.py", line 191, in transform_points
    trans_01 = torch.repeat_interleave(trans_01, repeats=points_1.shape[0] // trans_01.shape[0], dim=0)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
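For what it's worth, one workaround I would try, under the assumption that kornia creates the sampling grid on CPU while the traced inputs live on CUDA, is to export everything on a single device:
```python
# Hedged sketch: trace on CPU so every tensor created during tracing
# shares a device with the model and inputs.
model = model.cpu().eval()
input_img = torch.randn(1, 4, 3, 864, 1536)   # CPU tensor, no .cuda()
torch.onnx.export(model, input_img, onnx_path, opset_version=13)
```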
Have you tried to export the model and encountered the same problem? Do you have any solutions?
</div> | 1medium
|
Title: Local custom dataset & Potential typo in test.py
Body: Hi, thanks for this interesting work!
I tried to use this model on a local custom dataset and followed the dataset structure as specified but it failed to load correctly. I ended up having to hard code some data loading code to make it work. It would be greatly appreciated if you guys can provide a demo or example of local dataset. Thanks!
PS: I think there may be a typo in the test.py: the '--pretrained_path' should probably be '--pretrained_model_name_or_path' ? | 1medium
|
Title: Where is internal state used in train_2d
Body: In `train_2d` defined in [gd](https://github.com/d2l-ai/d2l-en/blob/master/chapter_optimization/gd.md), it says `s1` and `s2` are internal state variables that will be used later, but where exactly are those variable used are not clear, even after reading the following sections.
Perhaps it is better to put a reference in the comments, since it is confusing whether the internal states will be used in the same section later or in the same chapter later.
```
def train_2d(trainer, steps=20, f_grad=None): #@save
"""Optimize a 2D objective function with a customized trainer."""
# `s1` and `s2` are internal state variables that will be used later
x1, x2, s1, s2 = -5, -2, 0, 0
results = [(x1, x2)]
for i in range(steps):
if f_grad:
x1, x2, s1, s2 = trainer(x1, x2, s1, s2, f_grad)
else:
x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
results.append((x1, x2))
print(f'epoch {i + 1}, x1: {float(x1):f}, x2: {float(x2):f}')
return results
```
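For instance, a later optimizer in the chapter threads its velocity through those slots. A sketch in the book's style (the book's exact code may differ):
```python
def momentum_2d(x1, x2, v1, v2, f_grad, eta=0.4, beta=0.5):
    g1, g2 = f_grad(x1, x2)
    v1 = beta * v1 + g1            # `s1`/`s2` carry the velocities
    v2 = beta * v2 + g2
    return x1 - eta * v1, x2 - eta * v2, v1, v2
```
| 0easy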
|
Title: RFE: integration with FastAPI/Starlette
Body: [FastAPI ](https://github.com/tiangolo/fastapi)is rapidly gaining popularity as an API framework. It would be great if there was an integration client for FastAPI like there is for Flask etc.
FastAPI doesn't have a plugin system like Flask, but Starlette supports middlewares, and FastAPI supports dependency injection, so I think it should be possible.
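For illustration, the Starlette hook such an integration could build on (hypothetical; there is no existing client API here):
```python
from starlette.middleware.base import BaseHTTPMiddleware

class TrackingMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        # hand request/response metadata to the client library here
        return response
```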
| 1medium
|
Title: [FEATURE] VarianceThresholdClassifier
Body: You can use quantile regression tricks to make predictions about quantiles.
But what if, at prediction time, you'd like to predict `P(y >= value)`?
To answer that question you'd need more than just a quantile; you'd need some distribution prediction instead.
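Concretely, the kind of output such a component could expose (a sketch under a normality assumption):
```python
import numpy as np
from scipy.stats import norm

def prob_at_least(mu: float, sigma: float, value: float) -> float:
    """P(y >= value) for a predicted mean/std, assuming y ~ Normal(mu, sigma)."""
    return float(1 - norm.cdf(value, loc=mu, scale=sigma))
```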
So maybe there's an opportunity for a component here. | 1medium
|
Title: peloton_auto_bookmark_metric is an invalid keyword argument for athlete
Body: Hi! Having some trouble getting started. I've pulled the latest image from dockerhub, but the container is crashing:
~/$ docker run -e MODULE_NAME=src.fitly.app -e VARIABLE_NAME=server -p 8050:80 -v /home/me/fitly:/app/config ethanopp/fitly:latest
Checking for script in /app/prestart.sh
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:
#! /usr/bin/env bash
# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head
{"loglevel": "info", "workers": 8, "bind": "0.0.0.0:80", "workers_per_core": 2.0, "host": "0.0.0.0", "port": "80"}
Traceback (most recent call last):
File "/usr/local/bin/gunicorn", line 8, in <module>
sys.exit(run())
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 58, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 228, in run
super().run()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 58, in __init__
self.setup(app)
File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 118, in setup
self.app.wsgi()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_app
mod = importlib.import_module(module)
File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/app/src/fitly/app.py", line 13, in <module>
db_startup(app)
File "/app/src/fitly/__init__.py", line 86, in db_startup
peloton_auto_bookmark_metric='readiness'
File "<string>", line 4, in __init__
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 433, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
with_traceback=exc_tb,
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 430, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/ext/declarative/base.py", line 840, in _declarative_constructor
"%r is an invalid keyword argument for %s" % (k, cls_.__name__)
TypeError: 'peloton_auto_bookmark_metric' is an invalid keyword argument for athlete
I was previously getting some configuration errors, but I worked through those, and I'm now stuck at this error. Happy to provide any additional info. Thanks! | 1medium
|
Title: ValueError: Dimension has to be a list or tuple
Body: Traceback (most recent call last):
File "<ipython-input-91-3ab6d73131bd>", line 1, in <module>
runfile('/Users/sameepshah/Desktop/Data/Practice/skoptHyperParm.py', wdir='/Users/sameepshah/Desktop/Data/Practice')
File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/Users/sameepshah/Desktop/Data/Practice/skoptHyperParm.py", line 315, in <module>
x0=default_parameters)
File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/optimizer/gp.py", line 214, in gp_minimize
space = normalize_dimensions(dimensions)
File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/utils.py", line 472, in normalize_dimensions
space = Space(dimensions)
File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/space/space.py", line 570, in __init__
self.dimensions = [check_dimension(dim) for dim in dimensions]
File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/space/space.py", line 570, in <listcomp>
self.dimensions = [check_dimension(dim) for dim in dimensions]
File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/space/space.py", line 70, in check_dimension
raise ValueError("Dimension has to be a list or tuple.")
ValueError: Dimension has to be a list or tuple.
Hi guys, I was trying to run your posted hyperparameter tuning model and am getting this error; help much appreciated.
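For reference, `gp_minimize` expects `dimensions` to be a list of Dimension objects or bounds tuples; a generic sketch, since the notebook's actual search space isn't shown:
```python
from skopt.space import Real, Integer, Categorical

dimensions = [
    Real(1e-6, 1e-2, prior="log-uniform", name="learning_rate"),
    Integer(1, 5, name="num_dense_layers"),
    Categorical(["relu", "sigmoid"], name="activation"),
]
```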
Thanks | 1medium
|
Title: Qwen2 MoE manual `head_dim`
Body: ### Feature request
https://github.com/huggingface/transformers/blob/81aa9b2e07b359cd3555c118010fd9f26c601e54/src/transformers/models/qwen2_moe/modeling_qwen2_moe.py#L317
For qwen2 moe, `head_dim` is now forced to be `hidden_size // num_heads`.
### Motivation
manual `head_dim` setting support in llama, mistal, mixtral modeling
### Your contribution
PR | 1medium
|
Title: Why does NHiTS need the target variable specified in the time_varying_unknown_reals attribute?
Body: I was wondering why I need to specify the target variable twice when building a `TimeSeriesDataset` for an **NHiTS**: once for the attribute 'target' and once for 'time_varying_unknown_reals'. If I don't specify it for the second attribute, I get a `ValueError: [target_name] is not in list.` | 1medium
|
Title: AttributeError: module 'pykwalify.core' has no attribute 'yaml'
Body: Hello, it looks like pykwalify has had a new release (1.8) which is breaking my tavern build with the following error:
```
ve/lib/python3.8/site-packages/tavern/schemas/files.py:16: in <module>
core.yaml.safe_load = functools.partial(yaml.load, Loader=IncludeLoader)
E AttributeError: module 'pykwalify.core' has no attribute 'yaml'
```
It looks like pykwalify has dropped/changed their default yaml parser which may be the culprit?
Could we maybe get tavern pinned to pykwalify 1.7 until it's fixed?
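Something like this in `setup.cfg` would express the requested constraint (a sketch):
```
pykwalify>=1.7,<1.8
```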
https://github.com/Grokzen/pykwalify/releases
https://github.com/taverntesting/tavern/blob/master/setup.cfg#L34 | 1medium
|
Title: future warnings
Body: List of future pandas warnings
```
c:\users\sole\documents\repositories\feature_engine\feature_engine\creation\math_features.py:212: FutureWarning: The provided callable <function sum at 0x0000016E1CA6D090> is currently using Series.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass 'sum' instead.
```
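The warning's own suggestion, as a sketch (`MathFeatures` and its `func` argument are the likely call site given the path above; the variable names are placeholders):
```python
from feature_engine.creation import MathFeatures

# pass string aliases instead of the builtins, e.g. "sum" rather than sum
mf = MathFeatures(variables=["var_a", "var_b"], func=["sum", "mean"])
```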
| 0easy
|
Title: Unexpected behaviour of ticks in next gen Seaborn
Body:
Hello,
Just found a potential bug in the next gen version of **seaborn**.
Expected behaviour: create a histogram with y-axis limits that go from 0 to 50.
Actual behaviour: limits of y-axis do not change.
Other possible reasons for this: I may be confused between the syntax for ticks and for axes limits here, but the ticks do not appear to change either.
Reprex:
```python
import pandas as pd
import seaborn.objects as so
diamonds = pd.read_csv(
"https://github.com/mwaskom/seaborn-data/raw/master/diamonds.csv"
)
(
so.Plot(diamonds, x="y")
.add(so.Bar(), so.Hist(binwidth=0.5))
.scale(y=so.Continuous(trans=None).tick(at=[0, 50]))
)
```
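Continuing the reprex above: if the intent is axis limits rather than tick locations, `Plot.limit` appears to be the relevant method in the objects API (a sketch, assuming seaborn >= 0.12):
```python
(
    so.Plot(diamonds, x="y")
    .add(so.Bar(), so.Hist(binwidth=0.5))
    .limit(y=(0, 50))  # axis limits; .tick(at=...) only places tick marks
)
```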
System info:
Python implementation: CPython
Python version : 3.9.12
IPython version : 8.3.0
Compiler : Clang 12.0.1
OS : Darwin
Release : 19.6.0
Machine : x86_64
Processor : i386
CPU cores : 4
Architecture: 64bit
seaborn : 0.12.0b1
matplotlib : 3.5.2
matplotlib_inline: 0.1.3
sys : 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:23:20)
[Clang 12.0.1 ]
pandas : 1.4.2
Thanks for all the work on this amazing package! | 1medium
|
Title: Unable to log in with the authentication information provided
Body: When I post data, I always get this message: Unable to log in with the authentication information provided.
```
from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend
from django.db.models import Q

User = get_user_model()

class CustomBackend(ModelBackend):
    def authenticate(self, username=None, password=None, **kwargs):
        try:
            # Allow logging in with either the username or the mobile number.
            user = User.objects.get(Q(username=username) | Q(mobile=username))
            if user.check_password(password):
                return user
        except Exception:
            return None
```

I'm following the official website config, but I don't know why I can't post data to the API?
|
Title: I have confusion with final ensemble model.
Body: My constructor:
```python
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=3600,
    memory_limit=3072,
    ensemble_size=10,
    ensemble_nbest=10,
    max_models_on_disc=10,
    delete_tmp_folder_after_terminate=False,
)
```
Below is my model's sprint_statistics:
Number of target algorithm runs: 17
Number of successful target algorithm runs: 4
Number of crashed target algorithm runs: 2
Number of target algorithms that exceeded the time limit: 9
Number of target algorithms that exceeded the memory limit: 2
output of `show_models()`:
`"[(1.000000, SimpleClassificationPipeline({'balancing:strategy': 'weighting', 'classifier:__choice__': 'random_forest', 'data_preprocessing:categorical_transformer:categorical_encoding:__choice__': 'no_encoding', 'data_preprocessing:categorical_transformer:category_coalescence:__choice__': 'minority_coalescer', 'data_preprocessing:numerical_transformer:imputation:strategy': 'mean', 'data_preprocessing:numerical_transformer:rescaling:__choice__': 'robust_scaler', 'feature_preprocessor:__choice__': 'select_rates_classification', 'classifier:random_forest:bootstrap': 'False', 'classifier:random_forest:criterion': 'gini', 'classifier:random_forest:max_depth': 'None', 'classifier:random_forest:max_features': 0.23832696118792362, 'classifier:random_forest:max_leaf_nodes': 'None', 'classifier:random_forest:min_impurity_decrease': 0.0, 'classifier:random_forest:min_samples_leaf': 1, 'classifier:random_forest:min_samples_split': 16, 'classifier:random_forest:min_weight_fraction_leaf': 0.0, 'data_preprocessing:categorical_transformer:category_coalescence:minority_coalescer:minimum_fraction': 0.035949803138524174, 'data_preprocessing:numerical_transformer:rescaling:robust_scaler:q_max': 0.7356350569414665, 'data_preprocessing:numerical_transformer:rescaling:robust_scaler:q_min': 0.2902106911441806, 'feature_preprocessor:select_rates_classification:alpha': 0.3610975428987517, 'feature_preprocessor:select_rates_classification:score_func': 'mutual_info_classif'},\ndataset_properties={\n 'task': 2,\n 'sparse': False,\n 'multilabel': False,\n 'multiclass': True,\n 'target_type': 'classification',\n 'signed': False})),\n]"`
My question is: why did I get only one algorithm/configuration in the final ensemble if 4 target algorithm runs succeeded and the ensemble size is 10? My final model should be an ensemble of up to 4 configurations, right? | 1medium
|
Title: eventually, train on the entire dataset even when warm_starting, or early_stopping
Body: right now we're losing 10-15% of our data for our validation stuff, to find the best number of rounds/iterations/estimators/etc.
eventually, it would be ideal to train with early stopping to find the ideal number of iterations, then use that (plus some small amount, say 5% of however many num_iteration we have to account for the fact that we'll have a slightly larger dataset now) to train a new model with all the data.
so, we'd end up training the model twice: once to find the ideal num_iteration, once with the full dataset using the ideal num_iteration.
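a minimal sketch of the two-pass idea, written against a LightGBM-style API for illustration (this library's own `.train()` would wrap something like it):
```python
import lightgbm as lgb
import numpy as np

def train_full_after_early_stopping(params, X, y, X_val, y_val, pad=1.05):
    # Pass 1: find the best iteration count using the held-out split.
    probe = lgb.train(
        params,
        lgb.Dataset(X, y),
        num_boost_round=10_000,
        valid_sets=[lgb.Dataset(X_val, y_val)],
        callbacks=[lgb.early_stopping(stopping_rounds=50)],
    )
    # Pass 2: retrain on all data, padding iterations ~5% for the larger set.
    X_all, y_all = np.concatenate([X, X_val]), np.concatenate([y, y_val])
    return lgb.train(params, lgb.Dataset(X_all, y_all),
                     num_boost_round=int(probe.best_iteration * pad))
```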
make this a param to .train(). not sure of the name yet. | 1medium
|
Title: [BUG] label with angle brackets <> causes display errors
Body: **Describe the bug**
When the label contains **<>**, the label list will show, for example `<NONCAR><font color="#c8c8c8"> ใ </font>`. Meanwhile, in the _polygon labels_ panel, only the color circle shows but not the name <NONCAR>.
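A likely cause (an assumption, not confirmed from the source): the label text is interpolated into Qt rich text unescaped, so `<NONCAR>` parses as an HTML tag and disappears. Escaping the user-supplied part first would avoid that:
```python
import html

label = "<NONCAR>"
# Escape before embedding in the rich-text item template.
item_text = '{}<font color="#c8c8c8"> ใ </font>'.format(html.escape(label))
```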
**To Reproduce**
Steps to reproduce the behavior:
1. create a polygon and label it as **<text>**
**Expected behavior**
like other normal labels
**Desktop:**
- OS: MacOS 10.15.4
- Labelme Version 4.2.10
| 1medium
|
Title: Japanese Kanji character
Body: I found that the Japanese character "็ต" is missing from the file "ja_char.txt", and I cannot recognize this Japanese Kanji character using the Japanese model. | 0easy
|
Title: Using custom model.hf_text.checkpoint_name
Body: ## Description
In Autogluon multimodal, you can specify a text model on Huggingface (say for the sake of the example roberta-base). If I fine-tuned roberta-base using the Transformers library but did not publish to Huggingface, can I still train on that backbone by specifying the path in the model.hf_text.checkpoint_name hyperparameter?
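For illustration, a hedged sketch of what this would look like if `checkpoint_name` accepts a local directory saved with `save_pretrained` (an assumption; `train_data` and the path are placeholders):
```python
from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label="label")
predictor.fit(
    train_data,  # placeholder DataFrame with a "label" column
    hyperparameters={
        "model.hf_text.checkpoint_name": "/path/to/roberta-base-finetuned",
    },
)
```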
| 1medium
|
Title: Either don't send email notification or only selectively notify about changes to nimbus-desktop-experiments
Body: Every change to `main-workspace/nimbus-desktop-experiments` is currently notified to each reviewer by email.
These don't seem to carry much value, as they are generated as part of the process of actually releasing an experiment - the reviewer will go and approve them straight away, and they shouldn't need a notification saying that they've approved them.
Whilst reviewers could use email filters to filter these out, I think the fact that we're adding lots of reviewers means we should consider if it is really necessary to send these notifications out in the first place - or maybe, only a subset of reviewers need them. | 1medium
|
Title: [FEATURE] Add embeding ranker in PyTorch
Body: ### Description
<!--- Describe your expected feature in detail -->
https://www.tensorflow.org/recommenders/examples/basic_ranking, but using PyTorch instead of TF
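A minimal PyTorch sketch of the TFRS-style embedding ranker (all names illustrative; trained with `nn.MSELoss` against observed ratings, mirroring the TF tutorial):
```python
import torch
import torch.nn as nn

class EmbeddingRanker(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, user_ids, item_ids):
        # Concatenate the two embeddings and score the pair.
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        return self.mlp(x).squeeze(-1)  # predicted rating
```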
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
### Willingness to contribute
<!--- Go over all the following points, and put an `x` in the box that apply. -->
- [ ] Yes, I can contribute for this issue independently.
- [ ] Yes, I can contribute for this issue with guidance from Recommenders community.
- [ ] No, I cannot contribute at this time.
### Other Comments
| 1medium
|
Title: The animation window didn't show up
Body: I typed 'manimgl' and pressed 'Enter', and this is what happened to my terminal: it just sat waiting, with the cursor flashing continuously.

'Maybe it is loading now, I just have to wait for a while', I thought, but after an hour (or longer) nothing had happened; the terminal was still waiting for me to type something, like a text editor.

===Environment Config========
System: Ubuntu 21.10, x64
Python: 3.9.7
Manim: 1.3.0
===Problem Description========
It should pop up the animation window and the terminal should enter interactive mode, but in this case the animation window didn't show up.
===What I have done==========
I have installed manimgl (using the command '`pip install manimgl`'), and other software (TeXLive, FFmpeg) has also been installed. | 1medium
|
Title: long string: black miscalculates line lengths (without string_processing)
Body: **Describe the bug**
In some cases the current black version (25.1.0) reformats a string that is too long. It seems to miscalculate the resulting line length.
**To Reproduce**
```python
class A:
def foo(self):
return (
"This is a very loooooooooooooooooooooooooooooooooooooooooooooong string"
)
```
Note: The long line is exactly 86 chars wide. The bug does not occur when I add one character.
```sh
$ black file.py
```
reformats to:
```python
class A:
def foo(self):
return "This is a very loooooooooooooooooooooooooooooooooooooooooooooong string"
```
With the long line being 89 chars long.
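(The arithmetic, assuming the collapse shown: the quoted literal is 74 characters, so at indent 12 the wrapped line is 12 + 74 = 86 characters, while hoisting it onto `return ` at indent 8 gives 8 + 7 + 74 = 89. This suggests Black is checking the pre-collapse length.)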
I would expect it to leave the code as is in this case.
**Environment**
- Black's version: 25.1.0
- OS and Python version: Gentoo, Python 3.11.11
**Additional context**
The problem first occurred in a project using `black -l100`, producing a line of length 105.
| 1medium
|
Title: ValueError: The passed save_path is not a valid checkpoint
Body: Hello.
I'm having problems resuming training from a checkpoint on Google Colab. The only way I've found is to delete all checkpoints and start again, which of course isn't a good idea after training for hours.
I'm using TF and t2t version 1.14.0 and Ubuntu 18.04 on normal runtime because of a problem using Colab's GPU I've reported [here](https://github.com/tensorflow/tensorflow/issues/32017).
This is the code :
```
!t2t-trainer \
--tmp_dir='/content/gdrive/My Drive/TCC/T2T LibriSpeech/tmp/' \
--problem='librispeech_clean_small' \
--model='transformer' \
--eval_steps=3 \
--hparams_set='transformer_librispeech' \
--data_dir='/content/gdrive/My Drive/TCC/T2T LibriSpeech/data/' \
--output_dir='/content/gdrive/My Drive/TCC/T2T LibriSpeech/output/'
```
And this is the prompt output:
```
WARNING: Logging before flag parsing goes to stderr.
W0828 15:46:33.243587 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/expert_utils.py:68: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
W0828 15:46:34.237717 139684777486208 lazy_loader.py:50]
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
W0828 15:46:36.218165 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/adafactor.py:27: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
W0828 15:46:36.218689 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/multistep_optimizer.py:32: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.
W0828 15:46:36.231890 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/mesh_tensorflow/ops.py:4237: The name tf.train.CheckpointSaverListener is deprecated. Please use tf.estimator.CheckpointSaverListener instead.
W0828 15:46:36.232139 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/mesh_tensorflow/ops.py:4260: The name tf.train.SessionRunHook is deprecated. Please use tf.estimator.SessionRunHook instead.
W0828 15:46:36.251127 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/models/research/neural_stack.py:38: The name tf.nn.rnn_cell.RNNCell is deprecated. Please use tf.compat.v1.nn.rnn_cell.RNNCell instead.
W0828 15:46:36.288087 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/rl/gym_utils.py:235: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.
W0828 15:46:36.311170 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/trainer_lib.py:111: The name tf.OptimizerOptions is deprecated. Please use tf.compat.v1.OptimizerOptions instead.
W0828 15:46:36.326797 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensorflow_gan/python/contrib_utils.py:305: The name tf.estimator.tpu.TPUEstimator is deprecated. Please use tf.compat.v1.estimator.tpu.TPUEstimator instead.
W0828 15:46:36.327040 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensorflow_gan/python/contrib_utils.py:310: The name tf.estimator.tpu.TPUEstimatorSpec is deprecated. Please use tf.compat.v1.estimator.tpu.TPUEstimatorSpec instead.
W0828 15:46:37.165019 139684777486208 deprecation_wrapper.py:119] From /usr/local/bin/t2t-trainer:32: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
W0828 15:46:37.165243 139684777486208 deprecation_wrapper.py:119] From /usr/local/bin/t2t-trainer:32: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.
W0828 15:46:37.165358 139684777486208 deprecation_wrapper.py:119] From /usr/local/bin/t2t-trainer:33: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.
W0828 15:46:37.166135 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/hparams_lib.py:49: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.
I0828 15:46:37.167073 139684777486208 hparams_lib.py:64] Loading hparams from existing json /content/gdrive/My Drive/TCC/T2T LibriSpeech/output/hparams.json
W0828 15:46:37.167232 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/hparams_lib.py:65: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.
W0828 15:46:37.169995 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/trainer_lib.py:839: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.
W0828 15:46:37.170993 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/trainer_lib.py:123: The name tf.GraphOptions is deprecated. Please use tf.compat.v1.GraphOptions instead.
W0828 15:46:37.171175 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/trainer_lib.py:129: The name tf.GPUOptions is deprecated. Please use tf.compat.v1.GPUOptions instead.
W0828 15:46:37.171345 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/trainer_lib.py:242: RunConfig.__init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
I0828 15:46:37.171534 139684777486208 trainer_lib.py:265] Configuring DataParallelism to replicate the model.
I0828 15:46:37.171617 139684777486208 devices.py:76] schedule=continuous_train_and_eval
I0828 15:46:37.171699 139684777486208 devices.py:77] worker_gpu=1
I0828 15:46:37.171761 139684777486208 devices.py:78] sync=False
W0828 15:46:37.171855 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/devices.py:139: The name tf.logging.warn is deprecated. Please use tf.compat.v1.logging.warn instead.
W0828 15:46:37.171929 139684777486208 devices.py:141] Schedule=continuous_train_and_eval. Assuming that training is running on a single machine.
I0828 15:46:37.172624 139684777486208 devices.py:170] datashard_devices: ['gpu:0']
I0828 15:46:37.172721 139684777486208 devices.py:171] caching_devices: None
I0828 15:46:37.173149 139684777486208 devices.py:172] ps_devices: ['gpu:0']
I0828 15:46:37.173902 139684777486208 estimator.py:209] Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f0aa908abe0>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_train_distribute': None, '_eval_distribute': None, '_experimental_max_worker_delay_secs': None, '_device_fn': None, '_tf_config': gpu_options {
per_process_gpu_memory_fraction: 1.0
}
, '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': None, '_log_step_count_steps': 100, '_protocol': None, '_session_config': gpu_options {
per_process_gpu_memory_fraction: 0.95
}
allow_soft_placement: true
graph_options {
optimizer_options {
global_jit_level: OFF
}
}
isolate_session_state: true
, '_save_checkpoints_steps': 1000, '_keep_checkpoint_max': 20, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': '/content/gdrive/My Drive/TCC/T2T LibriSpeech/output/', 'use_tpu': False, 't2t_device_info': {'num_async_replicas': 1}, 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x7f0aa908af28>}
W0828 15:46:37.174193 139684777486208 model_fn.py:630] Estimator's model_fn (<function T2TModel.make_estimator_model_fn.<locals>.wrapping_model_fn at 0x7f0aa9087ae8>) includes params argument, but params are not passed to Estimator.
W0828 15:46:37.174434 139684777486208 trainer_lib.py:783] ValidationMonitor only works with --schedule=train_and_evaluate
I0828 15:46:37.185815 139684777486208 estimator_training.py:186] Not using Distribute Coordinator.
I0828 15:46:37.186260 139684777486208 training.py:612] Running training and evaluation locally (non-distributed).
I0828 15:46:37.186565 139684777486208 training.py:700] Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 1000 or save_checkpoints_secs None.
E0828 15:46:37.192399 139684777486208 checkpoint_management.py:348] Couldn't match files for checkpoint /content/gdrive/My Drive/TCC/T2T LibriSpeech/output/model.ckpt-13000
W0828 15:46:37.197599 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
I0828 15:46:37.208258 139684777486208 problem.py:644] Reading data files from /content/gdrive/My Drive/TCC/T2T LibriSpeech/data/librispeech_clean_small-train*
I0828 15:46:37.229276 139684777486208 problem.py:670] partition: 0 num_data_files: 100
W0828 15:46:37.232276 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/data_generators/problem.py:680: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_determinstic`.
W0828 15:46:37.275019 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/layers/common_audio.py:92: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W0828 15:46:37.562360 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/layers/common_audio.py:115: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W0828 15:46:37.750620 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/data_reader.py:275: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`
W0828 15:46:38.267626 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/data_reader.py:395: DatasetV1.output_shapes (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_shapes(dataset)`.
W0828 15:46:38.267972 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/data_reader.py:398: The name tf.logging.warning is deprecated. Please use tf.compat.v1.logging.warning instead.
W0828 15:46:38.268058 139684777486208 data_reader.py:399] Shapes are not fully defined. Assuming batch_size means tokens.
W0828 15:46:38.323740 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/experimental/ops/grouping.py:193: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0828 15:46:38.372743 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/data_reader.py:231: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.
I0828 15:46:38.437698 139684777486208 estimator.py:1145] Calling model_fn.
I0828 15:46:38.450161 139684777486208 t2t_model.py:2248] Setting T2TModel mode to 'train'
W0828 15:46:38.529374 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/t2t_model.py:244: The name tf.summary.text is deprecated. Please use tf.compat.v1.summary.text instead.
I0828 15:46:39.269068 139684777486208 api.py:255] Using variable initializer: uniform_unit_scaling
I0828 15:46:39.718456 139684777486208 t2t_model.py:2248] Transforming feature 'inputs' with speech_recognition_modality.bottom
W0828 15:46:39.720613 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/layers/modalities.py:439: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.keras.layers.Conv2D` instead.
I0828 15:46:40.186799 139684777486208 t2t_model.py:2248] Transforming feature 'targets' with symbol_modality_256_384.targets_bottom
I0828 15:46:40.323158 139684777486208 t2t_model.py:2248] Building model body
W0828 15:46:40.388057 139684777486208 deprecation.py:506] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/models/transformer.py:96: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
W0828 15:46:40.435504 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/layers/common_layers.py:3077: The name tf.layers.Dense is deprecated. Please use tf.compat.v1.layers.Dense instead.
W0828 15:46:40.844527 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/layers/common_attention.py:1249: The name tf.summary.image is deprecated. Please use tf.compat.v1.summary.image instead.
I0828 15:46:48.565067 139684777486208 t2t_model.py:2248] Transforming body output with symbol_modality_256_384.top
W0828 15:46:48.689695 139684777486208 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/learning_rate.py:120: The name tf.train.get_or_create_global_step is deprecated. Please use tf.compat.v1.train.get_or_create_global_step instead.
I0828 15:46:48.691083 139684777486208 learning_rate.py:29] Base learning rate: 2.000000
I0828 15:46:48.704310 139684777486208 optimize.py:338] Trainable Variables Total size: 70343552
I0828 15:46:48.704722 139684777486208 optimize.py:338] Non-trainable variables Total size: 5
I0828 15:46:48.705073 139684777486208 optimize.py:193] Using optimizer adam
I0828 15:47:00.715373 139684777486208 estimator.py:1147] Done calling model_fn.
I0828 15:47:00.717198 139684777486208 basic_session_run_hooks.py:541] Create CheckpointSaverHook.
I0828 15:47:05.476253 139684777486208 monitored_session.py:240] Graph was finalized.
2019-08-28 15:47:05.480538: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz
2019-08-28 15:47:05.480819: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x22a5640 executing computations on platform Host. Devices:
2019-08-28 15:47:05.480857: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
W0828 15:47:05.483572 139684777486208 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
Traceback (most recent call last):
File "/usr/local/bin/t2t-trainer", line 33, in <module>
tf.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/usr/local/bin/t2t-trainer", line 28, in main
t2t_trainer.main(argv)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 412, in main
execute_schedule(exp)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 367, in execute_schedule
getattr(exp, FLAGS.schedule)()
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/trainer_lib.py", line 456, in continuous_train_and_eval
self._eval_spec)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/training.py", line 473, in train_and_evaluate
return executor.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/training.py", line 613, in run
return self.run_local()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/training.py", line 714, in run_local
saving_listeners=saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 367, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1192, in _train_model_default
saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1480, in _train_with_estimator_spec
log_step_count_steps=log_step_count_steps) as mon_sess:
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 584, in MonitoredTrainingSession
stop_grace_period_secs=stop_grace_period_secs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1007, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 725, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1200, in __init__
_WrappedSession.__init__(self, self._create_session())
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 1205, in _create_session
return self._sess_creator.create_session()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 871, in create_session
self.tf_sess = self._session_creator.create_session()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py", line 647, in create_session
init_fn=self._scaffold.init_fn)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/session_manager.py", line 290, in prepare_session
config=config)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/session_manager.py", line 220, in _restore_checkpoint
saver.restore(sess, ckpt.model_checkpoint_path)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py", line 1278, in restore
compat.as_text(save_path))
ValueError: The passed save_path is not a valid checkpoint: /content/gdrive/My Drive/TCC/T2T LibriSpeech/output/model.ckpt-13000
```
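One possible repair, sketched under the assumption that the `model.ckpt-13000` data/index files were lost (e.g. to Drive sync) while the `checkpoint` index file still names them. The sketch re-points the index file at the newest checkpoint whose files actually exist:
```python
import glob, os, re

out = '/content/gdrive/My Drive/TCC/T2T LibriSpeech/output'
# Collect the step numbers of checkpoints whose .index files are present.
steps = sorted(
    int(re.search(r'ckpt-(\d+)\.index$', p).group(1))
    for p in glob.glob(os.path.join(out, 'model.ckpt-*.index'))
)
with open(os.path.join(out, 'checkpoint'), 'w') as f:
    f.write('model_checkpoint_path: "model.ckpt-%d"\n' % steps[-1])
```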
| 1medium
|
Title: [BUG] pygwalker bug report
Body: Error message:
```
Failed to load model class 'BoxModel' from module '@jupyter-widgets/controls'
ChunkLoadError: Loading chunk 345 failed.
```
It is difficult to decode such an error message.
| 1medium
|
Title: how to set the glow effect to the line
Body: ### Question
Hello, I have searched for this question in the docs and issues, but I can't find how to set it. Is this feature available?
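ECharts itself draws a glow via `lineStyle.shadowBlur`/`shadowColor`; a sketch, assuming pyecharts forwards a plain option dict unchanged (its option types accept `Union[opts.LineStyleOpts, dict]`):
```python
from pyecharts.charts import Line

line = (
    Line()
    .add_xaxis(["a", "b", "c"])
    .add_yaxis(
        "series",
        [1, 3, 2],
        # Raw ECharts lineStyle options: shadow settings emulate a glow.
        linestyle_opts={"width": 3, "shadowBlur": 10, "shadowColor": "#00ffff"},
    )
)
line.render("glow.html")
```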
thanks. | 1medium
|
Title: Failed to build webrtcvad when installing a package
Body: Hi, sorry for bothering, but I had an issue trying to install a package with pip. I don't know if it's an error I'm making, but I haven't been able to solve it.
This is what I was trying to install and the error that appeared
```
pip install ffsubsync
Defaulting to user installation because normal site-packages is not writeable
Collecting ffsubsync
Using cached ffsubsync-0.4.25-py2.py3-none-any.whl (36 kB)
Collecting auditok==0.1.5 (from ffsubsync)
Using cached auditok-0.1.5-py3-none-any.whl
Requirement already satisfied: charset-normalizer in /usr/lib/python3.12/site-packages (from ffsubsync) (3.2.0)
Collecting faust-cchardet (from ffsubsync)
Obtaining dependency information for faust-cchardet from https://files.pythonhosted.org/packages/81/33/a705c39e89b7ca7564b90c1a4ab4d4c2c0534cde911191d87a89b87b6c60/faust_cchardet-2.1.19-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
Using cached faust_cchardet-2.1.19-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.3 kB)
Collecting ffmpeg-python (from ffsubsync)
Using cached ffmpeg_python-0.2.0-py3-none-any.whl (25 kB)
Collecting future>=0.18.2 (from ffsubsync)
Using cached future-0.18.3-py3-none-any.whl
Collecting numpy>=1.12.0 (from ffsubsync)
Obtaining dependency information for numpy>=1.12.0 from https://files.pythonhosted.org/packages/c4/c6/f971d43a272e574c21707c64f12730c390f2bfa6426185fbdf0265a63cbd/numpy-1.26.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
Using cached numpy-1.26.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting rich (from ffsubsync)
Obtaining dependency information for rich from https://files.pythonhosted.org/packages/be/be/1520178fa01eabe014b16e72a952b9f900631142ccd03dc36cf93e30c1ce/rich-13.7.0-py3-none-any.whl.metadata
Using cached rich-13.7.0-py3-none-any.whl.metadata (18 kB)
Requirement already satisfied: six in /usr/lib/python3.12/site-packages (from ffsubsync) (1.16.0)
Collecting srt>=3.0.0 (from ffsubsync)
Using cached srt-3.5.3-py3-none-any.whl
Collecting tqdm (from ffsubsync)
Obtaining dependency information for tqdm from https://files.pythonhosted.org/packages/00/e5/f12a80907d0884e6dff9c16d0c0114d81b8cd07dc3ae54c5e962cc83037e/tqdm-4.66.1-py3-none-any.whl.metadata
Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting typing-extensions (from ffsubsync)
Obtaining dependency information for typing-extensions from https://files.pythonhosted.org/packages/b7/f4/6a90020cd2d93349b442bfcb657d0dc91eee65491600b2cb1d388bc98e6b/typing_extensions-4.9.0-py3-none-any.whl.metadata
Using cached typing_extensions-4.9.0-py3-none-any.whl.metadata (3.0 kB)
Collecting webrtcvad (from ffsubsync)
Using cached webrtcvad-2.0.10.tar.gz (66 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting chardet (from ffsubsync)
Obtaining dependency information for chardet from https://files.pythonhosted.org/packages/38/6f/f5fbc992a329ee4e0f288c1fe0e2ad9485ed064cac731ed2fe47dcc38cbf/chardet-5.2.0-py3-none-any.whl.metadata
Using cached chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)
Collecting pysubs2>=1.2.0 (from ffsubsync)
Using cached pysubs2-1.6.1-py3-none-any.whl (35 kB)
Collecting markdown-it-py>=2.2.0 (from rich->ffsubsync)
Obtaining dependency information for markdown-it-py>=2.2.0 from https://files.pythonhosted.org/packages/42/d7/1ec15b46af6af88f19b8e5ffea08fa375d433c998b8a7639e76935c14f1f/markdown_it_py-3.0.0-py3-none-any.whl.metadata
Using cached markdown_it_py-3.0.0-py3-none-any.whl.metadata (6.9 kB)
Collecting pygments<3.0.0,>=2.13.0 (from rich->ffsubsync)
Obtaining dependency information for pygments<3.0.0,>=2.13.0 from https://files.pythonhosted.org/packages/97/9c/372fef8377a6e340b1704768d20daaded98bf13282b5327beb2e2fe2c7ef/pygments-2.17.2-py3-none-any.whl.metadata
Using cached pygments-2.17.2-py3-none-any.whl.metadata (2.6 kB)
Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich->ffsubsync)
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Using cached numpy-1.26.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.0 MB)
Using cached chardet-5.2.0-py3-none-any.whl (199 kB)
Using cached faust_cchardet-2.1.19-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (317 kB)
Using cached rich-13.7.0-py3-none-any.whl (240 kB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached typing_extensions-4.9.0-py3-none-any.whl (32 kB)
Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
Using cached pygments-2.17.2-py3-none-any.whl (1.2 MB)
Building wheels for collected packages: webrtcvad
Building wheel for webrtcvad (pyproject.toml) ... error
error: subprocess-exited-with-error
ร Building wheel for webrtcvad (pyproject.toml) did not run successfully.
โ exit code: 1
โฐโ> [20 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-312
copying webrtcvad.py -> build/lib.linux-x86_64-cpython-312
running build_ext
building '_webrtcvad' extension
creating build/temp.linux-x86_64-cpython-312
creating build/temp.linux-x86_64-cpython-312/cbits
creating build/temp.linux-x86_64-cpython-312/cbits/webrtc
creating build/temp.linux-x86_64-cpython-312/cbits/webrtc/common_audio
creating build/temp.linux-x86_64-cpython-312/cbits/webrtc/common_audio/signal_processing
creating build/temp.linux-x86_64-cpython-312/cbits/webrtc/common_audio/vad
gcc -fno-strict-overflow -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -fexceptions -fcf-protection -fexceptions -fcf-protection -fexceptions -fcf-protection -fPIC -DWEBRTC_POSIX -Icbits -I/usr/include/python3.12 -c cbits/pywebrtcvad.c -o build/temp.linux-x86_64-cpython-312/cbits/pywebrtcvad.o
cbits/pywebrtcvad.c:1:10: fatal error: Python.h: No such file or directory
1 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for webrtcvad
Failed to build webrtcvad
ERROR: Could not build wheels for webrtcvad, which is required to install pyproject.toml-based projects
```
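The `Python.h` failure means the CPython development headers are missing from the build environment; on Fedora (the OS noted below) they usually come from the `python3-devel` package, so installing that before retrying the build is the likely fix (an inference from the gcc error, not something confirmed upstream).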
OS: Fedora Linux (KDE Desktop environment) | 1medium
|
Title: How to use Django models which have no "name" attribute?
Body: **Does my Django model have to have a "name" attribute??? Can I override it somehow?** I have some Django models which don't have a "name" attribute. They don't work :( Only those which have a "name" attribute work, and I can query them with GraphiQL. :/
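For context, a minimal schema sketch that should not require a `name` field on the model (recent graphene-django syntax; app and model names are assumed for illustration):
```python
import graphene
from graphene_django import DjangoObjectType
from myapp.models import MyModel  # assumed app/model

class MyModelType(DjangoObjectType):
    class Meta:
        model = MyModel
        fields = "__all__"

class Query(graphene.ObjectType):
    my_models = graphene.List(MyModelType)

    def resolve_my_models(root, info):
        return MyModel.objects.all()

schema = graphene.Schema(query=Query)
```
The error reported is: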
> ImportError at /graphql
> Could not import 'myproject.schema.schema' for Graphene setting 'SCHEMA'. AttributeError: type object 'MyModel' has no attribute 'name'. | 1medium
|
Title: [ I have Vietnamese voice data] Please provide support for Vietnamese
Body: ## Support for Vietnamese Language
- Hi, I found 100 hours of voice data from VinBigdata - https://institute.vinbigdata.org/events/vinbigdata-chia-se-100-gio-du-lieu-tieng-noi-cho-cong-dong/
- Here is the download link https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view?usp=sharing
- There are also about 200 hours of data from Mozilla - https://commonvoice.mozilla.org/en/datasets (choose Vietnamese then download)
Hope this will help. Thanks for your work.
| 1medium
|
Title: doesn't work...no way. please help, windows11, miniconda[Bug]
Body: ### Describe the bug
After many problems installing it, it finally installed... but I can't use it.
Why are there so many problems using it on Windows?
I saw people installing it in 2 minutes on Linux.
I need it for work, and it is driving me crazy.
Please help me make it work.
I need to create a batch file to TTS many files, but at the moment I am unable to TTS even one file.
I have been wasting weeks trying to resolve all the conflicts around this software.
I need help.
### To Reproduce
(TTS) C:\Users\Administrator>tts --text "Text for TTS" --out_path output/path/speech.wav
> tts_models/en/ljspeech/tacotron2-DDC is already downloaded.
> vocoder_models/en/ljspeech/hifigan_v2 is already downloaded.
> Using model: Tacotron2
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Model's reduction rate `r` is set to: 1
> Vocoder Model: hifigan
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Generator Model: hifigan_generator
> Discriminator Model: hifigan_discriminator
Removing weight norm...
> Text: Text for TTS
> Text splitted to sentences.
['Text for TTS']
> Processing time: 0.4076573848724365
> Real-time factor: 0.23714767139186432
> Saving output to output/path/speech.wav
Traceback (most recent call last):
File "C:\Users\shula\miniconda3\envs\TTS\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\shula\miniconda3\envs\TTS\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\shula\miniconda3\envs\TTS\Scripts\tts.exe\__main__.py", line 7, in <module>
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\TTS\bin\synthesize.py", line 451, in main
synthesizer.save_wav(wav, args.out_path)
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\TTS\utils\synthesizer.py", line 244, in save_wav
save_wav(wav=wav, path=path, sample_rate=self.output_sample_rate)
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\TTS\utils\audio\numpy_transforms.py", line 439, in save_wav
scipy.io.wavfile.write(path, sample_rate, wav_norm.astype(np.int16))
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\scipy\io\wavfile.py", line 767, in write
fid = open(filename, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: 'output/path/speech.wav'
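Note that synthesis completes and the failure happens only at save time, so presumably the relative `output/path/` directory simply does not exist; creating it first (or passing an absolute path to an existing directory) is a likely workaround:
```python
import os

# Create the output directory before running `tts` with --out_path output/path/speech.wav
os.makedirs("output/path", exist_ok=True)
```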
### Expected behavior
I made the simplest possible test, and it failed.
### Logs
_No response_
### Environment
```shell
(TTS) C:\Users\Administrator>conda list
# packages in environment at C:\Users\shula\miniconda3\envs\TTS:
#
# Name Version Build Channel
absl-py 1.4.0 pypi_0 pypi
accelerate 0.21.0 pypi_0 pypi
aiohttp 3.8.5 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
anyascii 0.3.2 pypi_0 pypi
appdirs 1.4.4 pypi_0 pypi
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
audioread 3.0.0 pypi_0 pypi
babel 2.12.1 pypi_0 pypi
bangla 0.0.2 pypi_0 pypi
blinker 1.6.2 pypi_0 pypi
bnnumerizer 0.0.2 pypi_0 pypi
bnunicodenormalizer 0.1.1 pypi_0 pypi
boltons 23.0.0 pypi_0 pypi
bzip2 1.0.8 he774522_0
ca-certificates 2023.05.30 haa95532_0
cachetools 5.3.1 pypi_0 pypi
certifi 2023.7.22 pypi_0 pypi
cffi 1.15.1 pypi_0 pypi
charset-normalizer 3.2.0 pypi_0 pypi
clean-fid 0.1.35 pypi_0 pypi
click 8.1.6 pypi_0 pypi
clip-anytorch 2.5.2 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
contourpy 1.1.0 pypi_0 pypi
coqpit 0.0.17 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
cython 0.29.30 pypi_0 pypi
dateparser 1.1.8 pypi_0 pypi
decorator 5.1.1 pypi_0 pypi
docker-pycreds 0.4.0 pypi_0 pypi
docopt 0.6.2 pypi_0 pypi
einops 0.6.1 pypi_0 pypi
encodec 0.1.1 pypi_0 pypi
filelock 3.12.2 pypi_0 pypi
flask 2.3.2 pypi_0 pypi
fonttools 4.42.0 pypi_0 pypi
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.6.0 pypi_0 pypi
ftfy 6.1.1 pypi_0 pypi
g2pkk 0.1.2 pypi_0 pypi
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.32 pypi_0 pypi
google-auth 2.22.0 pypi_0 pypi
google-auth-oauthlib 1.0.0 pypi_0 pypi
grpcio 1.57.0 pypi_0 pypi
gruut 2.2.3 pypi_0 pypi
gruut-ipa 0.13.0 pypi_0 pypi
gruut-lang-de 2.0.0 pypi_0 pypi
gruut-lang-en 2.0.0 pypi_0 pypi
gruut-lang-es 2.0.0 pypi_0 pypi
gruut-lang-fr 2.0.2 pypi_0 pypi
huggingface-hub 0.16.4 pypi_0 pypi
idna 3.4 pypi_0 pypi
imageio 2.31.1 pypi_0 pypi
inflect 5.6.0 pypi_0 pypi
itsdangerous 2.1.2 pypi_0 pypi
jamo 0.4.1 pypi_0 pypi
jieba 0.42.1 pypi_0 pypi
jinja2 3.1.2 pypi_0 pypi
joblib 1.3.2 pypi_0 pypi
jsonlines 1.2.0 pypi_0 pypi
jsonmerge 1.9.2 pypi_0 pypi
jsonschema 4.19.0 pypi_0 pypi
jsonschema-specifications 2023.7.1 pypi_0 pypi
k-diffusion 0.0.15 pypi_0 pypi
kiwisolver 1.4.4 pypi_0 pypi
kornia 0.7.0 pypi_0 pypi
lazy-loader 0.3 pypi_0 pypi
libffi 3.4.4 hd77b12b_0
librosa 0.10.0 pypi_0 pypi
llvmlite 0.40.1 pypi_0 pypi
markdown 3.4.4 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
matplotlib 3.7.2 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
msgpack 1.0.5 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
networkx 2.8.8 pypi_0 pypi
nltk 3.8.1 pypi_0 pypi
num2words 0.5.12 pypi_0 pypi
numba 0.57.0 pypi_0 pypi
numpy 1.23.0 pypi_0 pypi
oauthlib 3.2.2 pypi_0 pypi
openssl 3.0.10 h2bbff1b_0
packaging 23.1 pypi_0 pypi
pandas 2.0.3 pypi_0 pypi
pathtools 0.1.2 pypi_0 pypi
pillow 10.0.0 pypi_0 pypi
pip 23.2.1 py310haa95532_0
platformdirs 3.10.0 pypi_0 pypi
pooch 1.7.0 pypi_0 pypi
protobuf 4.24.0 pypi_0 pypi
psutil 5.9.5 pypi_0 pypi
pyasn1 0.5.0 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pycparser 2.21 pypi_0 pypi
pynndescent 0.5.10 pypi_0 pypi
pyparsing 3.0.9 pypi_0 pypi
pypinyin 0.49.0 pypi_0 pypi
pysbd 0.3.4 pypi_0 pypi
python 3.10.12 he1021f5_0
python-crfsuite 0.9.9 pypi_0 pypi
python-dateutil 2.8.2 pypi_0 pypi
pytz 2023.3 pypi_0 pypi
pywavelets 1.4.1 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
referencing 0.30.2 pypi_0 pypi
regex 2023.8.8 pypi_0 pypi
requests 2.31.0 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
resize-right 0.0.2 pypi_0 pypi
rpds-py 0.9.2 pypi_0 pypi
rsa 4.9 pypi_0 pypi
safetensors 0.3.2 pypi_0 pypi
scikit-image 0.21.0 pypi_0 pypi
scikit-learn 1.3.0 pypi_0 pypi
scipy 1.11.1 pypi_0 pypi
sentry-sdk 1.29.2 pypi_0 pypi
setproctitle 1.3.2 pypi_0 pypi
setuptools 68.0.0 py310haa95532_0
six 1.16.0 pypi_0 pypi
smmap 5.0.0 pypi_0 pypi
soundfile 0.12.1 pypi_0 pypi
soxr 0.3.5 pypi_0 pypi
sqlite 3.41.2 h2bbff1b_0
sympy 1.12 pypi_0 pypi
tensorboard 2.14.0 pypi_0 pypi
tensorboard-data-server 0.7.1 pypi_0 pypi
threadpoolctl 3.2.0 pypi_0 pypi
tifffile 2023.7.18 pypi_0 pypi
tk 8.6.12 h2bbff1b_0
tokenizers 0.13.3 pypi_0 pypi
torch 2.0.1 pypi_0 pypi
torchaudio 2.0.2 pypi_0 pypi
torchdiffeq 0.2.3 pypi_0 pypi
torchsde 0.2.5 pypi_0 pypi
torchvision 0.15.2 pypi_0 pypi
tqdm 4.66.1 pypi_0 pypi
trainer 0.0.30 pypi_0 pypi
trampoline 0.1.2 pypi_0 pypi
transformers 4.31.0 pypi_0 pypi
tts 0.16.1 pypi_0 pypi
typing-extensions 4.7.1 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
tzlocal 5.0.1 pypi_0 pypi
umap-learn 0.5.1 pypi_0 pypi
urllib3 1.26.16 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
wandb 0.15.8 pypi_0 pypi
wcwidth 0.2.6 pypi_0 pypi
werkzeug 2.3.6 pypi_0 pypi
wheel 0.38.4 py310haa95532_0
xz 5.4.2 h8cc25b3_0
yarl 1.9.2 pypi_0 pypi
zlib 1.2.13 h8cc25b3_0
```
### Additional context
DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063
Installing collected packages: mecab-python3
Successfully installed mecab-python3-1.0.6
(TTS) C:\Users\Administrator>pip install unidic-lite
Collecting unidic-lite
Using cached unidic-lite-1.0.8.tar.gz (47.4 MB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: unidic-lite
Building wheel for unidic-lite (setup.py) ... done
Created wheel for unidic-lite: filename=unidic_lite-1.0.8-py3-none-any.whl size=47658833 sha256=57d389c236768b40af598139cab9147bb1ceca437eac9c206ed9a8c85b85811f
Stored in directory: c:\users\administrator\appdata\local\pip\cache\wheels\89\e8\68\f9ac36b8cc6c8b3c96888cd57434abed96595d444f42243853
Successfully built unidic-lite
DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063
Installing collected packages: unidic-lite
Successfully installed unidic-lite-1.0.8
(TTS) C:\Users\Administrator>tts --text "Text for TTS" --out_path output/path/speech.wav
> Downloading model to C:\Users\Administrator\AppData\Local\tts\tts_models--en--ljspeech--tacotron2-DDC
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 113M/113M [01:25<00:00, 1.31MiB/s]
> Model's license - apache 2.0
> Check https://choosealicense.com/licenses/apache-2.0/ for more info.
> Downloading model to C:\Users\Administrator\AppData\Local\tts\vocoder_models--en--ljspeech--hifigan_v2
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3.80M/3.80M [00:05<00:00, 712kiB/s]
> Model's license - apache 2.0
> Check https://choosealicense.com/licenses/apache-2.0/ for more info.
> Using model: Tacotron2
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Model's reduction rate `r` is set to: 1
> Vocoder Model: hifigan
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Generator Model: hifigan_generator
> Discriminator Model: hifigan_discriminator
Removing weight norm...
> Text: Text for TTS
> Text splitted to sentences.
['Text for TTS']
> Processing time: 0.5023317337036133
> Real-time factor: 0.298266230293103
> Saving output to output/path/speech.wav
Traceback (most recent call last):
File "C:\Users\shula\miniconda3\envs\TTS\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\shula\miniconda3\envs\TTS\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\shula\miniconda3\envs\TTS\Scripts\tts.exe\__main__.py", line 7, in <module>
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\TTS\bin\synthesize.py", line 451, in main
synthesizer.save_wav(wav, args.out_path)
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\TTS\utils\synthesizer.py", line 244, in save_wav
save_wav(wav=wav, path=path, sample_rate=self.output_sample_rate)
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\TTS\utils\audio\numpy_transforms.py", line 439, in save_wav scipy.io.wavfile.write(path, sample_rate, wav_norm.astype(np.int16))
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\scipy\io\wavfile.py", line 767, in write
fid = open(filename, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: 'output/path/speech.wav'
(TTS) C:\Users\Administrator>tts --text "Text for TTS" --out_path output/path/speech.wav
> tts_models/en/ljspeech/tacotron2-DDC is already downloaded.
> vocoder_models/en/ljspeech/hifigan_v2 is already downloaded.
> Using model: Tacotron2
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Model's reduction rate `r` is set to: 1
> Vocoder Model: hifigan
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Generator Model: hifigan_generator
> Discriminator Model: hifigan_discriminator
Removing weight norm...
> Text: Text for TTS
> Text splitted to sentences.
['Text for TTS']
> Processing time: 0.4537527561187744
> Real-time factor: 0.2712919813562629
> Saving output to output/path/speech.wav
Traceback (most recent call last):
File "C:\Users\shula\miniconda3\envs\TTS\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\shula\miniconda3\envs\TTS\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\shula\miniconda3\envs\TTS\Scripts\tts.exe\__main__.py", line 7, in <module>
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\TTS\bin\synthesize.py", line 451, in main
synthesizer.save_wav(wav, args.out_path)
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\TTS\utils\synthesizer.py", line 244, in save_wav
save_wav(wav=wav, path=path, sample_rate=self.output_sample_rate)
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\TTS\utils\audio\numpy_transforms.py", line 439, in save_wav scipy.io.wavfile.write(path, sample_rate, wav_norm.astype(np.int16))
File "C:\Users\shula\miniconda3\envs\TTS\lib\site-packages\scipy\io\wavfile.py", line 767, in write
fid = open(filename, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: 'output/path/speech.wav'
| 1medium
|
Title: New event to trigger when a tab is in view, not necessarily explicitly selected by the user
Body: - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
While `gr.Tab` has the `select` event, it is only triggered when a user explicitly _selects_ the tab. What if the tab is selected by default, so that the user only selects it explicitly when returning to it after navigating away? Another event, such as `focus` or `default_selected`, would be a good idea.
**Describe the solution you'd like**
Support a new event for tabs that is triggered when the tab is in view, not necessarily because the user selected it.
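A hypothetical sketch of the proposed API (`Tab.focus` does not exist today; everything here is illustrative):
```python
import gradio as gr

def render_profile():
    return "profile rendered"

with gr.Blocks() as demo:
    with gr.Tab("View (decorative)") as tab:
        out = gr.Markdown()
    tab.focus(fn=render_profile, outputs=out)  # would also fire for the default tab
```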
**Additional context**
To see what I mean, select the "Pydantic entity profiles" tab https://huggingface.co/spaces/anirbanbasu/gradio-experiments. By default, a sub-tab called "View (decorative)" is selected. However, this automatic selection does not trigger the `select` event on that tab. Thus the user will see something like this.
<img width="1975" alt="Screenshot 2024-12-11 at 16 27 26" src="https://github.com/user-attachments/assets/28ee7b8a-78a0-4a3b-ae65-1a7876ae8240">
Yet, if the user selects another sub-tab, e.g., "View (JSON)" and then selects "View (decorative)", the `select` event is triggered and the user gets to see something like the following.
<img width="1975" alt="Screenshot 2024-12-11 at 16 27 40" src="https://github.com/user-attachments/assets/f73fca00-fb11-452a-8334-f85c127110fa">
| 1medium
|
Title: Allow background callback tasks to programmatically retry later.
Body: **Is your feature request related to a problem? Please describe.**
Background callbacks running in a distributed environment (Openshift or Kubernetes) can fail for reasons that are recoverable via application logic. e.g. a data resource isn't available at a point in time, but will be available in the future.
A bad solution is to have the background callback task check for a resource and `sleep` for some amount of time, then check again later, and repeat. This consumes the `Celery Worker` thread for no reason, and in our app, leads to worker pool exhaustion.
**Describe the solution you'd like**
It'd make sense for a background callback task to:
1. check whether it can execute given the current state,
2. proceed if it can,
3. re-enqueue itself if it can't, yielding the worker thread to be used by another task.
Since background callbacks are Celery tasks, the features to enable programmatic retries are already available with the `bind` argument: a task receives a `self` parameter that can be instructed to retry.
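For reference, plain Celery already exposes this mechanism; a minimal sketch using only real Celery APIs (`get_value` is an assumed helper that returns `None` until the resource exists):
```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost")

@app.task(bind=True, max_retries=5)
def fetch_value(self):
    val = get_value()  # assumed helper
    if val is None:
        # Re-enqueue with exponential backoff instead of blocking the worker.
        raise self.retry(countdown=2 ** self.request.retries)
    return val
```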
At the Dash level, this might look like the following pseudocode:
```python
@dash.callback(
... # Inputs and Outputs
background=True,
celery_bind=True, # first param to func must be for 'self'
retry_on_exceptions=[DBNotAvailableRetry],
)
def func(self, conn):
val = conn.get_value() # raises DBNotAvailableRetry exception
if not val:
self.retry(after="5s", exponential_backoff=True, jitter=True)
return val
```
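For reference, here is a minimal stand-alone Celery sketch (not Dash-specific, and not the proposed Dash API above) of the retry mechanism that `bind=True` already provides; the broker URL, task name, and exception are illustrative:
```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")


class DBNotAvailable(Exception):
    """Hypothetical error: the data resource is not ready yet."""


@app.task(bind=True, max_retries=5)
def fetch_value(self):
    try:
        # Stand-in for the real availability check.
        raise DBNotAvailable("resource not ready")
    except DBNotAvailable as exc:
        # Celery re-enqueues the task after `countdown` seconds and frees the
        # worker thread immediately, instead of sleeping inside it.
        raise self.retry(exc=exc, countdown=5)
```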
**Describe alternatives you've considered**
Since Dash controls the context of the executing tasks when it's enqueued in Celery, the functionality of pushing the `self` parameter into the background callback arguments could be avoided if Dash instead implemented exception handling that would trigger retries when caught.
```python
@celery_app.task(
bind=True
)
def dash_bg_callback_wrapper(self, user_func, args):
try:
results = user_func(*args)
return results
except dash.BG_RETRY_EXCEPTION as e:
self.retry(
            after=e.args["after"]  # user could set this, knowing their app; would default to 0 time before retry.
)
```
| 1medium
|
Title: Request failed during generation: Server error: Value out of range: -29146814772
Body: ### System Info
text-generation-launcher 3.1.1-dev0
Single RTX 4070 S GPU
NVIDIA-SMI 572.16 Driver Version: 572.16 CUDA Version: 12.8
Models Used : meta-llama/Llama-3.1-8B-Instruct, Yujivus/DeepSeek-R1-Distill-Llama-8B-AWQ, Yujivus/Phi-4-Health-CoT-1.1-AWQ
Docker Command:
docker run --name tgi-server --gpus all -p 80:81 --network tgi -v volume:/data --env HUGGING_FACE_HUB_TOKEN=... ghcr.io/huggingface/text-generation-inference:latest --model-id Yujivus/Phi-4-Health-CoT-1.1-AWQ --quantize awq
(The `--quantize awq` flag is used only for the Yujivus/DeepSeek-R1-Distill-Llama-8B-AWQ and Yujivus/Phi-4-Health-CoT-1.1-AWQ models. For meta-llama/Llama-3.1-8B-Instruct, I tried eetq, bitsandbytes-nf4, and bitsandbytes-fp4; none of them worked.)
### Information
- [x] Docker
- [ ] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
### Reproduction
I am running a simple function for inference:
```python
from huggingface_hub import AsyncInferenceClient


async def get_response(query):
    """Send a single request to the TGI server."""
    try:
        async with AsyncInferenceClient(base_url="http://tgi-server:80") as client:
            output = await client.chat.completions.create(
                model="tgi",
                messages=[
                    {
                        "role": "system",
                        "content": "You are a helpful assistant\n\n",
                    },
                    {
                        "role": "user",
                        "content": query,
                    },
                ],
                stream=False,
                max_tokens=3000,
            )
            return output.choices[0].message.content
    except Exception as e:
        print(f"Error: {e}")
        return None
```
Error : 2025-02-05 16:11:02 2025-02-05T13:11:02.646154Z ERROR text_generation_launcher: Method Decode encountered an error.
2025-02-05 16:11:02 Traceback (most recent call last):
2025-02-05 16:11:02 File "/opt/conda/bin/text-generation-server", line 10, in <module>
2025-02-05 16:11:02 sys.exit(app())
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/typer/main.py", line 323, in __call__
2025-02-05 16:11:02 return get_command(self)(*args, **kwargs)
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/click/core.py", line 1161, in __call__
2025-02-05 16:11:02 return self.main(*args, **kwargs)
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/typer/core.py", line 743, in main
2025-02-05 16:11:02 return _main(
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/typer/core.py", line 198, in _main
2025-02-05 16:11:02 rv = self.invoke(ctx)
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/click/core.py", line 1697, in invoke
2025-02-05 16:11:02 return _process_result(sub_ctx.command.invoke(sub_ctx))
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/click/core.py", line 1443, in invoke
2025-02-05 16:11:02 return ctx.invoke(self.callback, **ctx.params)
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/click/core.py", line 788, in invoke
2025-02-05 16:11:02 return __callback(*args, **kwargs)
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/typer/main.py", line 698, in wrapper
2025-02-05 16:11:02 return callback(**use_params)
2025-02-05 16:11:02 File "/usr/src/server/text_generation_server/cli.py", line 119, in serve
2025-02-05 16:11:02 server.serve(
2025-02-05 16:11:02 File "/usr/src/server/text_generation_server/server.py", line 315, in serve
2025-02-05 16:11:02 asyncio.run(
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
2025-02-05 16:11:02 return runner.run(main)
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
2025-02-05 16:11:02 return self._loop.run_until_complete(task)
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete
2025-02-05 16:11:02 self.run_forever()
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 608, in run_forever
2025-02-05 16:11:02 self._run_once()
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once
2025-02-05 16:11:02 handle._run()
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/asyncio/events.py", line 84, in _run
2025-02-05 16:11:02 self._context.run(self._callback, *self._args)
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/grpc_interceptor/server.py", line 165, in invoke_intercept_method
2025-02-05 16:11:02 return await self.intercept(
2025-02-05 16:11:02 > File "/usr/src/server/text_generation_server/interceptor.py", line 24, in intercept
2025-02-05 16:11:02 return await response
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor
2025-02-05 16:11:02 raise error
2025-02-05 16:11:02 File "/opt/conda/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor
2025-02-05 16:11:02 return await behavior(request_or_iterator, context)
2025-02-05 16:11:02 File "/usr/src/server/text_generation_server/server.py", line 221, in Decode
2025-02-05 16:11:02 return generate_pb2.DecodeResponse(
2025-02-05 16:11:02 ValueError: Value out of range: -29146814772
2025-02-05 16:11:02 2025-02-05T13:11:02.646395Z ERROR batch{batch_size=1}:decode:decode{size=1}:decode{size=1}: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Value out of range: -29146814772
2025-02-05 16:11:02 2025-02-05T13:11:02.647196Z ERROR chat_completions:generate:generate_stream:schedule:infer:send_error: text_generation_router_v3::backend: backends/v3/src/backend.rs:546: Request failed during generation: Server error: Value out of range: -29146814772
Only 40-50% of the requests to TGI return a successful response (for Yujivus/Phi-4-Health-CoT-1.1-AWQ), while the rest fail. I tried several models and encountered the same error for all of them except the smaller Llama 3.2 1B and 3B models. I observed the highest success rate when using the EETQ quantization method with the Llama 3.1 8B model, but I still hit this error often.
The Phi-4 model is fine-tuned on the FreedomIntelligence/medical-o1-reasoning-SFT dataset. Since it is a CoT dataset, the model produces long outputs, and the longer the response, the more errors I get — presumably because every generated token carries some chance of triggering the value-out-of-range error. It may be caused by quantization, but to test this I tried the Llama 3.2 1B model with and without quantization and did not encounter the error. Still, maybe the error only shows up for larger models, which perform much more computation, because quantization pushes values out of the expected range during the decode phase.
I tried smaller max_tokens, but the probability of getting the error did not change; it occurs roughly once per 2000 tokens.
If there is a way to fix, it would be good to know. Thanks.
### Expected behavior
2025-02-05 16:49:39 2025-02-05T13:49:39.259073Z INFO text_generation_router_v3::radix: backends/v3/src/radix.rs:108: Prefix 0 - Suffix 1045
2025-02-05 16:49:54 2025-02-05T13:49:54.901547Z INFO chat_completions{total_time="15.645054793s" validation_time="2.504013ms" queue_time="550.086ยตs" inference_time="15.642000896s" time_per_token="21.876924ms" seed="Some(15551805739611344766)"}: text_generation_router::server: router/src/server.rs:625: Success | 2hard
|
Title: ๅฝๆฐไฝฟ็จ
Body: 
ๆๆณ่ทๅๆไธชไบบไธป้กต็ๆๆ่ง้ขไฟกๆฏ๏ผๆ็ฟปไบๆๆกฃ๏ผๅบ่ฏฅๆฏ็จ่ฟไธชๅฝๆฐ๏ผasync def get_videos() ๏ผ(https://nemo2011.github.io/bilibili-api/#/modules/user?id=async-def-get_videos)๏ผไฝๆฏๆไธ็ฅ้ๅ
ทไฝ่ฏฅๆไน็จ๏ผ่ฏท้ฎๆ่ฏฆ็ป็็คบไพไป็ปๅ๏ผ่ฐข่ฐขๅ
ๅผ๏ผ | 1medium
|