text (stringlengths 20–57.3k) | labels (class label, 4 classes)
---|---
Title: CUDA version
Body: When importing insightface I keep getting the error
'libcudart.so.11.0: cannot open shared object file: No such file or directory'.
I think this is caused by a CUDA version mismatch.
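For reference, here is how I checked which CUDA runtime libraries the dynamic loader can actually find (a quick ctypes probe on Linux; the version list is just the ones I care about):
```python
# Quick probe: which libcudart versions can the dynamic loader find?
import ctypes

for ver in ("10.0", "10.1", "10.2", "11.0"):
    try:
        ctypes.CDLL(f"libcudart.so.{ver}")
        print(f"libcudart.so.{ver}: found")
    except OSError:
        print(f"libcudart.so.{ver}: not found")
```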
So my question is: which version of insightface works under CUDA 10? | 1medium
|
Title: How to custom action for selector feature
Body: ### Question
Dear all,
I want to use click data from the scatter plot to drive the input of another figure. Using a control component, I can already control the figure, and I am able to extract clickData inside a custom action. However, I'm not sure how to pass the value produced by the custom action on to the other figure.
This works
```python
vm.Parameter(
    targets=["my_bio_figure.material"],
    selector=vm.Dropdown(options=list(df_perovskite.material), value="YTcO3", multi=False),
)
```
This doesn't work
```python
actions=[vm.Action(function=select_interaction(), inputs=["scatter_chart.clickData"], outputs=["my_bio_figure.material"])]
```
The full code:
```python
import vizro.models as vm
import vizro.plotly.express as px
from vizro import Vizro
from vizro.tables import dash_ag_grid
import pandas as pd
from vizro.models.types import capture
import dash_bio as dashbio
import dash_bio.utils.ngl_parser as ngl_parser
import ase.db
from pymatgen.io.ase import AseAtomsAdaptor
import nvcs
from pymatgen.core import Structure
import nglview
from vizro.actions import filter_interaction
df_perovskite = pd.read_csv("cubic.csv")
structure_db = ase.db.connect("perovskites.db")
data_path = "https://raw.githubusercontent.com/plotly/datasets/master/Dash_Bio/Molecular/"
@capture("figure")
def custom_bio_molecule_figure(data_frame, material):
atoms = structure_db.get_atoms(material)
pmg_structure = AseAtomsAdaptor().get_structure(atoms)
sites_displayed = nvcs.structure._get_displayed(pmg_structure)
pmg_structure_displayed = Structure.from_sites([si.site for si in sites_displayed])
atoms_displayed = AseAtomsAdaptor().get_atoms(pmg_structure_displayed)
ngl_ase_adaptor = nglview.ASEStructure(atoms_displayed)
#data_list = [ngl_ase_adaptor.get_structure_string()]
content = ngl_ase_adaptor.get_structure_string()
data = {
'filename': material,
'ext': 'pdb',
'selectedValue': '1',
'chain': 'ALL',
'aaRange': 'ALL',
'chosen': {'atoms':'', 'residues':''},
'color': '#e41a1c',
'config': {'type': 'text/plain', 'input': content},
'resetView': True,
'uploaded': True
}
data_list = [data]
print(data_list)
molstyles_dict = {
"representations": ["ball+stick", 'unitcell'],
}
return dashbio.NglMoleculeViewer(
id="ngl_molecule_viewer_id",
data=data_list,
molStyles=molstyles_dict,
)
@capture("action")
def select_interaction(clickData):
"""Returns the input value."""
material = clickData['points'][0]['customdata'][0]
print(material)
#print(clickData["custom_data"])
#return clickData["custom_data"]
return material
page = vm.Page(
    title="Perovskites",
    layout=vm.Layout(grid=[[0, 0, 1],
                           [0, 0, 2]]),
    components=[
        vm.AgGrid(figure=dash_ag_grid(data_frame=df_perovskite)),
        vm.Graph(
            id="scatter_chart",
            figure=px.scatter(df_perovskite, x="lattice_constant (AA)", y="bulk_modulus (eV/AA^3)", custom_data=["material"]),
            actions=[vm.Action(function=select_interaction(), inputs=["scatter_chart.clickData"], outputs=["my_bio_figure.material"])],
        ),
        vm.Figure(id="my_bio_figure", figure=custom_bio_molecule_figure(data_frame=pd.DataFrame(), material="YTcO3")),
    ],
    controls=[
        vm.Parameter(
            targets=["scatter_chart.x"],
            selector=vm.Dropdown(options=list(df_perovskite.columns), value="lattice_constant (AA)", multi=False),
        ),
        vm.Parameter(
            targets=["scatter_chart.y"],
            selector=vm.Dropdown(options=list(df_perovskite.columns), value="bulk_modulus (eV/AA^3)", multi=False),
        ),
        vm.Parameter(
            targets=["my_bio_figure.material"],
            selector=vm.Dropdown(options=list(df_perovskite.material), value="YTcO3", multi=False),
        ),
    ],
)

dashboard = vm.Dashboard(pages=[page], theme="vizro_light")

if __name__ == "__main__":
    Vizro().build(dashboard).run()
```
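For context, the clickData payload that reaches my action looks roughly like this (the general shape comes from Plotly's click events; the numeric values below are made up):
```python
# Approximate shape of the clickData dict delivered on a scatter click;
# "customdata" is populated because the figure sets custom_data=["material"].
clickData = {
    "points": [
        {
            "x": 3.98,                 # hypothetical lattice constant
            "y": 0.55,                 # hypothetical bulk modulus
            "customdata": ["YTcO3"],   # the value my action extracts
        }
    ]
}
material = clickData["points"][0]["customdata"][0]  # -> "YTcO3"
```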
Thank you very much for your help.
### Code/Examples
_No response_
### Which package?
vizro
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | 1medium
|
Title: Percent in axis label produces hard-to-read error. Need to escape '%' or warn?
Body: Regarding #50
When assigning an axis label, I was describing a percentage and used the percent symbol. Everything was produced fine in Python and went as expected.
However, I could not figure out why it kept crashing in LaTeX a week later. It took me around an hour and a half, as the LaTeX error output was not helpful at all.
Here's reproducible code
```python
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib2tikz import save as tikz_save

# RANDOM DATA
des = [1, 2, 3, 4]
perc_below = [1, 2, 3, 4]
df3 = pd.DataFrame({'one': perc_below, 'two': des})
df3.plot(fontsize=20, kind='line', y='one', x='two')
plt.xlabel('example', fontsize=20)

# ISSUE LIES BELOW
plt.ylabel('example %', fontsize=20)
tikz_save('example.tex')
```
The TeX output is as expected...
In my opinion, the `%` should be escaped, since the workaround is to add a backslash when assigning the label. However, that would make the behavior differ from matplotlib's. (I also export to PNGs, which would then end up with a literal backslash in them.)
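Concretely, the workaround I mean is escaping the percent sign by hand:
```python
# Manual workaround: escape % for LaTeX, at the cost of a literal
# backslash showing up in the PNG export of the same figure.
plt.ylabel(r'example \%', fontsize=20)
```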
I understand if you do not agree. In that case, a warning when the graph is produced would be great, as, again, the LaTeX error was completely unhelpful.
I love the library, BTW! | 1medium
|
Title: How can I convert the V2 ResNet to an ONNX model?
Body: I've been trying to convert it for a long time, but it keeps failing. | 1medium
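In case it helps to show the general pattern being attempted, here is a minimal export sketch, assuming the model is a PyTorch `nn.Module` (the stand-in model, input shape, and opset are illustrative guesses, not the actual V2 ResNet):
```python
import torch
import torchvision

# Stand-in model for illustration; substitute the actual V2 ResNet checkpoint.
model = torchvision.models.resnet50()
model.eval()
dummy = torch.randn(1, 3, 112, 112)  # assumed input size for a face model
torch.onnx.export(
    model, dummy, "resnet_v2.onnx",
    opset_version=11,
    input_names=["input"], output_names=["output"],
)
```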
|
Title: Add Faker to the whitelist, or add the ability to extend the whitelist.
Body: ### 🚀 The feature
Add Faker to the whitelist, or add the ability to extend the whitelist.
### Motivation, pitch
I asked the question "Generate me 10 new synthetic rows based on provided examples" and got the error:
Generated code includes import of faker which is not in whitelist.
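Ideally I'd like to be able to extend the whitelist myself, something like this (the `custom_whitelisted_dependencies` option name is my assumption of how it could look, not a documented option):
```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI

df = pd.read_csv("titanic.csv")
sdf = SmartDataframe(
    df,
    config={
        "llm": OpenAI(api_token="YOUR_KEY"),
        # Assumed option: let users extend the import whitelist themselves.
        "custom_whitelisted_dependencies": ["faker"],
    },
)
sdf.chat("Generate me 10 new synthetic rows based on provided examples")
```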
Log:
```
Question: Generate me 10 new synthetic rows based on provided examples
Running PandasAI with openai LLM...
Prompt ID: eb749e24-762f-4807-828c-a52f772666be
<class 'pandasai.helpers.output_types._output_types.DefaultOutputType'> is going to be used.
<class 'pandasai.helpers.viz_library_types._viz_library_types.NoVizLibraryType'> is going to be used.
Executing Step 0: CacheLookup
Executing Step 1: PromptGeneration
Using prompt: <dataframe>
dfs[0]:891x12
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
2,1,1,"Cumings, Mrs. John Bra...",female,38.0,1,0,PC 17599,71.2833,C85,C
3,1,3,"Heikkinen, Miss. Laina...",female,26.0,0,0,STON/O2. 3101282,7.925,,S
1,0,3,"Braund, Mr. Owen Harri...",male,22.0,1,0,A/5 21171,7.25,,S
</dataframe>
Update this initial code:
python
# TODO: import the required dependencies
import pandas as pd
# Write code here
# Declare result var: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" }
Q: Generate me 10 new synthetic rows based on provided examples
Variable `dfs: list[pd.DataFrame]` is already declared.
At the end, declare "result" variable as a dictionary of type and value.
Generate python code and return full updated code:
Executing Step 2: CodeGenerator
Code generated:
# TODO: import the required dependencies
import pandas as pd
from faker import Faker
# Write code here
fake = Faker()
# Generate 10 new synthetic rows based on provided examples
new_rows = []
for _ in range(10):
new_row = {
"PassengerId": fake.random_int(min=4, max=1000),
"Survived": fake.random_int(min=0, max=1),
"Pclass": fake.random_int(min=1, max=3),
"Name": fake.name(),
"Sex": fake.random_element(elements=["male", "female"]),
"Age": fake.random_int(min=1, max=80),
"SibSp": fake.random_int(min=0, max=4),
"Parch": fake.random_int(min=0, max=2),
"Ticket": fake.random_int(min=10000, max=99999),
"Fare": round(fake.random_number(digits=4, fix_len=True), 2),
"Cabin": fake.random_element(elements=[None, "A1", "B2", "C3"]),
"Embarked": fake.random_element(elements=["S", "C", "Q"])
}
new_rows.append(new_row)
# Create a new DataFrame with the original data plus the 10 new synthetic rows
new_df = pd.concat([dfs[0], pd.DataFrame(new_rows)], ignore_index=True)
# Declare result var
result = {"type": "dataframe", "value": new_df}
Executing Step 3: CachePopulation
Executing Step 4: CodeExecution
Failed to execute code with a correction framework [retry number: 1]
Failed with error: Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 186, in execute_code
code_to_run = self._clean_code(code, context)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 375, in _clean_code
self._check_imports(node)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 456, in _check_imports
raise BadImportError(library)
pandasai.exceptions.BadImportError: Generated code includes import of faker which is not in whitelist.
. Retrying
Using prompt: <dataframe>
dfs[0]:891x12
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
2,1,1,"Cumings, Mrs. John Bra...",female,38.0,1,0,PC 17599,71.2833,C85,C
3,1,3,"Heikkinen, Miss. Laina...",female,26.0,0,0,STON/O2. 3101282,7.925,,S
1,0,3,"Braund, Mr. Owen Harri...",male,22.0,1,0,A/5 21171,7.25,,S
</dataframe>
The user asked the following question:
Q: Generate me 10 new synthetic rows based on provided examples
You generated this python code:
# TODO: import the required dependencies
import pandas as pd
from faker import Faker
# Write code here
fake = Faker()
# Generate 10 new synthetic rows based on provided examples
new_rows = []
for _ in range(10):
new_row = {
"PassengerId": fake.random_int(min=4, max=1000),
"Survived": fake.random_int(min=0, max=1),
"Pclass": fake.random_int(min=1, max=3),
"Name": fake.name(),
"Sex": fake.random_element(elements=["male", "female"]),
"Age": fake.random_int(min=1, max=80),
"SibSp": fake.random_int(min=0, max=4),
"Parch": fake.random_int(min=0, max=2),
"Ticket": fake.random_int(min=10000, max=99999),
"Fare": round(fake.random_number(digits=4, fix_len=True), 2),
"Cabin": fake.random_element(elements=[None, "A1", "B2", "C3"]),
"Embarked": fake.random_element(elements=["S", "C", "Q"])
}
new_rows.append(new_row)
# Create a new DataFrame with the original data plus the 10 new synthetic rows
new_df = pd.concat([dfs[0], pd.DataFrame(new_rows)], ignore_index=True)
# Declare result var
result = {"type": "dataframe", "value": new_df}
It fails with the following error:
Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 186, in execute_code
code_to_run = self._clean_code(code, context)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 375, in _clean_code
self._check_imports(node)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 456, in _check_imports
raise BadImportError(library)
pandasai.exceptions.BadImportError: Generated code includes import of faker which is not in whitelist.
Fix the python code above and return the new python code:
Failed to execute code with a correction framework [retry number: 2]
Failed with error: Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 186, in execute_code
code_to_run = self._clean_code(code, context)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 375, in _clean_code
self._check_imports(node)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 456, in _check_imports
raise BadImportError(library)
pandasai.exceptions.BadImportError: Generated code includes import of faker which is not in whitelist.
. Retrying
Using prompt: <dataframe>
dfs[0]:891x12
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
2,1,1,"Cumings, Mrs. John Bra...",female,38.0,1,0,PC 17599,71.2833,C85,C
3,1,3,"Heikkinen, Miss. Laina...",female,26.0,0,0,STON/O2. 3101282,7.925,,S
1,0,3,"Braund, Mr. Owen Harri...",male,22.0,1,0,A/5 21171,7.25,,S
</dataframe>
The user asked the following question:
Q: Generate me 10 new synthetic rows based on provided examples
You generated this python code:
# TODO: import the required dependencies
import pandas as pd
from faker import Faker
# Write code here
fake = Faker()
# Generate 10 new synthetic rows based on provided examples
new_rows = []
for _ in range(10):
new_row = {
"PassengerId": fake.random_int(min=4, max=1000),
"Survived": fake.random_int(min=0, max=1),
"Pclass": fake.random_int(min=1, max=3),
"Name": fake.name(),
"Sex": fake.random_element(elements=["male", "female"]),
"Age": fake.random_int(min=1, max=80),
"SibSp": fake.random_int(min=0, max=4),
"Parch": fake.random_int(min=0, max=2),
"Ticket": fake.random_int(min=10000, max=99999),
"Fare": round(fake.random_number(digits=4, fix_len=True), 2),
"Cabin": fake.random_element(elements=[None, "A1", "B2", "C3"]),
"Embarked": fake.random_element(elements=["S", "C", "Q"])
}
new_rows.append(new_row)
# Create a new DataFrame with the original data plus the 10 new synthetic rows
new_df = pd.concat([dfs[0], pd.DataFrame(new_rows)], ignore_index=True)
# Declare result var
result = {"type": "dataframe", "value": new_df}
It fails with the following error:
Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 186, in execute_code
code_to_run = self._clean_code(code, context)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 375, in _clean_code
self._check_imports(node)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 456, in _check_imports
raise BadImportError(library)
pandasai.exceptions.BadImportError: Generated code includes import of faker which is not in whitelist.
Fix the python code above and return the new python code:
Pipeline failed on step 4: Generated code includes import of faker which is not in whitelist.
```
### Alternatives
_No response_
### Additional context
_No response_ | 1medium
|
Title: Does zero padding require memory in a neural network?
Body: Hello guys, i want to know does zero padding requires a memory in neural network?
I red **VGGNet in detail** section in http://cs231n.github.io/convolutional-networks/
But it doesn't contains zero padding.
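To make the question concrete, here is the arithmetic I have in mind, assuming the framework materializes the padded tensor explicitly (float32, pad=1, VGG's input size):
```python
# Memory of an unpadded vs zero-padded activation, in bytes (float32 = 4 bytes).
H = W = 224
C = 3
pad = 1
unpadded = H * W * C * 4
padded = (H + 2 * pad) * (W + 2 * pad) * C * 4
print(unpadded, padded, padded - unpadded)  # 602112 612912 10800
```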
Anybody? | 1medium
|
Title: invalid reports when used with `-rA`, repeats 3 times with instafail
Body: pytest-sugar reports strange things when used with `-rA`
#### Command used to run pytest
````pytest test_example.py````
#### Test file
````python
def test_example():
print(1)
pass
````
#### Output
````
Test session starts (platform: linux, Python 3.8.5, pytest 6.2.2, pytest-sugar 0.9.4)
rootdir: /home/stas/hf/transformers-stas
plugins: forked-1.3.0, xdist-2.2.0, sugar-0.9.4, instafail-0.4.2
collecting ...
test_example.py ✓ 100% ██████████
Results (0.02s):
1 passed
````
### with -rA
```
$ pytest -rA test_example.py
Test session starts (platform: linux, Python 3.8.5, pytest 6.2.2, pytest-sugar 0.9.4)
rootdir: /home/stas/hf/transformers-stas
plugins: forked-1.3.0, xdist-2.2.0, sugar-0.9.4, instafail-0.4.2
collecting ...
test_example.py ✓ 100% ██████████
====================================================================== PASSES =======================================================================
___________________________________________________________________ test_example ____________________________________________________________________
--------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------
1
___________________________________________________________________ test_example ____________________________________________________________________
--------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------
1
============================================================== short test summary info ==============================================================
PASSED test_example.py::test_example
PASSED test_example.py::test_example
PASSED test_example.py::test_example
Results (0.02s):
1 passed
```
2 problems:
1. stdout is dumped **twice**
2. the PASSED report is printed **three times**
### If combined with `instafail`
It now also reports the test 3 times! ✓✓✓
```
pytest -rA test_example.py --instafail
====================================================================== test session starts =======================================================================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /home/stas/hf/transformers-stas
plugins: forked-1.3.0, xdist-2.2.0, sugar-0.9.4, instafail-0.4.2
collected 1 item
test_example.py ✓✓✓ [100%]
============================================================================= PASSES =============================================================================
__________________________________________________________________________ test_example __________________________________________________________________________
---------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------
1
__________________________________________________________________________ test_example __________________________________________________________________________
---------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------
1
==================================================================== short test summary info =====================================================================
PASSED test_example.py::test_example
PASSED test_example.py::test_example
PASSED test_example.py::test_example
======================================================================= 3 passed in 0.06s ========================================================================
```
W/o `-rA`
```
pytest test_example.py --instafail
====================================================================== test session starts =======================================================================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /home/stas/hf/transformers-stas
plugins: forked-1.3.0, xdist-2.2.0, sugar-0.9.4, instafail-0.4.2
collected 1 item
test_example.py ✓✓✓ [100%]
======================================================================= 3 passed in 0.01s ========================================================================
```
Thanks. | 1medium
|
Title: Date range for intraday data
Body: Hello everyone! I was wondering whether it is possible to change the date range for intraday data of a stock or currency. I've already used outputsize='full', but I need something larger in order to use machine-learning methods. It's for my thesis, in which I'm building a trading algorithm based on neural networks, and I need hundreds of thousands of data points for this to work.
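For what it's worth, the workaround I've been considering is calling the REST endpoint directly and stitching months together (the `month` parameter is an assumption on my part based on the API docs, and may not be available on every plan):
```python
import io

import pandas as pd
import requests

frames = []
for month in ("2023-01", "2023-02", "2023-03"):
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={
            "function": "TIME_SERIES_INTRADAY",
            "symbol": "IBM",
            "interval": "5min",
            "outputsize": "full",
            "month": month,          # assumed parameter: one month per call
            "apikey": "YOUR_API_KEY",
            "datatype": "csv",
        },
    )
    frames.append(pd.read_csv(io.StringIO(resp.text)))

history = pd.concat(frames, ignore_index=True)
```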
Thank you for your time.
@RomelTorres | 1medium
|
Title: Streaming for gRPC for Deployment
Body: | 2hard
|
Title: Add back to top functionality
Body: ### Description
As the total height of the website is long, a back-to-top button is much needed for easy navigation.
Please label this issue under Hacktoberfest.
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
- [ ] Check if a new demo could be provided based on this, or if legacy demos could benefit from it.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional) | 1medium
|
Title: [Feature request] add request history to network_interceptor
Body: Why this feature is needed:
We need it to parsing requests for certain actions without creating crutches and global variables
We open the Interceptor together with the history request
execute our code and then just get the request history and analyze what happened during the execution of this or that code.
You can also make an empty query at the beginning of the aiter to trigger certain actions
Which is a good, but not very nice solution
```python
# Proposed parameter:
request_history: bool = False

# In __init__:
self.request_history = [] if request_history else request_history
self.request_changed_history = [] if request_history else request_history

# In _paused_handler:
if isinstance(self.request_history, list):
    self.request_history.append(request)
``` | 1medium
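Hypothetically, usage would then look something like this (the context-manager shape and attribute names are my guess at how this library's interceptor would be opened; illustrative only):
```python
# Hypothetical usage of the proposed parameter; names are illustrative.
async with tab.network_interceptor(request_history=True) as interceptor:
    await run_actions(tab)          # code whose traffic we want to inspect

for request in interceptor.request_history:
    print(request.url)              # analyze what happened during the run
```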
|
Title: Can server create stream?
Body: Hi,
I am using aioquic to develop a piece of software. I want to create a one-way (unidirectional) stream from the server to transfer data within the existing session, but I can't seem to find a way to create a stream on the server side.
Can anyone suggest a solution?
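For reference, this is what I've pieced together so far from my reading of the QuicConnection API (`get_next_available_stream_id` / `send_stream_data`); I'm not sure it is the intended way:
```python
from aioquic.asyncio import QuicConnectionProtocol


class ServerProtocol(QuicConnectionProtocol):
    def push_data(self, payload: bytes) -> None:
        # Allocate a server-initiated unidirectional stream and send on it.
        stream_id = self._quic.get_next_available_stream_id(is_unidirectional=True)
        self._quic.send_stream_data(stream_id, payload, end_stream=True)
        self.transmit()  # flush pending datagrams to the network
```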
Thanks. | 1medium
|
Title: Cannot extend the User model to use nested objects with Ormar
Body: `OrmarUserDatabase.create()` will not save declared nested models/relationships. They do still appear in the OpenAPI schema docs, but the relationships cannot be saved or retrieved.
Steps to reproduce the behavior:
1. Extend your models with a relational field. I have used roles:
```python
class PublicMeta(ormar.ModelMeta):
    '''For use with the Public Postgres schema'''
    metadata = metadata
    database = database


class Role(ormar.Model):
    class Meta(PublicMeta):
        tablename = 'role'

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=50)
    description: str = ormar.String(max_length=255, nullable=True)


class UserModel(OrmarBaseUserModel):
    class Meta(PublicMeta):
        tablename = 'user'

    roles: Optional[List[Role]] = ormar.ManyToMany(Role)
```
Extend Pydantic models:
```python
class User(models.BaseUser):
    roles: List[Role]

# UserCreate etc. omitted
```
2. Your registration route should look like this:

3. POST to `/registration`
4. In the response, the role value is empty:
```json
{
"id": "c020a52e-3355-4066-bed2-aa13287305ff",
"email": "[email protected]",
"is_active": true,
"is_superuser": false,
"is_verified": false,
"roles": []
}
```
## Expected behavior
Nested relationships should be stored.
## Configuration
- Python version : 3.8
- FastAPI version : 0.68.1
- FastAPI Users version : 8.1.0
### FastAPI Users configuration
Shown above
## Additional context
It looks like this can be solved by using Ormar's `save_related()` and possibly `select_all()` (for reading) methods.
I managed to store the relationship by calling `save_related` in `OrmarUserDatabase.create()` as follows:
```python
async def create(self, user: UD) -> UD:
    oauth_accounts = getattr(user, "oauth_accounts", [])
    model = await self.model(**user.dict(exclude={"oauth_accounts"})).save()
    await model.save_related()
    if oauth_accounts and self.oauth_account_model:
        await self._create_oauth_models(model=model, oauth_accounts=oauth_accounts)
    user_db = await self._get_user(id=user.id)
    return cast(UD, user_db)
```
I suspect that the `_get_db_user()` function would need to call Ormar's [select_all()](https://collerek.github.io/ormar/queries/joins-and-subqueries/#select_all) method in order to return the relationship when querying users, but I've not had success with that, as I'm a bit new to Ormar's API.
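For the read side, what I've been attempting looks roughly like this (my reading of Ormar's `select_related`/`select_all` API; this is the part I couldn't get working):
```python
# Follow the roles relation explicitly...
user = await UserModel.objects.select_related("roles").get(id=user_id)
# ...or follow every declared relation:
user = await UserModel.objects.select_all().get(id=user_id)
```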
I'm happy to do a pull request but I need some guidance here. | 1medium
|
Title: How to use my own class names when predicting?
Body: ### Search before asking
- [ ] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
How can I use my own class names when predicting?
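For context, one thing I've tried is remapping the names dict on the results before plotting (not sure whether this is the supported way; the mapping below is hypothetical):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.predict("image.jpg")

# Hypothetical remapping: override class-id -> name before rendering.
custom_names = {0: "worker", 2: "vehicle"}
for r in results:
    r.names = {**r.names, **custom_names}
    r.plot()  # annotations now use the custom names
```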
### Additional
_No response_ | 0easy
|
Title: Notebook not found: Serialise Detections to a CSV File
Body: ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
The Colab notebook linked from this [cookbook](https://supervision.roboflow.com/develop/notebooks/serialise-detections-to-csv/) is not found.
<img width="1118" alt="Screenshot 2024-08-19 at 1 19 21 PM" src="https://github.com/user-attachments/assets/07b23e28-0ccc-456d-a496-631e3600bb57">
```
Notebook not found
There was an error loading this notebook. Ensure that the file is accessible and try again.
Ensure that you have permission to view this notebook in GitHub and authorize Colab to use the GitHub API.
https://github.com/roboflow/supervision/blob/develop/docs/notebooks/detections-to-jsonsink.ipynb
Could not find detections-to-jsonsink.ipynb in https://api.github.com/repos/roboflow/supervision/contents/docs/no
```
### Environment
Browser only error: https://supervision.roboflow.com/develop/notebooks/serialise-detections-to-csv/
### Minimal Reproducible Example
Steps:
1. Open the cookbook https://supervision.roboflow.com/develop/notebooks/serialise-detections-to-csv/
2. Click on "Open in Colab"
3. Get the 404 error
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | 0easy
|
Title: --keep-fragments doesn't work with livestreams
Body: ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Provide a description that is worded well enough to be understood
I was trying to keep the fragments alongside the single output file for a live news stream with the command below:
yt-dlp --downloader "m3u8:native" --keep-fragments -f 300 -P hls-live/ -v -o "%(title)s.%(ext)s" YDfiTGGPYCk
However, no fragments appeared in the specified folder, although the single output file was produced as specified. Format 300 contained both video and audio during testing. Note that the verbose log below shows the ffmpeg downloader being invoked despite `--downloader "m3u8:native"`.
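For reproduction, I believe the embedded-API equivalent of the command is roughly this (option names taken from my reading of the YoutubeDL options docstring; treat the mapping as my assumption):
```python
import yt_dlp

ydl_opts = {
    "format": "300",
    "keep_fragments": True,                       # --keep-fragments
    "external_downloader": {"m3u8": "native"},    # --downloader "m3u8:native"
    "paths": {"home": "hls-live"},                # -P hls-live/
    "outtmpl": {"default": "%(title)s.%(ext)s"},  # -o "%(title)s.%(ext)s"
    "verbose": True,                              # -v
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(["YDfiTGGPYCk"])
```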
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
[debug] Command-line config: ['-vU']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [4bd265539] (zip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-125-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: certifi-2020.06.20, requests-2.25.1, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-1.26.20
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
++++++++++++++++++++++++ Below is the output produced during the download ++++++++++++++++++++
[debug] Command-line config: ['--downloader', 'm3u8:native', '--keep-fragments', '-f', '300', '-P', 'hls-live/', '-v', '-o', '%(title)s.%(ext)s', 'YDfiTGGPYCk']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [4bd265539] (zip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-125-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: certifi-2020.06.20, requests-2.25.1, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-1.26.20
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[youtube] Extracting URL: YDfiTGGPYCk
[youtube] YDfiTGGPYCk: Downloading webpage
[youtube] YDfiTGGPYCk: Downloading ios player API JSON
[youtube] YDfiTGGPYCk: Downloading mweb player API JSON
[youtube] YDfiTGGPYCk: Downloading m3u8 information
[youtube] YDfiTGGPYCk: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[info] YDfiTGGPYCk: Downloading 1 format(s): 300
[debug] Invoking ffmpeg downloader on "https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8"
[download] Destination: hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4
[debug] ffmpeg command line: ffmpeg -y -loglevel verbose -headers 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.41 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Sec-Fetch-Mode: navigate
' -i https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8 -c copy -f mpegts 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4.part'
ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
[tcp @ 0x558771f1bac0] Starting connection attempt to 172.217.19.238 port 443
[tcp @ 0x558771f1bac0] Starting connection attempt to 2a00:1450:4006:80b::200e port 443
[tcp @ 0x558771f1bac0] Connected attempt failed: Network is unreachable
[tcp @ 0x558771f1bac0] Successfully connected to 172.217.19.238 port 443
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:00.008+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32920/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[hls @ 0x558771f16c80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32920/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[tcp @ 0x558772246200] Starting connection attempt to 203.66.182.18 port 443
[tcp @ 0x558772246200] Successfully connected to 203.66.182.18 port 443
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32921/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[hls @ 0x558771f16c80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32921/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[tcp @ 0x558772286100] Starting connection attempt to 203.66.182.18 port 443
[tcp @ 0x558772286100] Successfully connected to 203.66.182.18 port 443
[h264 @ 0x5587725afdc0] Reinit context to 1280x720, pix_fmt: yuv420p
Input #0, hls, from 'https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8':
Duration: N/A, start: 69155.644478, bitrate: N/A
Program 0
Metadata:
variant_bitrate : 0
Stream #0:0: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp
Metadata:
variant_bitrate : 0
Stream #0:1: Video: h264 (Main), 1 reference frame ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
Metadata:
variant_bitrate : 0
[mpegts @ 0x5587725d8a40] service 1 using PCR in pid=256, pcr_period=83ms
[mpegts @ 0x5587725d8a40] muxrate VBR, sdt every 500 ms, pat/pmt every 100 ms
Output #0, mpegts, to 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4.part':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: h264 (Main), 1 reference frame ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, left), 1280x720 (0x0) [SAR 1:1 DAR 16:9], q=2-31, 59.94 fps, 59.94 tbr, 90k tbn, 90k tbc
Metadata:
variant_bitrate : 0
Stream #0:1: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp
Metadata:
variant_bitrate : 0
Stream mapping:
Stream #0:1 -> #0:0 (copy)
Stream #0:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32922/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32922/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[tcp @ 0x558772c030c0] Starting connection attempt to 172.217.19.238 port 443ts/s speed=2.43x
[tcp @ 0x558772c030c0] Starting connection attempt to 2a00:1450:4006:80b::200e port 443
[tcp @ 0x558772c030c0] Connected attempt failed: Network is unreachable
[tcp @ 0x558772c030c0] Successfully connected to 172.217.19.238 port 443
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:10.018+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32923/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x55877227ab80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32923/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32924/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32924/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[https @ 0x558772b7c240] Opening 'https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8' for reading
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:20.028+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32925/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32925/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32926/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x55877227ab80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32926/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[https @ 0x558772b7c240] Opening 'https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8' for reading
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:25.033+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32927/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32927/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[https @ 0x558772b7c240] Opening 'https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8' for reading
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:30.038+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32928/goap/lmt%3D18/govp/lmt%3D18/dur/4.938/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32928/goap/lmt%3D18/govp/lmt%3D18/dur/4.938/file/seg.ts' for reading
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32929/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x55877227ab80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32929/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
^Cframe= 2544 fps= 74 q=-1.0 Lsize= 7181kB time=00:00:42.44 bitrate=1385.9kbits/s speed=1.24x
video:5895kB audio:676kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 9.275200%
Input file #0 (https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8):
Input stream #0:0 (audio): 1829 packets read (692344 bytes);
Input stream #0:1 (video): 2544 packets read (6036934 bytes);
Total: 4373 packets (6729278 bytes) demuxed
Output file #0 (file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4.part):
Output stream #0:0 (video): 2544 packets muxed (6036934 bytes);
Output stream #0:1 (audio): 1829 packets muxed (692344 bytes);
Total: 4373 packets (6729278 bytes) muxed
[AVIOContext @ 0x558772c00900] Statistics: 0 seeks, 29 writeouts
[AVIOContext @ 0x558772288fc0] Statistics: 4942332 bytes read, 0 seeks
[AVIOContext @ 0x5587728d3780] Statistics: 2506792 bytes read, 0 seeks
[AVIOContext @ 0x558772a21b80] Statistics: 28559 bytes read, 0 seeks
[AVIOContext @ 0x558772257200] Statistics: 6860 bytes read, 0 seeks
Exiting normally, received signal 2.
[ffmpeg] Interrupted by user
[download] 100% of 7.01MiB in 00:00:39 at 183.07KiB/s
[debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4'
[debug] ffmpeg command line: ffprobe -show_streams 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4'
[FixupM3u8] Fixing MPEG-TS in MP4 container of "hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4' -map 0 -dn -ignore_unknown -c copy -f mp4 -bsf:a aac_adtstoasc -movflags +faststart 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.temp.mp4'
```
| 1medium
|
Title: t2t_decoder hangs when dot_product_relative_v2 is used
Body: Hello,
I am trying to train a custom Transformer model that has a decoder only (with a custom bottom['targets']), for sequence generation. I was able to train and generate from the model when I had not specified any other special params. However, the generated sequences frequently had a failure mode where certain tokens repeated too often.
I then added the following two params and am training a new model.
```python
hparams.self_attention_type = "dot_product_relative_v2"
hparams.max_relative_position = 256
```
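For context, a minimal sketch of how these two params could live in a custom hparams set (the set name below is illustrative; it builds on T2T's `transformer_base`):
```python
from tensor2tensor.models import transformer
from tensor2tensor.utils import registry

@registry.register_hparams
def transformer_relative_decoder():  # illustrative name to pass as HPARAMS_SET
    hparams = transformer.transformer_base()
    hparams.self_attention_type = "dot_product_relative_v2"
    hparams.max_relative_position = 256
    return hparams
```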
However, now when I run t2t_decoder, it hangs and does not generate any output (it's hard to kill with ^C; I have to use kill -9).
I run the decoder in interactive mode, and simply press the return at the '>' prompt.
```
t2t_decoder --data_dir="${DATA_DIR}" --decode_hparams="${DECODE_HPARAMS}" --decode_interactive --hparams="sampling_method=random" --hparams_set=${HPARAMS_SET} --model=${MODEL} --problem=${PROBLEM} --output_dir=${TRAIN_DIR}
```
where:
```
DECODE_HPARAMS="alpha=0,beam_size=1,extra_length=2048"
MODEL=transformer
```
OS: macOS, High Sierra
```
$ pip freeze | grep tensor
Error [Errno 20] Not a directory: '/Users/vida_vakil/miniconda3/lib/python3.6/site-packages/magenta-1.0.2-py3.6.egg' while executing command git rev-parse
Exception:
....
NotADirectoryError: [Errno 20] Not a directory: '/Users/vida_vakil/miniconda3/lib/python3.6/site-packages/magenta-1.0.2-py3.6.egg'
```
The model I am using is based on Score2Perf (https://github.com/tensorflow/magenta/tree/master/magenta/models/score2perf), and I have installed it using instructions from their page, and here: https://github.com/tensorflow/magenta
Looks like the error has to do with the egg thing.
```
$ python -V
Python 3.6.6 :: Anaconda, Inc.
tensorflow 1.12.0
tensor2tensor 1.13.0
```
Thanks in advance | 2hard
|
Title: AttributeError: 'Bot' object has no attribute 'enable_puid'
Body: Is the `enable_puid` method no longer available?
```
Traceback (most recent call last):
File "C:/Users/Administrator/PycharmProjects/example/weixin/1.py", line 7, in <module>
bot.enable_puid('wxpy_puid.pkl')
AttributeError: 'Bot' object has no attribute 'enable_puid'
```
My code is:
```
# -*- coding:utf-8 -*-
from wxpy import *
# Initialize the bot and log in by scanning the QR code
bot = Bot(cache_path=True)
# Enable the puid attribute and specify the path for saving/loading the puid mapping data
bot.enable_puid('wxpy_puid.pkl')
```
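A quick way to check whether the installed wxpy build simply predates puid support (an assumption worth verifying; the method is documented in later releases):
```
import wxpy

# If this prints False, the installed wxpy has no puid support yet;
# upgrading the package should add it.
print(hasattr(wxpy.Bot, "enable_puid"))
```
| 1medium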
|
Title: ComfyUI seems to ignore the --reserve-vram and/or --disable-smart-memory ? Is there anything going wrong ?
Body: ### Your question
So I am using every option available to reduce VRAM usage.
`ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram --force-fp16 --reserve-vram 2.4 --disable-smart-memory`
However, I keep seeing the memory usage of the loaded model increase ->

and then I am getting memory errors in later generations as shown in the logs and the generation slows down significantly. Sometimes I get an OOM error and the generation dies.
I am basically using an XY generation of pictures with different guidance values and base_shifts. I am using the flux-q4-ks.gguf model (with the clip models t5_v1.1_q8 and vit-l-text-detail) and this [lora](https://civitai.com/models/730373/hyper-realism-lora-by-aidma-flux).
I can provide the workflow if needed for reproduction.
The issue seems very close to #5958, #5385 and #4318.
However, I am using a SamplerCustomAdvanced node and an NVIDIA card, so I'm wondering whether this is a different problem or whether I am doing something wrong.
### Logs
```powershell
[2025-01-01 20:36:33.946] Requested to load Flux
[2025-01-01 20:36:34.009] 0 models unloaded.
[2025-01-01 20:36:34.118] loaded partially 6320.525634765625 6320.427978515625 0
[2025-01-01 20:36:34.123] Attempting to release mmap (44)
[2025-01-01 20:36:34.261] ERROR lora diffusion_model.single_blocks.0.linear2.weight Allocation on device
[2025-01-01 20:38:46.045]
[2025-01-01 20:40:26.966]
[2025-01-01 20:42:00.008]
[2025-01-01 20:43:26.563]
[2025-01-01 20:45:06.902]
[2025-01-01 20:46:33.755]
[2025-01-01 20:48:04.298]
[2025-01-01 20:49:35.220]
[2025-01-01 20:52:40.229]
[2025-01-01 20:55:56.443]
[2025-01-01 20:58:42.092]
[2025-01-01 21:01:50.056]
[2025-01-01 21:02:17.609]
bosh3: 100%|█████████████████████████████████████████████████████████████████| 1/1 [25:43<00:00, 1543.35s/it]
bosh3: 100%|█████████████████████████████████████████████████████████████████| 1/1 [25:43<00:00, 1543.35s/it]
[2025-01-01 21:02:17.612] Requested to load Flux
[2025-01-01 21:02:17.679] 0 models unloaded.
[2025-01-01 21:02:17.759] loaded partially 6384.427978515625 6383.381103515625 0
[2025-01-01 21:02:17.765] Attempting to release mmap (40)
[2025-01-01 21:02:17.870] ERROR lora diffusion_model.single_blocks.0.linear2.weight Allocation on device
[2025-01-01 21:04:24.271]
```
### Other
Total VRAM 8192 MB, total RAM 40352 MB
pytorch version: 2.5.1+cu124
Forcing FP16.
Set vram state to: LOW_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce RTX 3070 Laptop GPU : cudaMallocAsync
Using pytorch cross attention
### Loading: ComfyUI-Manager (V2.55.5)
### ComfyUI Version: v0.3.7-13-g44db978 | Released on '2024-12-10' | 2hard
|
Title: x_powered_by_vuln module to log/show the header value
Body: We now have the ability to log response results; the module needs to be updated to log/display the X-Powered-By header value.
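A rough sketch of what the module's check could record (assumes the `requests` library is available; the function name is hypothetical):
```python
import requests

def check_x_powered_by(url):
    # hypothetical helper: fetch the target and log the X-Powered-By header value
    response = requests.get(url, timeout=10)
    value = response.headers.get("X-Powered-By")
    if value:
        print(f"[+] {url} reports X-Powered-By: {value}")
    return value
```
| 1medium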
|
Title: Issue with Gradio assets not loading through Nginx reverse proxy
Body: ### Describe the bug
When accessing a Gradio application through Nginx reverse proxy, the main page loads but static assets (JS/CSS) fail to load with 404 errors when the page attempts to fetch them automatically.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
### gradio code
```python
import gradio as gr
import time
def test(x):
    time.sleep(4)
    return x

gr.Interface(test, "textbox", "textbox").queue().launch(
    root_path="/webui",  # note: I also tried plain nginx proxying to http://0.0.0.0:7861/ with no root_path="/webui"
    server_name="0.0.0.0",
    server_port=7861,
)
```
### Current Behavior
- `localhost:9999/webui` loads successfully and returns the Gradio web interface
- When the page tries to fetch its assets, the following requests return 404:
- `localhost:9999/assets/index-Dj1xzGVg.js`
- `localhost:9999/assets/index-Bmd1Nf3q.css`
I manually tried accessing with the /webui prefix, but still got 404:
- `localhost:9999/webui/assets/index-Dj1xzGVg.js`
- `localhost:9999/webui/assets/index-Bmd1Nf3q.css`
However, accessing directly through port 7861 works:
- `localhost:7861/webui/assets/index-Dj1xzGVg.js`
- `localhost:7861/webui/assets/index-Bmd1Nf3q.css`
### Expected Behavior
Static assets should load correctly when accessing the application through the Nginx reverse proxy at `localhost:9999/webui`.
### Question
Is there something wrong with my configuration? How can I properly serve Gradio's static assets through the Nginx reverse proxy?
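For comparison, a hedged sketch of an alternative setup that serves the app under a sub-path by mounting it in FastAPI via `gr.mount_gradio_app` (uvicorn assumed), so the UI and its `/webui/assets` come from one known prefix:
```python
import gradio as gr
import uvicorn
from fastapi import FastAPI

app = FastAPI()
demo = gr.Interface(lambda x: x, "textbox", "textbox")
# Mount the Gradio app under /webui; its static assets are then served below that prefix.
app = gr.mount_gradio_app(app, demo, path="/webui")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=7861)
```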
### Additional Notes
- The main application interface loads correctly
- Static assets (JS/CSS) fail to load when the page tries to fetch them automatically
- Direct access to the Gradio server works as expected
### Nginx Configuration
```
server {
listen 9999;
server_name _;
root /root;
index index.php index.html index.htm;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ [^/]\.php(/|$) {
try_files $uri =404;
fastcgi_pass unix:/run/php/php8.1-fpm.sock;
fastcgi_index index.php;
set $path_info $fastcgi_path_info;
set $real_script_name $fastcgi_script_name;
if ($fastcgi_script_name ~ "^(.+?\.php)(/.+)$") {
set $real_script_name $1;
set $path_info $2;
}
fastcgi_param SCRIPT_FILENAME $document_root$real_script_name;
fastcgi_param SCRIPT_NAME $real_script_name;
fastcgi_param PATH_INFO $path_info;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
location /webui {
proxy_pass http://0.0.0.0:7861/webui/;  # I also tried http://0.0.0.0:7861/ with no root_path="/webui"
proxy_buffering off;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
### Screenshot
Gradio is running normally on port 7861, and 7861/webui can also be accessed.
The following is the situation on localhost:9999:
The webpage opens up as a blank page.
However, the returned HTML contains Gradio content.

### Logs
```shell
None
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.6.0
gradio_client version: 1.4.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.3.0
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.0
huggingface-hub: 0.26.2
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.0
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.1
ruff: 0.7.4
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.13.0
typing-extensions: 4.11.0
urllib3: 2.2.1
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.0
huggingface-hub: 0.26.2
packaging: 24.0
typing-extensions: 4.11.0
websockets: 12.0
```
### Severity
Blocking usage of gradio | 2hard
|
Title: docs: add How-To section placeholder to all brokers pages
Body: I think we should add such a placeholder to all the specific broker sections: RabbitMQ, NATS, Redis.
Also, we should edit the current Kafka placeholder and add detailed information on exactly how the user should add the section to the navigation. | 0easy
|
Title: Release test long_running_many_tasks.aws failed
Body: Release test **long_running_many_tasks.aws** failed. See https://buildkite.com/ray-project/release/builds/34295#01954657-cbe0-487c-b8a5-d54e567f3856 for more details.
Managed by OSS Test Policy | 2hard
|
Title: Confusing response to a request from a TLS client that does not support TLS SNI
Body: We did some tests with `openssl s_client -connect 23.23.115.5:443` to simulate our embedded client.
We couldn't establish a connection with this method and always got an "Internal_error" alert. After digging deeper we found that `openssl s_client -connect 23.23.115.5:443 -servername www.httpbin.org` works fine.
RFC 6066 (https://tools.ietf.org/html/rfc6066) states:
> If the server understood the ClientHello extension but does not recognize the server name, the server SHOULD take one of two actions: either abort the handshake by sending a fatal-level unrecognized_name(112) alert or continue the handshake.
It would be very nice to change the response to "unrecognized_name" instead of "Internal_error".
This will help others to find the root cause much faster ;-) | 2hard
|
Title: Emulated Hue error since 2025.3.x
Body: ### The problem
Emulated Hue devices show as offline in the Harmony Hub; repairing or refreshing doesn't fix it.
Accessing http://haip:8300/api/v2/lights throws a 500 Internal Server Error.
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
core-2024.x.x
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
Emulated Hue
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/emulated_hue/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
Logger: aiohttp.server
Source: /usr/local/lib/python3.13/site-packages/aiohttp/web_protocol.py:451
First occurred: 6:56:39 PM (3 occurrences)
Last logged: 6:56:57 PM
Error handling request from 192.168.0.54
Error handling request from 192.168.0.20
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/aiohttp/web_protocol.py", line 480, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/aiohttp/web_app.py", line 569, in _handle
return await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/http.py", line 75, in handle
result = handler(request, **request.match_info)
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 247, in get
return self.json(create_list_of_entities(self.config, request))
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 899, in create_list_of_entities
config.entity_id_to_number(entity_id): state_to_json(config, state)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 775, in state_to_json
state_dict = get_entity_state_dict(config, state)
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 666, in get_entity_state_dict
return _build_entity_state_dict(entity)
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 740, in _build_entity_state_dict
data[STATE_BRIGHTNESS] = round(percentage * HUE_API_STATE_BRI_MAX / 100)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
TypeError: unsupported operand type(s) for /: 'str' and 'int'
```
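The traceback suggests the brightness-related attribute arrives as a string rather than a number for some entity; a hedged sketch of a defensive guard around the failing line in `hue_api.py` (illustrative only, not the actual fix):
```python
# sketch around _build_entity_state_dict; assumes `percentage` can arrive as a string
percentage = float(percentage)
data[STATE_BRIGHTNESS] = round(percentage * HUE_API_STATE_BRI_MAX / 100)
```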
### Additional information
_No response_ | 1medium
|
Title: BatchedInferencePipeline degrades transcription quality heavily
Body: At first the new `BatchedInferencePipeline` seems great: it produces around a 2X speed improvement over the normal pipeline. But after some more testing I discovered that for some audio files the transcription quality is heavily degraded. Whole segments are missing compared to the normal pipeline, and some segments switch language midway for long stretches.
Example:
Segment A has 30 seconds of audio, fully in Dutch (it does contain a few English words). Halfway through the segment, the transcription switches to English, translating the Dutch audio, and at the end of the segment the `initial_prompt` is emitted verbatim. This happens in multiple places.
This makes the `BatchedInferencePipeline` unsuitable for a production application. | 2hard
|
Title: Support `<pre>` blocks with Textractor
Body: Update the Textractor pipeline to handle `<pre>` blocks. These blocks should be converted to Markdown code blocks.
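A rough sketch of the conversion (illustrative only; this is not the actual Textractor internals, and it assumes BeautifulSoup is available):
```python
from bs4 import BeautifulSoup

FENCE = "`" * 3  # a Markdown code fence

def pre_to_markdown(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for pre in soup.find_all("pre"):
        # keep the block's literal text and wrap it in a fenced code block
        pre.replace_with(f"\n{FENCE}\n{pre.get_text()}\n{FENCE}\n")
    return str(soup)
```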
| 1medium
|
Title: add functionality to detect introduction of NAN in transform method of discretizers
Body: For the arbitrary, equal-width and equal-frequency discretisers, we should include a check in the transform method to see whether NaN are being introduced accidentally.
This can happen if a value falls outside the boundaries entered by the user in the arbitrary discretiser, or if the variable is too skewed for the other transformers.
The check should raise a warning to begin with (so as not to break backwards compatibility), and should tell the user exactly in which variables the NaN were introduced.
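A minimal sketch of what such a check inside `transform()` could look like (names assumed; `self.variables_` stands in for whichever attribute holds the processed variables):
```python
import warnings

def _check_nan_introduced(self, X):
    # hypothetical helper, called at the end of transform()
    nan_columns = [var for var in self.variables_ if X[var].isnull().any()]
    if nan_columns:
        warnings.warn(
            f"NaN values were introduced in the following variables: {nan_columns}"
        )
```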
We should also expand the tests of all the transformers to make sure this functionality works. As it is the same functionality for all transformers, and also for the categorical encoders as per issue #294, I wonder if we should make a master function out of this? I think it depends on the amount of code; if it is a one-liner, probably not. | 0easy
|
Title: [tsv] add CLI option to use NUL as delimiter
Body: It's useful to parse output from GNU grep's `-Z` option. That produces lines that in Python are `f'{filename}\0{line}\n'`, instead of the usual `f'{filename}:{line}\n'`.
Right now the command line can't be used to specify a NUL delimiter, as in `vd --delimiter="\0"`, because `sys.argv` strings are NUL-terminated and can't ever contain NUL.
My workaround for now is to use .visidatarc: either add a temporary line:
`vd.option('delimiter', '\x00', 'field delimiter to use for tsv/usv filetype', replay=True)`.
or add a new filetype to allow `vd -f nsv`:
```
@VisiData.api
def open_nsv(vd, p):
    tsv = TsvSheet(p.base_stem, source=p)
    tsv.delimiter = '\x00'
    tsv.reload()
    return tsv
```
Can `open_nsv()` be written without `reload()` right now? I couldn't think of another way to set `delimiter` for TsvSheet. | 1medium
|
Title: Why does the mAP increase by only 0.001 percent every epoch? Any suggestions on how to speed this up?
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I’ve been training a YOLO model on a custom dataset and have noticed that the mean Average Precision (mAP) increases by approximately 0.001% with each epoch. The training process doesn't provide clear guidance on when to stop, and I'm concerned that the model might be overfitting. However, the confusion matrix at epoch 400 doesn't seem to indicate overfitting.
Do you have any suggestions on how to determine the optimal stopping point or strategies to prevent potential overfitting?
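For reference, one standard lever here is the trainer's early-stopping `patience` argument (a sketch; the checkpoint and dataset names are placeholders):
```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # placeholder checkpoint
model.train(
    data="custom.yaml",  # placeholder dataset config
    epochs=400,
    patience=50,  # stop automatically if val metrics don't improve for 50 epochs
)
```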
Thank you!
<img width="855" alt="Image" src="https://github.com/user-attachments/assets/3cd039bc-5ed8-4ea2-b646-1b47bfd0c1f5" />
Thanks
### Additional
_No response_ | 1medium
|
Title: Bug: Stop-loss results differ greatly between from_signals(sl_stop=0.1) and generate_ohlc_stop_exits(sl_stop=0.1)
Body: I'm trying to compare the stop loss mechanisms implemented with different ways such as model 1: `from_signals(sl_stop=0.1, sl_trail=False)`, model 2: `generate_ohlc_stop_exits(sl_stop=0.1, sl_trail=False) ` and model 3: `vbt.OHLCSTX.run(sl_stop=[0.1])`.
I found that models 2 and 3 gave exactly the same results (i.e., total return = 459.05%), but model 1's results are very different (i.e., total return = 47.47%), as displayed below:
Model 1: `from_signals(sl_stop=0.1, sl_trail=False)`

Model 2: `generate_ohlc_stop_exits(sl_stop=0.1, sl_trail=False)`

Model 3: `vbt.OHLCSTX.run(sl_stop=[0.1])`

Is it a bug? Or is it my implementation error?
And btw, here is where I got the reference from: https://github.com/polakowo/vectorbt/issues/181
Below are the codes for these 3 models:
--------------------------------------------------
Model 1: Using `from_signals(sl_stop=0.1, sl_trail=False)`
```
# Reference: stop exits with RANDENEX indicator: https://github.com/polakowo/vectorbt/issues/181
import vectorbt as vbt
ohlcv = vbt.YFData.download(
"BTC-USD",
start='2017-01-01 UTC',
end='2020-01-01 UTC'
).concat()
# Random enter signal generator based on the number of signals.
rand = vbt.RAND.run(ohlcv["Close"].shape, n=10, seed=42)
# Random exit signal generator based on the number of signals.
randx = vbt.RANDX.run(rand.entries, seed=42)
pf1 = vbt.Portfolio.from_signals(ohlcv["Close"],
rand.entries,
randx.exits,
open=ohlcv["Open"],
high=ohlcv["High"],
low=ohlcv["Low"],
sl_stop=0.1,
sl_trail=False,
)
pf1.stats()
```
Model 2: Using `generate_ohlc_stop_exits(sl_stop=0.1, sl_trail=False) `
```
import vectorbt as vbt
ohlcv = vbt.YFData.download(
"BTC-USD",
start='2017-01-01 UTC',
end='2020-01-01 UTC'
).concat()
# Random enter signal generator based on the number of signals.
rand = vbt.RAND.run(ohlcv["Close"].shape, n=10, seed=42)
# Random exit signal generator based on the number of signals.
randx = vbt.RANDX.run(rand.entries, seed=42)
stop_exits = rand.entries.vbt.signals.generate_ohlc_stop_exits(
open=ohlcv["Open"],
high=ohlcv['High'],
low=ohlcv['Low'],
close=ohlcv['Close'],
sl_stop=0.1,
sl_trail=False,
)
exits = randx.exits.vbt | stop_exits # optional: combine exit signals such that the first exit of two conditions wins
entries, exits = rand.entries.vbt.signals.clean(exits) # optional: automatically remove ignored exit signals
pf2 = vbt.Portfolio.from_signals(ohlcv['Close'], entries, exits,
open=ohlcv["Open"],high=ohlcv["High"],
low=ohlcv["Low"])
pf2.stats()
```
Model 3: Using `vbt.OHLCSTX.run(sl_stop=[0.1])`
```
import numpy
import vectorbt as vbt
ohlcv = vbt.YFData.download(
"BTC-USD",
start='2017-01-01 UTC',
end='2020-01-01 UTC'
).concat()
# Random enter signal generator based on the number of signals.
rand = vbt.RAND.run(ohlcv["Close"].shape, n=10, seed=42)
# Random exit signal generator based on the number of signals.
randx = vbt.RANDX.run(rand.entries, seed=42)
stops = [0.1,]
sl_exits = vbt.OHLCSTX.run(
rand.entries,
ohlcv['Open'],
ohlcv['High'],
ohlcv['Low'],
ohlcv['Close'],
sl_stop=list(stops),
stop_type=None,
stop_price=None
).exits
exits = randx.exits.vbt | sl_exits
pf3 = vbt.Portfolio.from_signals(ohlcv['Close'], rand.entries, exits) # with SL
pf3.stats()
```
| 1medium
|
Title: "buffer_size" error in the middle of training
Body: I ran into the "buffer_size cannot be larger than the size of the DataFlow!" error in the middle of training (e.g., after epoch 10). I'm trying to build a minimal reproduction for debugging, but haven't managed to yet.
In the meantime, can I ask your advice on where to look?
### training code
```
df = MyDataFlow(config, 'trainvalminusminival')
df = MultiThreadMapData(df, 10, df.mapf, buffer_size=32, strict=True)
df = PrefetchDataZMQ(df, 10)
df = BatchData(df, config.TRAIN.BATCH_SIZE, remainder=False)
vdf = MyDataFlow(config, 'minival')
vdf = MultiThreadMapData(vdf, 10, vdf.mapf, buffer_size=32, strict=True)
vdf = PrefetchDataZMQ(vdf, 10)
vdf = BatchData(vdf, config.TRAIN.BATCH_SIZE)
model = MyModel(config)
traincfg = get_train_config(model, df, vdf, config)
nr_tower = max(get_num_gpu(), 1)
trainer = SyncMultiGPUTrainerReplicated(nr_tower)
launch_train_with_config(traincfg, trainer)
```
### data flow
```
class MyDataFlow(RNGDataFlow):
def __init__(self, config, split, path, aug=False):
super(MyDataFlow, self).__init__()
self.config = config
self.image_size = config.DATA.IMAGE_SIZE
self.aug = aug
... tfrecord file grapping and generator using tf.python_io.tf_record_iterator ...
logger.info('{}: grabbed {} TFRecords.'.format(split, len(tfrecords)))
logger.info('{}: grabbed {} examples.'.format(split, self.num_samples))
def __len__(self):
return self.num_samples
def __iter__(self):
while True:
example = next(self.generator)
... parsing using tf.train.Example.FromString(example) ...
yield key, points, label
def mapf(self, example):
... some preprocessing ...
```
### log
````
[1021 15:51:58 @base.py:250] Start Epoch 8 ...
[1021 16:12:30 @base.py:260] Epoch 8 (global_step 2500000) finished, time:20 minutes 31 seconds.
[1021 16:12:30 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[1021 16:12:30 @saver.py:77] Model saved to train_log/config/model-2500000.
[1021 16:12:31 @misc.py:109] Estimated Time Left: 15 hours 45 minutes 23 seconds
[1021 16:14:30 @monitor.py:459] DataParallelInferenceRunner/QueueInput/queue_size: 25
[1021 16:14:30 @monitor.py:459] GPUUtil/0: 19.745
[1021 16:14:30 @monitor.py:459] QueueInput/queue_size: 49.969
[1021 16:14:30 @monitor.py:459] cost: 5.802
[1021 16:14:30 @monitor.py:459] learning_rate: 0.01
[1021 16:14:30 @monitor.py:459] train-error-top1: 0.98717
[1021 16:14:30 @monitor.py:459] train-error-top3: 0.96963
[1021 16:14:30 @monitor.py:459] val-error-top1: 0.99726
[1021 16:14:30 @monitor.py:459] val-error-top3: 0.99152
[1021 16:14:30 @group.py:48] Callbacks took 119.715 sec in total. DataParallelInferenceRunner: 1 minute 59 seconds
[1021 16:14:30 @base.py:250] Start Epoch 9 ...
[1021 16:35:00 @base.py:260] Epoch 9 (global_step 2812500) finished, time:20 minutes 30 seconds.
[1021 16:35:00 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[1021 16:35:00 @saver.py:77] Model saved to train_log/config/model-2812500.
[1021 16:35:01 @misc.py:109] Estimated Time Left: 15 hours 22 minutes 47 seconds
[1021 16:37:01 @monitor.py:459] DataParallelInferenceRunner/QueueInput/queue_size: 25
[1021 16:37:01 @monitor.py:459] GPUUtil/0: 19.735
[1021 16:37:01 @monitor.py:459] QueueInput/queue_size: 49.858
[1021 16:37:01 @monitor.py:459] cost: 5.8078
[1021 16:37:01 @monitor.py:459] learning_rate: 0.01
[1021 16:37:01 @monitor.py:459] train-error-top1: 0.99174
[1021 16:37:01 @monitor.py:459] train-error-top3: 0.9626
[1021 16:37:01 @monitor.py:459] val-error-top1: 0.99711
[1021 16:37:01 @monitor.py:459] val-error-top3: 0.99116
[1021 16:37:01 @group.py:48] Callbacks took 120.659 sec in total. DataParallelInferenceRunner: 2 minutes
[1021 16:37:01 @base.py:250] Start Epoch 10 ...
[1021 16:57:34 @base.py:260] Epoch 10 (global_step 3125000) finished, time:20 minutes 32 seconds.
[1021 16:57:34 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[1021 16:57:34 @saver.py:77] Model saved to train_log/config/model-3125000.
[1021 16:57:34 @misc.py:109] Estimated Time Left: 15 hours 36 seconds
[1021 16:59:32 @parallel_map.py:53] [4m [5m [31mERR [0m [MultiThreadMapData] buffer_size cannot be larger than the size of the DataFlow!
[1021 16:59:32 @parallel_map.py:53] [4m [5m [31mERR [0m [MultiThreadMapData] buffer_size cannot be larger than the size of the DataFlow!
````
### error related code: `parallel_map.py`
```
def _fill_buffer(self, cnt=None):
if cnt is None:
cnt = self._buffer_size - self._buffer_occupancy
try:
for _ in range(cnt):
dp = next(self._iter)
self._send(dp)
except StopIteration:
logger.error(
"[{}] buffer_size cannot be larger than the size of the DataFlow!".format(type(self).__name__))
raise
self._buffer_occupancy += cnt
```
Is it possible for the data source to run dry (reach the end of its data) during the for loop in `_fill_buffer`?
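One hypothesis worth checking, given the `while True: example = next(self.generator)` pattern above: if the tfrecord generator is created once in `__init__`, it is exhausted after the first full pass, so a later `next()` raises StopIteration inside `_fill_buffer`. A hedged sketch of a fix (the parse helper name is assumed):
```
def __iter__(self):
    # re-create the record iterator on every epoch so it can never run dry
    # while MultiThreadMapData is still filling its buffer
    for tfrecord in self.tfrecords:  # assumed list of grabbed TFRecord paths
        for example in tf.python_io.tf_record_iterator(tfrecord):
            key, points, label = self._parse(example)  # hypothetical parse helper
            yield key, points, label
```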
Python version: 3.5
TF version: 1.11.0
Tensorpack version: 0.8.9 | 1medium
|
Title: Support `background` CSS property in `st.dataframe()`
Body: ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Support setting the `background` CSS property via `df.styler` in `st.dataframe()`.
### Why?
Pandas' beautiful [`Styler.bar()`](https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.bar.html) feature uses the `background` CSS property. It's also possible to add CSS directly via [`Styler.map()`](https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.map.html). See examples below.
I should note that the `background-color` CSS property works as expected, but not `background` or `background-image`. Also, they all work in `st.table()`. Only `background` and `background-image` don't work in `st.dataframe()`.
[](https://issues.streamlitapp.com/?issue=gh-10768)
```py
import pandas as pd
import streamlit as st
df = pd.DataFrame({"solid": [0.1, 0.2, 0.3], "gradient": [0.4, 0.5, 0.6], "bar": [0.7, 0.8, 0.9]})
styler = df.style
styler.format(lambda x: f"{x:.0%}")
styler.map(lambda x: f"background-color: green;", subset="solid")
styler.map(lambda x: f"background-image: linear-gradient(to right, green {x:%}, transparent {x:%});", subset="gradient")
styler.bar(subset="bar", vmin=0, vmax=1, color="green") # Uses a `background: linear-gradient` under the hood.
st.code("st.table()")
st.table(styler) # Both the solid color and the gradient work as expected.
st.divider()
st.code("st.dataframe()")
st.dataframe(styler) # The solid color works as expected, but not the gradient.
```

### How?
_No response_
### Additional Context
_No response_ | 1medium
|
Title: Make the minimum password length configurable
Body: Just 8 characters are not that great anymore according to recent standards. 12 or 15 is a more common minimum nowadays (TODO check the NIST guidelines). However, I could imagine that many Indico instances do not want to enforce such long passwords, so I'd prefer to not change the global default.
- Add `LOCAL_PASSWORD_MIN_LENGTH` setting, default to the current hardcoded value of `8`.
- Do not allow anything shorter unless debug mode is enabled (fail in `IndicoConfig.validate`; see the sketch after this list).
- In `validate_secure_password`, keep the hard check for less than 8 chars (we want to keep forcing a password change for existing users with a shorter password), but also add a check for the new limit when the context is `set-user-password`.
- Maybe populate the config file in the setup wizard with a longer minimum length, so newly installed instances get a better default?
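A rough sketch of that validation (the setting name comes from this issue; the access pattern is assumed):
```python
# inside IndicoConfig.validate (sketch)
min_length = data['LOCAL_PASSWORD_MIN_LENGTH']  # proposed setting, default 8
if min_length < 8 and not data['DEBUG']:
    raise ValueError('LOCAL_PASSWORD_MIN_LENGTH must be at least 8 unless debug mode is enabled')
```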
Alternatively, we could just raise the minimum length but still with the context check to avoid forcing an "upgrade" from everyone who has a shorter one that's still 8+ chars right now. Any opinions? | 1medium
|
Title: What is the best way to get the predicted values of the training set in Darts?
Body: I am trying to get the **predicted values of my training set** in Darts. In SKlearn, one can simply do:
```
model.fit(training_set)
model.predict(training_set)
```
What is the equivalent method in Darts assuming I have target lags, past covariate lags and future covariate lags?
From what I've tried, the .predict() method only looks forward after you fit your data, so I won't be able to get predictions for my training set.
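For what it's worth, a hedged sketch of one candidate approach: `historical_forecasts` with `retrain=False` yields in-sample predictions from the already-fitted model (the model class and lag values below are illustrative; worth double-checking against the installed darts version):
```
from darts.models import LinearRegressionModel

model = LinearRegressionModel(
    lags=12,
    lags_past_covariates=3,
    lags_future_covariates=(0, 3),
)
model.fit(train_series, past_covariates=past_cov, future_covariates=future_cov)

in_sample = model.historical_forecasts(
    train_series,
    past_covariates=past_cov,
    future_covariates=future_cov,
    forecast_horizon=1,
    stride=1,
    retrain=False,  # reuse the fitted model instead of refitting at every step
)
```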
Thanks in advance.
| 1medium
|
Title: Inconsistencies with the behavior of bias initializers, leading to poor performance in some cases
Body: Hello,
I've noticed some (potentially harmful) inconsistencies in bias initializers when running a simple test of the keras package, i.e. using a shallow MLP to learn a sine wave function in the [-1, 1] interval.
# Context
Most of the times (or for deep enough networks), using the default zero-initialization for biases is fine. However, for this simple problem having randomized biases is essential, since without them the neurons end up being too similar (redundant) and training converges to a very poor local optimum.
The [official guide](https://keras.io/api/layers/initializers/#variancescaling-class) suggests to use weight initializers for biases as well.
Now:
* The default initialization from _native_ PyTorch leads to good results that improve as expected as the network size grows.
* Several keras initializers are expected to be similar or identical to the PyTorch behavior (i.e. `VarianceScaling` and all its subclasses), but they fail to produce good results, regardless of the number of neurons in the hidden layer.
# Issues
The issues are due to the fact that all [RandomInitializer](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/initializers/random_initializers.py#L10) subclasses in their `__call__` function only have access to the shape they need to fill.
In case of bias vectors for `Dense` layers, this shape is a one element tuple, i.e. `(n,)` where `n` is the number of units in the current layer.
The [compute_fans function](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/initializers/random_initializers.py#L612) in this case reports a fan in of `n`, which is actually the number of units, i.e. the fan out.
Unfortunately, the correct fan in is not accessible, since the number of layer inputs is not included in the shape of the bias vector.
This makes the [official description of the VarianceScaling initializer](https://keras.io/api/layers/initializers/#variancescaling-class) incorrect when applied to neuron biases. The same holds for the description of the Glorot, He, LeCun initializers, which are implemented as `VarianceScaling` subclasses.
In my simple example, as soon as the shallow network has more than very few neurons, all size-dependent initializers have so little variability that they behave very similarly to a zero initialization (i.e. incredibly poorly). What stumped me (before understanding the problem) is that the larger the network, the worse the behavior.
# About possible fixes
I can now easily fix the issue by computing bounds for `RandomUniform` initializers externally so as to replicate the default PyTorch behavior, but this is not an elegant solution -- and I am worried other users may have encountered similar problems without noticing.
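For concreteness, a minimal sketch of that external computation (layer sizes illustrative; PyTorch's default draws biases from U(-1/sqrt(fan_in), 1/sqrt(fan_in))):
```python
import math
from keras import initializers, layers

fan_in = 1  # number of inputs to the hidden layer in the sine-wave example
bound = 1.0 / math.sqrt(fan_in)
hidden = layers.Dense(
    64,
    activation="tanh",
    bias_initializer=initializers.RandomUniform(-bound, bound),
)
```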
If the goal is correctly computing the fan in, I am afraid that I see no easy fix, short of restructuring the `RandomInitializer` API and giving it access to more information.
However, the real goal here is not actually computing the fan in, but preserving the properties that the size-dependent initializers were attempting to enforce. I would need to read more literature on the topic before suggesting a theoretically sound fix from this perspective. I would be willing to do that, in case the keras teams is fine with going in this direction. | 2hard
|
Title: SNLI-VE dataset reader and model
Body: SNLI-VE is here: https://github.com/necla-ml/SNLI-VE
The VQA reader and model should serve as an example, but there will likely be significant differences. | 2hard
|
Title: About generating images from simulation to the real world
Body: Hi, I'm new to image translation. I hope you can help.
>This is domain A. 1920×1080 images from unity3D.
>
>
>And this is domain B. 1920×1080 underwater images from onboard camera.
>
I trained CycleGAN with --crop_size 512 and tested with --preprocess none, but the results look bad.
I wonder whether the random crop fails to include the small target every time, or whether there is some other reason. I really don't know why this happens or how to solve it. I hope you can give me some tips or a little inspiration.
>This is the input image.
>
>
>And this is the output with epoch 110.
>
| 1medium
|
Title: hardlink count not equal to hardlinked file count
Body: The hardlink count is less than the hardlinked file count. It seems as if 2 hidden hardlinked files automatically generated by AidLux are not counted; their file names have ".l2s." as a prefix, and one has "0001" as a suffix while the other has "0001.000#" as a suffix. There are no two such hardlinked hidden files on an ordinary Linux system.
When a hidden hardlink is deleted, the remaining hardlinks lose their target and become inaccessible. This situation regularly leads to problems when running apt upgrade on AidLux's Debian, or when installing packages. Especially when installing packages from source code, the problem shows up more often and ends in failure.
==== The following are the steps to reproduce the issue:
root@localhost:/tmp/tmp# date>x
root@localhost:/tmp/tmp# ls -ali
total 16
533961 drwx------. 2 root root 3488 Aug 2 23:08 .
207221 drwx------. 49 root root 8192 Aug 2 23:04 ..
1104231 -rw-------. 1 root root 43 Aug 2 23:08 x
root@localhost:/tmp/tmp# ln x y
root@localhost:/tmp/tmp# ls -ali
total 28
533961 drwx------. 2 root root 3488 Aug 2 23:08 .
207221 drwx------. 49 root root 8192 Aug 2 23:04 ..
1104231 -rw-------. 2 root root 43 Aug 2 23:08 .l2s.x0001
1104231 -rw-------. 2 root root 43 Aug 2 23:08 .l2s.x0001.0002
1104231 -rw-------. 2 root root 43 Aug 2 23:08 x
1104231 -rw-------. 2 root root 43 Aug 2 23:08 y
root@localhost:/tmp/tmp# ln x z
root@localhost:/tmp/tmp# ls -ali
total 32
533961 drwx------. 2 root root 3488 Aug 2 23:09 .
207221 drwx------. 49 root root 8192 Aug 2 23:04 ..
1104231 -rw-------. 3 root root 43 Aug 2 23:08 .l2s.x0001
1104231 -rw-------. 3 root root 43 Aug 2 23:08 .l2s.x0001.0003
1104231 -rw-------. 3 root root 43 Aug 2 23:08 x
1104231 -rw-------. 3 root root 43 Aug 2 23:08 y
1104231 -rw-------. 3 root root 43 Aug 2 23:08 z
root@localhost:/tmp/tmp# find . -type l
./.l2s.x0001
./z
./x
./y
root@localhost:/tmp/tmp# cat x
Mon Aug 2 23:49:59 UTC 2021
root@localhost:/tmp/tmp# rm .l2s.x0001
root@localhost:/tmp/tmp# cat x
cat: x: No such file or directory
root@localhost:/tmp/tmp# ls -ali
ls: cannot access 'z': Operation not permitted
ls: cannot access 'x': Operation not permitted
ls: cannot access 'y': Operation not permitted
total 16
533961 drwx------. 2 root root 3488 Aug 2 23:48 .
207221 drwx------. 49 root root 8192 Aug 2 23:04 ..
1104231 -rw-------. 3 root root 43 8月 2 23:08 .l2s.x0001.0003
? l?????????? ? ? ? ? ? x
? l?????????? ? ? ? ? ? y
? l?????????? ? ? ? ? ? z
root@localhost:/tmp/tmp# | 2hard
|
Title: Stale output from a long-running computation erroneously shows as not stale when app rerun
Body: ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
This is a bug report on behalf of Thiago in order to keep it tracked.
When a Streamlit app with a long-running computation is stopped and then re-run, old stale data from the previous run shows up as if it were not stale until the old thread completes execution, at which point the data is properly cleaned up.
### Reproducible Code Example
```Python
import platform
import time
import streamlit as st
st.caption(
f"""
Running with Python {platform.python_version()} and Streamlit {st.__version__}.
"""
)
"""
This is an app!!
"""
def slow_computation(x):
st.write("Starting slow computation...")
time.sleep(10)
st.write("Stopping slow computation...")
return f"Done, {x + 1}"
out = slow_computation(1)
st.write(out)
```
### Steps To Reproduce
This is a Playwright test that simulates the current behavior (which is to say, this test currently passes, but should fail once the underlying issue is fixed). There are `NOTE`s inline that describe expected behavior
```py
import time
from playwright.sync_api import Page, expect
def test_threading_behavior(page: Page):
# This uses `localhost:8501` since I was running different versions of Streamlit
# in an attempt to bisect this issue in case it was introduced at some point in time
page.goto("http://localhost:8501/")
page.get_by_text("Stopping slow computation...").wait_for(timeout=20000)
expect(page.get_by_text("Stopping slow computation...")).to_be_visible()
print(page.query_selector_all(".element-container")[0].inner_text())
# At this point, the first run of the thread completed
expect(page.get_by_text("Done, 2")).to_be_visible()
# conditional logic to make it work with older versions of Streamlit
if page.query_selector('div[data-testid="stMainMenu"]'):
page.get_by_test_id("stMainMenu").click()
elif page.query_selector('[id="MainMenu"]'):
main_menu = page.query_selector('[id="MainMenu"]')
if main_menu:
main_menu.click()
# Now we are re-running the app
page.get_by_text("Rerun").click()
# Some time delay so that the new thread is started and some elements are marked as stable
time.sleep(2)
# Expect some of the elements to be marked as stale
assert len(page.query_selector_all('div[data-stale="true"]')) == 2
expect(page.get_by_text("Stopping slow computation...")).to_be_visible()
expect(page.get_by_text("Done, 2")).to_be_visible()
# Stop the new thread
page.get_by_role("button", name="Stop").click()
time.sleep(2)
# NOTE: This should not pass. Stale elements shouldn't suddenly be marked as not stale
# since these are old results from a thread that was supposed to have been stopped
assert len(page.query_selector_all('div[data-stale="true"]')) == 0
# NOTE: This should not pass. It is unexpected that the results from the old thread
# are still showing up, despite us re-running
expect(page.get_by_text("Stopping slow computation...")).to_be_visible()
expect(page.get_by_text("Done, 2")).to_be_visible()
# wait for the thread to complete
time.sleep(10)
# Expect old elements to be cleared out
expect(page.get_by_text("Stopping slow computation...")).not_to_be_visible()
expect(page.get_by_text("Done, 2")).not_to_be_visible()
```
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.0.0 -> 1.40.1 all show this behavior (I didn't test versions before)
- Python version: 3.8
- Operating System: Mac
- Browser: Chrome
### Additional Information
_No response_ | 2hard
|
Title: Time format error?
Body: Generating Heat Map.....
Traceback (most recent call last):
File "/home/server/Scrivania/Personal-YouTube-PDF-Report-Generator/report.py", line 252, in <module>
visual.heat_map()
File "/home/server/Scrivania/Personal-YouTube-PDF-Report-Generator/report.py", line 46, in heat_map
Mon = html.dataframe_heatmap('Mon')
File "/home/server/Scrivania/Personal-YouTube-PDF-Report-Generator/parse.py", line 97, in dataframe_heatmap
times = self.find_times()
File "/home/server/Scrivania/Personal-YouTube-PDF-Report-Generator/parse.py", line 52, in find_times
dayOfWeek = datetime.datetime.strptime(time[0:12], '%b %d, %Y').strftime('%a')
File "/usr/lib/python3.6/_strptime.py", line 565, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "/usr/lib/python3.6/_strptime.py", line 362, in _strptime
(data_string, format))
ValueError: time data 'Dec 15, 2019' does not match format '%b %d, %Y'
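A hedged observation: 'Dec 15, 2019' does match '%b %d, %Y' under an English locale, but the paths ("Scrivania") suggest an Italian system locale, where `%b` expects localized month abbreviations. If that is the cause, forcing the C locale before parsing should help:
```python
import locale

locale.setlocale(locale.LC_TIME, "C")  # make %b parse English month abbreviations
```
| 1medium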
|
Title: Super slow iteration with trivial custom transform
Body: ### Describe the bug
Dataset is 10X slower when applying trivial transforms:
```
import time
import numpy as np
from datasets import Dataset, Features, Array2D
a = np.zeros((800, 800))
a = np.stack([a] * 1000)
features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")})
ds1 = Dataset.from_dict({"a": a}, features=features).with_format('numpy')
def transform(batch):
    return batch
ds2 = ds1.with_transform(transform)
%time sum(1 for _ in ds1)
%time sum(1 for _ in ds2)
```
```
CPU times: user 472 ms, sys: 319 ms, total: 791 ms
Wall time: 794 ms
CPU times: user 9.32 s, sys: 443 ms, total: 9.76 s
Wall time: 9.78 s
```
In my real code I'm using set_transform to apply some post-processing on-the-fly for the 2d array, but it significantly slows down the dataset even if the transform itself is trivial.
Related issue: https://github.com/huggingface/datasets/issues/5841
### Steps to reproduce the bug
Use code in the description to reproduce.
### Expected behavior
The trivial custom transform in the example should not slow down dataset iteration.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2 | 2hard
|
Title: Don't apply a layout algorithm by default when provided with matplotlib axes in Plot.on
Body: If `Plot.on` is provided with a matplotlib axes, it probably makes sense to defer the choice of a layout algorithm to the caller.
The expected behavior is less obvious when given a figure or subfigure. | 1medium
|
Title: Using load_dataset with data_files and split arguments yields an error
Body: ### Describe the bug
It seems the list of valid splits recorded by the package becomes incorrectly overwritten when using the `data_files` argument.
If I run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl")
```
then I get the error
```
ValueError: Unknown split "all_examples". Should be one of ['train'].
```
However, if I run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="train", name="Expert")
```
then I get
```
ValueError: Unknown split "train". Should be one of ['all_examples'].
```
### Steps to reproduce the bug
Run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl")
```
### Expected behavior
No error.
### Environment info
Python = 3.12
datasets = 3.2.0 | 1medium
|
Title: AttributeError: 'LogisticRegression' object has no attribute 'classes'
Body: Hi everyone, this is my first help request, so I'm sorry if I do something wrong.
**Description**
I used the ClassificationReport class to evaluate my classification models, and it worked until I set up `imblearn` (I don't think that changes anything, but the problem appeared when I installed it); now when I run the program I get the error
`AttributeError: 'LogisticRegression' object has no attribute 'classes'`
Here is the core of the code in question:
```
X_train, X_test, y_train, y_test = train_test_split(features, labels,test_size=0.25,shuffle=True, random_state = 0)
from sklearn.linear_model import LogisticRegression
logisticRegr = LogisticRegression(random_state=0)
from yellowbrick.classifier import ClassificationReport
visualizer = ClassificationReport(logisticRegr)
visualizer.fit(X_train, y_train) # Fit the visualizer and the model
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Draw/show the data
```
Can someone help me fix that?
Thank you
**Versions**
scikit-learn 0.24.1
yellowbrick 1.2
python 3.7
Environment Anaconda 1.9.12
<!-- If you have a question, note that you can email us via our listserve:
https://groups.google.com/forum/#!forum/yellowbrick -->
<!-- This line alerts the Yellowbrick maintainers, feel free to use this
@ address to alert us directly in follow up comments -->
@DistrictDataLabs/team-oz-maintainers
| 1medium
|
Title: questions about the form of the course
Body: Should we just read the course web pages, or are there also some videos? Thanks. | 3misc
|
Title: Unpin `transformers` when a newer version > 4.32.1 is released
Body: Ludwig also runs into the same issue flagged here: https://github.com/huggingface/transformers/issues/25805 | 1medium
|
Title: RocksIOError: ....../CURRENT: no such file or directory
Body: ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To reproduce
When creating hundreds of runs, I sometimes encounter the following error.

The script I run is as follows:

The command is: python create_runs.py -n 900
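Since the script is only visible in the screenshot, here is a hedged reconstruction of what the loop roughly does (details are assumptions):
```python
import argparse
from aim import Run

parser = argparse.ArgumentParser()
parser.add_argument("-n", type=int, default=900)
args = parser.parse_args()

for i in range(args.n):
    run = Run()  # each Run opens the rocksdb-backed repo
    run["index"] = i
```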
<!-- Reproduction steps. -->
### Environment
- Aim Version (e.g., 3.0.1): 3.16.1
- Python version: 3.9.15
- pip version: 22.3.1
- OS (e.g., Linux): Centos-8
| 2hard
|
Title: Refactor storage service to increase efficiency and stability
Body: # Motivation
Many problems exist with the current implementation of Mars storage.
1. No flexible way to control data location
When loading data from other endpoints, we may prefer multiple locations for fallback. Current implementation does not support this and may introduce unnecessary spill operations.
2. No support for remote reader / writer
Remote readers and writers provide flexible way to handle data transfer, enabling shuffle and client-side data manipulation without high memory cost. Current implementation only handles readers and writers locally.
3. Mix of lower-level code and higher-level code
Data transfer and spill should be implemented upon a common IO layer to make the whole storage more maintainable. Current implementation mixes all things up.
4. Race condition exists when spilling data on shuffle
In current implementation, when starting a reader and data spill is launched, it is possible that the data is spilled and we get a KeyError afterward.
5. Unnecessary IPC calls
In current implementation, we need to do quota request, put data info, update quota and deal with spill, all introducing more than one IPC call. The number of calls can be reduced to no more than 2.
# Design
The new design of Mars storage can be divided into two parts: the kernel storage and the user storage. The kernel storage is a thin wrap of storage backends plus necessary access controls. The user storage is constructed over the kernel storage with spill and transfer support.
<img width="745" alt="image" src="https://user-images.githubusercontent.com/8284922/155280541-1e061963-2045-45a6-bca3-89261cca3862.png">
## Kernel Storage
The principle of kernel storage is to make things simple. That is, the API does not handle complicated retries and redirections. When encountering storage-full or lock errors, it raises immediately (instead of waiting or retrying). `KernelStorageAPI` will look like
```python
class KernelStorageAPI:
@classmethod
async def create(cls, band_name: str, worker_address: str) -> "KernelStorageAPI":
"""
Create a band-specific API
"""
async def open_reader(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
) -> KernelStorageFileObject:
"""
Create a reader on a specific file
"""
async def open_writer(
self,
session_id: str,
data_key: str,
size: int,
level: StorageLevel = None,
) -> KernelStorageFileObject:
"""
Create a writer on a specific file
"""
async def delete(self, session_id: str, data_key: str, error: str = "raise"):
"""
Delete a file with specified keys
"""
async def get_capacity(self) -> Dict[StorageLevel, StorageCapacity]:
"""
Get capacities of levels of the band
"""
async def list(
self,
level: StorageLevel,
lock_free_only: bool = False,
) -> List[InternalDataInfo]:
"""
Get information of all data in the band
"""
async def put(
self,
session_id: str,
data_key: str,
obj: Any,
level: StorageLevel = None,
) -> InternalDataInfo:
"""
Put an object into the band storage
"""
async def get(
self,
session_id: str,
data_key: str,
conditions: List = None,
level: StorageLevel = None,
error: str = "raise",
) -> Any:
"""
Get an object into the band storage.
Slicing support is also provided.
"""
async def get_info(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
) -> List[InternalDataInfo]:
"""
Get internal information of an object
"""
async def pin(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
error: str = "raise",
):
"""
Pin specific data on a specific level.
The object will get a read-only lock until unpinned
"""
async def unpin(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
error: str = "raise",
):
"""
Unpin specific data on a specific level
"""
```
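For illustration, a hypothetical usage of this API (addresses and keys are placeholders, and the reader's `read` method is an assumption of this sketch):
```python
api = await KernelStorageAPI.create(band_name="numa-0", worker_address="127.0.0.1:12345")
await api.put("session_1", "chunk_0", obj=payload, level=StorageLevel.MEMORY)
reader = await api.open_reader("session_1", "chunk_0", level=StorageLevel.MEMORY)
data = await reader.read()  # KernelStorageFileObject assumed to expose read()
await api.delete("session_1", "chunk_0")
```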
A `StorageItemManagerActor` will hold all information necessary for kernel data management. It comprises four separate handlers, namely `QuotaHandler`, `LockHandler`, `MetaHandler` and `ReferenceHandler`, implemented separately to reduce potential call overhead. Note that this actor only deals with data metas, not the data itself. Data is handled in caller actors with storage backends.
## User Storage
User storage API wraps kernel storage and provides more capabilities including multiple level handling, spill and transfer. The API can look like
```python
StorageLevels = Optional[List[StorageLevel]]
class UserStorageAPI:
@classmethod
async def create(
cls,
session_id: str,
band_name: str,
worker_address: str,
) -> "UserStorageAPI":
"""
Create a session and band specific API
"""
async def fetch(
self,
data_key: str,
levels: StorageLevels = None,
band_name: str = None,
remote_address: str = None,
error: str = "raise",
):
"""
Fetch object from remote worker or load object from disk
"""
async def open_reader(
self,
data_key: str,
levels: StorageLevels = None,
) -> UserStorageFileObject:
"""
Create a reader on a specific file
"""
async def open_writer(
self,
data_key: str,
size: int,
levels: StorageLevels = None,
band_name: str = None,
) -> UserStorageFileObject:
"""
Create a writer on a specific file
"""
async def delete(self, data_key: str, error: str = "raise"):
"""
Delete a file with specified keys
"""
async def put(
self,
data_key: str,
obj: Any,
levels: StorageLevels = None,
band_name: str = None,
) -> InternalDataInfo:
"""
Put an object into the band storage
"""
async def get(
self,
data_key: str,
conditions: List = None,
levels: StorageLevels = None,
band_name: str = None,
error: str = "raise",
) -> Any:
"""
Get an object into the band storage.
Slicing support is also provided.
"""
async def get_info(
self,
data_key: str,
levels: StorageLevels = None,
band_name: str = None,
) -> List[InternalDataInfo]:
"""
Get internal information of an object
"""
async def pin(
self,
data_key: str,
levels: StorageLevels = None,
band_name: str = None,
error: str = "raise",
):
"""
Pin specific data on a specific level.
The object will get a read-only lock until unpinned
"""
async def unpin(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
band_name: str = None,
error: str = "raise",
):
"""
Unpin specific data on a specific level
"""
```
## Spill
To implement spill, we need a `SpillManagerActor` to coordinate spill actions. The actor will look like
```python
class SpillManagerActor(mo.StatelessActor):
@classmethod
def gen_uid(cls, band_name: str, storage_level: int) -> str:
pass
def notify_spillable(self, data_key: str, size: int):
"""
Register a spillable data key.
Only called when spill state is True.
"""
async def acquire_spill_lock(self, size: int) -> List[str]:
"""
Acquire certain size for spill and lock the actor
for spill. Keys will be returned for the caller to
spill.
"""
def release_spill_lock(self):
"""
Release the actor when spill ends.
"""
def wait_spill_state_change(self, last_state: bool) -> bool:
"""
Wait until the state of spill changes.
"""
```
Inside the actor, we define a boolean state indicating whether the storage level is under spill. When the state changes to True, the change is broadcast to all subscribers so that they report spillable data (via `notify_spillable`). When the storage is about to spill, the caller invokes `acquire_spill_lock` with the size it needs. The actor then enters the spill state, locks itself, and checks for keys to spill. Once enough size is available, it returns the keys to spill and the caller performs the spill. When the spill ends (finishes or encounters an error), the caller invokes `release_spill_lock` to release the lock for other callers. When there are no pending callers, the state turns back to False.
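A hedged sketch of that caller-side flow (the `spill_one` helper is assumed):
```python
keys = await spill_manager.acquire_spill_lock(size)
try:
    for key in keys:
        await spill_one(key)  # assumed helper: move one key to a lower level
finally:
    spill_manager.release_spill_lock()
```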
## Transfer / Remote IO
To implement data transfer, we propose a two-actor solution. We will add a `SenderManagerActor` and a `RemoteIOActor` to do all required things. The `SenderManagerActor` coordinates data transfer initiated between workers, and `RemoteIOActor` handles remote readers and writers, both for inter-worker data transfer and for `UserStorageAPI`.
When starting an inter-worker transfer, a request is sent to the `SenderManagerActor` on the worker hosting the data, asking it to send the data to the calling worker. It calls `RemoteIOActor.create_writer` on the receiver side and then streams the content through batched `write_data` calls.
`RemoteIOActor` will look like
```python
class RemoteIOActor(mo.StatelessActor):
@mo.batch
async def create_reader(
self,
session_id: str,
data_key: str,
levels: StorageLevels,
) -> List[str]:
pass
@mo.batch
async def create_writer(
self,
session_id: str,
data_key: str,
data_size: int,
levels: StorageLevels,
) -> List[str]:
pass
@mo.batch
async def read_data(
self,
session_id: str,
reader_key: str,
data_buffer: bytes,
size: int,
):
pass
@mo.batch
async def write_data(
self,
session_id: str,
writer_key: str,
data_buffer: bytes,
is_eof: bool,
):
pass
@mo.batch
async def close(
self,
session_id: str,
key: str,
):
pass
```
And `SenderManagerActor` will look like
```python
class SenderManagerActor(mo.StatelessActor):
@mo.extensible
async def send_batch_data(
self,
session_id: str,
data_keys: List[str],
address: str,
level: StorageLevel,
band_name: str = "numa-0",
block_size: int = None,
error: str = "raise",
):
pass
``` | 2hard
|
Title: Failed to use {{key}}={{value}} for nested JSON
Body: Hi! I want to write a function to [create a GitHub gist](https://docs.github.com/en/rest/gists/gists?apiVersion=2022-11-28#create-a-gist). This is what I wrote:
```fish
function gists__new --description "Create a gist for the authenticated user"
argparse l/login= p/pat= d/description= P/public f/file= c/content= -- $argv
set login $_flag_login
set pat $_flag_pat
set description $_flag_description
set public false
set --query _flag_public && set public true
set file $_flag_file
set content $_flag_content
set body "$(jq --null-input '{
"description": $description,
"public": $public,
"files": {
($file): {
"content": $content
}
}
}' \
--arg description $description \
--arg public $public \
--arg file $file \
--arg content $content)"
https --auth "$login:$pat" POST api.github.com/gists \
Accept:application/vnd.github+json \
X-GitHub-Api-Version:$api_version \
--raw $body
end
```
It works, but requires `jq`. According to [HTTPie docs](https://httpie.io/docs/cli/nested-json) I can get rid of it. I tried to use `{{key}}={{value}}` but failed:
``` fish
function gists__new --description "Create a gist for the authenticated user"
argparse l/login= p/pat= d/description= P/public f/file= c/content= -- $argv
set login $_flag_login
set pat $_flag_pat
set description $_flag_description
set public false
set --query _flag_public && set public true
set file $_flag_file
set content $_flag_content
https --auth "$login:$pat" POST api.github.com/gists \
"description=$description" \
"public=$public" \
"files[$file][content]=$content" \
Accept:application/vnd.github+json \
X-GitHub-Api-Version:$api_version
end
```
The response I get is:
```json
{
"documentation_url": "https://docs.github.com/rest/gists/gists#create-a-gist",
"message": "Invalid request.\n\nInvalid input: object is missing required key: files."
}
```
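For reference, `https --offline` prints the request without sending it, which makes it easy to check what body the nested-JSON syntax actually builds (assuming HTTPie >= 3.0, where nested JSON landed):
```fish
https --offline POST api.github.com/gists \
    description=test \
    "files[hello.txt][content]=hi"
```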
What am I doing wrong? | 1medium
|
Title: [BUG] interpretable predictor interpretable_models_summary print_interpretable_rules not available
Body: **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
Following the usage example from the documentation (of version 0.5.1?), training works but there is no summary and no interpretable rules
https://auto.gluon.ai/0.5.1/tutorials/tabular_prediction/tabular-interpretability.html
**Expected behavior**
As described in the documentation.
**To Reproduce**
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
```
from autogluon.tabular import TabularDataset, TabularPredictor
train_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')
subsample_size = 500 # subsample subset of data for faster demo, try setting this to much larger values
train_data = train_data.sample(n=subsample_size, random_state=0)
train_data.head()
predictor = TabularPredictor(label='class')
predictor.fit(train_data, presets='interpretable')
predictor.leaderboard()
predictor.interpretable_models_summary()
predictor.print_interpretable_rules() # can optionally specify a model name or complexity threshold
```
**Screenshots / Logs**
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 predictor.interpretable_models_summary()
AttributeError: 'TabularPredictor' object has no attribute 'interpretable_models_summary'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[4], line 1
----> 1 predictor.print_interpretable_rules()
AttributeError: 'TabularPredictor' object has no attribute 'print_interpretable_rules'
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
INSTALLED VERSIONS
------------------
date : 2024-01-14
time : 12:06:25.974741
python : 3.10.12.final.0
OS : Linux
OS-release : 6.2.0-1017-aws
Version : #17~22.04.1-Ubuntu SMP Fri Nov 17 21:07:13 UTC 2023
machine : x86_64
processor : x86_64
num_cores : 128
cpu_ram_mb : 253829.41015625
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 98683
accelerate : 0.21.0
async-timeout : 4.0.3
autogluon : 1.0.0
autogluon.common : 1.0.0
autogluon.core : 1.0.0
autogluon.features : 1.0.0
autogluon.multimodal : 1.0.0
autogluon.tabular : 1.0.0
autogluon.timeseries : 1.0.0
boto3 : 1.34.16
catboost : 1.2.2
defusedxml : 0.7.1
evaluate : 0.4.1
fastai : 2.7.13
gluonts : 0.14.3
hyperopt : 0.2.7
imodels : 1.4.1
jinja2 : 3.0.3
joblib : 1.3.2
jsonschema : 4.20.0
lightgbm : 4.1.0
lightning : 2.0.9.post0
matplotlib : 3.8.2
mlforecast : 0.10.0
networkx : 3.2.1
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.26.3
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
orjson : 3.9.10
pandas : 2.1.4
Pillow : 10.2.0
psutil : 5.9.7
PyMuPDF : None
pytesseract : 0.3.10
pytorch-lightning : 2.0.9.post0
pytorch-metric-learning: 1.7.3
ray : 2.6.3
requests : 2.31.0
scikit-image : 0.20.0
scikit-learn : 1.3.2
scikit-learn-intelex : None
scipy : 1.11.4
seqeval : 1.2.2
setuptools : 60.2.0
skl2onnx : None
statsforecast : 1.4.0
statsmodels : 0.14.1
tabpfn : None
tensorboard : 2.15.1
text-unidecode : 1.3
timm : 0.9.12
torch : 2.0.1
torchmetrics : 1.1.2
torchvision : 0.15.2
tqdm : 4.65.2
transformers : 4.31.0
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
```
</details>
| 1medium
|
Title: [Feature Request] python api
Body: Currently, the tool supports inpainting through the CLI and UI, but having a Python API would be extremely helpful since it gives better control over the process.
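For illustration, a sketch of what such an API could look like; every name below is invented to describe the request, not an existing interface:
```python
from lama_cleaner import InpaintModel  # hypothetical import

model = InpaintModel(name="lama", device="cpu")  # hypothetical class
result = model.inpaint(image, mask)  # would return the inpainted image
```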
Any possibility of working on a python api? | 1medium
|
Title: Default to adding a newline for everything before the `extra_tooltip_text` when binding an action to a button
Body: Follow up issue from the feedback at https://github.com/napari/napari/pull/6794#discussion_r1592792932
> Nice! Now wondering if we should default to adding a newline for everything before the `extra_tooltip_text`...
_Originally posted by @brisvag in https://github.com/napari/napari/pull/6794#discussion_r1595128310_
> That makes sense! From a quick check seems like the only other button that has some extra text is pan/zoom:
>
> 
>
> Maybe something like `Temporarily re-enable by holding Space` could be used as text in case the extra text is added always in a new line?
>
> 
>
> Also, should an issue be made to tackle that later or maybe is something worthy to be done here?
_Originally posted by @dalthviz in https://github.com/napari/napari/pull/6794#discussion_r1595648027_
> I mean I feel like the new line would be safe as a default...
> but I don't think I would change it in this PR--if at all. It's easy to add the new line, harder to remove it if someone does want a one-liner.
_Originally posted by @psobolewskiPhD in https://github.com/napari/napari/pull/6794#discussion_r1595982004_ | 1medium
|
Title: Query to only return specific fields set
Body: In order to reduce the amount of data being transferred from a resource, is it possible to provide a query argument that returns only a set of fields?
Sometimes, we don't need all attributes of an object but a couple of them.
It would require too many custom routes to expose the different sets of attributes we would need.
Here are a few examples to describe it:
```
/users?include_fields=['first_name', 'last_name']
/users?include_fields=['email']
/users?include_fields=['city', 'country']
```
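A minimal sketch of how such a parameter could be honored server-side (a hypothetical Flask-style handler; names are illustrative, not an existing API):
```python
from flask import Flask, jsonify, request

app = Flask(__name__)

USERS = [{"id": 1, "first_name": "Ada", "last_name": "Lovelace",
          "email": "ada@example.com", "city": "London", "country": "UK"}]

@app.route("/users")
def list_users():
    # e.g. /users?include_fields=first_name,last_name
    fields = request.args.get("include_fields")
    if not fields:
        return jsonify(USERS)
    wanted = set(fields.split(","))
    return jsonify([{k: v for k, v in u.items() if k in wanted} for u in USERS])
```
Each response would then carry only the requested attributes, keeping payloads small. | 1medium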
|
Title: DataFilesNotFoundError for datasets LM1B
Body: ### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b
### Steps to reproduce the bug
`dataset = datasets.load_dataset('lm1b', split=split)`
### Expected behavior
```
Traceback (most recent call last):
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/word_freq.py", line 13, in <module>
train_data = DiffusionLoader(tokenizer=tokenizer).my_load(task_name='lm1b', splits=['train'])[0]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in my_load
return [self._load(task_name, name) for name in splits]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in <listcomp>
return [self._load(task_name, name) for name in splits]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 13, in _load
dataset = datasets.load_dataset('lm1b', split=split)
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2594, in load_dataset
builder_instance = load_dataset_builder(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2266, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1827, in dataset_module_factory
).get_module()
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1040, in get_module
module_name, default_builder_kwargs = infer_module_for_data_files(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 598, in infer_module_for_data_files
raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in lm1b
```
### Environment info
datasets: 2.20.0 | 1medium
|
Title: Need a better way to switch backend
Body:
Currently, it is not easy to switch to the ray execution backend.
We don't want to introduce lots of new APIs, such as `new_cluster` and `new_ray_session` in https://github.com/mars-project/mars/blob/master/mars/deploy/oscar/ray.py for Mars on Ray.
Instead, we want to reuse the existing Mars APIs.
For Mars,
```python
new_cluster(worker_num=2, worker_cpu=2) # The default backend is mars
```
For Ray,
```python
new_cluster(backend="ray", worker_num=2, worker_cpu=2)
```
| 1medium
|
Title: request.post command with Content-Type set to application/x-www-form-urlencoded expect json from FlareSolverr server
Body: ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: I freshly `git pull`
- Last working FlareSolverr version: Only used the `git` version
- Operating system: GNU/Linux
- Are you using Docker: [yes/no] No
- FlareSolverr User-Agent (see log traces or / endpoint): User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
- Are you using a VPN: [yes/no] No
- Are you using a Proxy: [yes/no] No
- Are you using Captcha Solver: [yes/no] No
- If using captcha solver, which one:
- URL to test this issue: https://www3.yggtorrent.qa/user/login
```
### Description
Hello,
I'm trying to perform a `POST` request using the `request.post` command. The parameters of my request are the following:
```
{'cmd': 'request.post', 'url': 'https://www3.yggtorrent.qa/user/login', 'postData': 'id=someuser&pass=somepassword&ci_csrf_token=', 'session': '1234', 'maxTimeout': 60000, 'returnOnlyCookies': False}
```
Before this request I created a session (so I'm using the same `session_id`) and passed the challenge.
But while sending the `request.post` command above, I got the following error from FlareSolverr logs:
```
2024-02-03 21:21:00 ERROR 'NoneType' object is not iterable
2024-02-03 21:21:00 INFO 127.0.0.1 POST http://127.0.0.1:8191/v1 500 Internal Server Error
```
So looking at the source code I discovered the following in `FlareSolverr.py` (from line 48):
```
@app.post('/v1')
def controller_v1():
"""
Controller v1
"""
req = V1RequestBase(request.json)
res = flaresolverr_service.controller_v1_endpoint(req)
if res.__error_500__:
response.status = 500
return utils.object_to_dict(res)
```
The issue is the following line:
```
req = V1RequestBase(request.json)
```
So a body sent with `application/x-www-form-urlencoded` does not seem to produce the expected JSON. As a result, `request.json` is empty, which explains the error logs.
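A hypothetical fallback sketch (assuming the Bottle-style API the snippet resembles; untested):
```python
# Fall back to form fields when the body is not JSON
req_body = request.json or dict(request.forms)
req = V1RequestBase(req_body)
```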
### Logged Error Messages
```text
2024-02-03 21:21:00 ERROR 'NoneType' object is not iterable
2024-02-03 21:21:00 INFO 127.0.0.1 POST http://127.0.0.1:8191/v1 500 Internal Server Error
```
### Screenshots
_No response_ | 1medium
|
Title: BLD: Release Xorbits docker images for multi python versions that we support
Body:
The Docker image should support the multiple Python versions that we support.
| 1medium
|
Title: [Feature Request] mic_code instead of exchange as a parameter when fetching data
Body: It would be nice to use the `mic_code` parameter to differentiate between markets when fetching time_series, live & eod prices.
| 0easy
|
Title: Topic 6 russian lecture notebook is in english
Body: https://github.com/Yorko/mlcourse.ai/blob/main/jupyter_russian/topic06_features/topic6_feature_engineering_feature_selection_english.ipynb | 0easy
|
Title: Weekly data - last week missing
Body: Hi all,
The code below gives me data until the very last day of last week (13th of Jan):
```
yfObj = yf.Ticker(stock)
data = yfObj.history(period="3y")
```
But if I want weekly data, using the code below:
```
data = yfObj.history(period="3y", interval="1wk")
```
It gives me data until the 9th of January, so for some reason the last weekly data point isn't available.
If I look on Yahoo manually though (historical data, weekly), I do see the data for the 13th of January.
Any idea if there's a setting or parameter that I could include? | 1medium
|
Title: [Bug]: ModuleNotFoundError: 'flair.trainers.plugins.functional' on git-installed master
Body: ### Describe the bug
When installing the flair master branch via pypi & git, we get an ModuleNotFound error.
### To Reproduce
```python
# install the master branch first:
#   pip install git+https://github.com/flairNLP/flair.git
from flair.models import TARSClassifier
```
### Expected behavior
I can import any module and use flair normally.
### Logs and Stack traces
```stacktrace
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\bened\anaconda3\envs\steepmc_clf\lib\site-packages\flair\__init__.py", line 28, in <module>
from . import ( # noqa: E402 import after setting device
File "C:\Users\bened\anaconda3\envs\steepmc_clf\lib\site-packages\flair\trainers\__init__.py", line 2, in <module>
from .trainer import ModelTrainer
File "C:\Users\bened\anaconda3\envs\steepmc_clf\lib\site-packages\flair\trainers\trainer.py", line 19, in <module>
from flair.trainers.plugins import (
File "C:\Users\bened\anaconda3\envs\steepmc_clf\lib\site-packages\flair\trainers\plugins\__init__.py", line 2, in <module>
from .functional.amp import AmpPlugin
ModuleNotFoundError: No module named 'flair.trainers.plugins.functional'
```
### Screenshots
_No response_
### Additional Context
I suppose this happens due to missing `__init__.py` files in https://github.com/flairNLP/flair/tree/master/flair/trainers/plugins/functional and https://github.com/flairNLP/flair/tree/master/flair/trainers/plugins/loggers
leading to those folders not being recognized as modules and therefore not being found/installed as code.
### Environment
#### Versions:
##### Pytorch
2.0.0+cu117
##### flair
`ModuleNotFoundError`
##### Transformers
4.28.1
#### GPU
True
| 1medium
|
Title: How to add FPS and mAPs evaluation metrics in YOLOv5?
Body: ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How can I add FPS and mAPs evaluation metrics to YOLOv5?
### Additional
_No response_ | 1medium
|
Title: [DOCS] When using fob.compute_visualization for object similarity, does it compute the embedding for each object instance?
Body:
### URL(s) with the issue
Please provide a link to the documentation entry in question.
### Description of proposal (what needs changing)
Provide a clear description. Why is the proposed documentation better?
### Willingness to contribute
The FiftyOne Community encourages documentation contributions. Would you or another member of your organization be willing to contribute a fix for this documentation issue to the FiftyOne codebase?
- [ ] Yes. I can contribute a documentation fix independently
- [ ] Yes. I would be willing to contribute a documentation fix with guidance from the FiftyOne community
- [ ] No. I cannot contribute a documentation fix at this time
| 0easy
|
Title: S3 support
Body: What is the best way to transparently store files/images on S3? Could anybody share a simple demo, please?
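For reference, a minimal boto3 upload sketch (bucket and paths are placeholders, and this is generic boto3 usage rather than anything framework-specific):
```python
import boto3

s3 = boto3.client("s3")
s3.upload_file("local/image.png", "my-bucket", "images/image.png")
```
A transparent integration would hide calls like these behind the app's storage layer. | 1medium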
|
Title: ImportError: cannot import name 'build_model'
Body: Hi team,
I have installed DeepPavlov version 0.1.0 but I am unable to import build_model. PFA screenshot for details.
Configuration: Windows 10

| 1medium
|
Title: ImportError: numpy.core.multiarray failed to import
Body: I got an ImportError when importing hypertools; my numpy is 1.12.1 on Windows (or 1.14 on Mac). How can I run it?
When I import hypertools a second time, though, the error disappears. | 1medium
|
Title: largelisttype not supported (.from_polars())
Body: ### Describe the bug
The following code fails because LargeListType is not supported.
This is especially a problem for .from_polars since polars uses LargeListType.
### Steps to reproduce the bug
```python
import datasets
import polars as pl
df = pl.DataFrame({"list": [[]]})
datasets.Dataset.from_polars(df)
```
### Expected behavior
Convert LargeListType to list.
### Environment info
- `datasets` version: 2.19.1.dev0
- Platform: Linux-6.8.7-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.3.1 | 1medium
|
Title: Conda enviroments at customized locations
Body: When the VBA code tries to load the xlwings DLL and a Conda Env is specified, it tries to load the DLL from:
```
{Conda Path}\envs\{Conda Env}\xlwings...dll
```
In Windows, `Conda Path` usually is something like `C:\ProgramData\Miniconda3` and the default path for the conda environments is `C:\ProgramData\Miniconda3\envs\`, and so everything works.
But when we customize the path where the conda enviroments are created (with `conda config -add envs_dirs <path>`), it breaks, because now the DLL is at `<path>\{Conda Env}\xlwings...dll`. | 1medium
|
Title: post view has a little bug
Body: Hi Miguel,
I am studying Flask using your tutorials. Thanks for your helpful book and code.
I found a little bug in the `post` view in the views.py file of the main blueprint: the view doesn't check whether the current user has `Permission.COMMENT`. I noticed that you removed the comment form from the template when the current user has no comment permission, but I think the `post` view should have its own validation logic. If an anonymous user sends a POST request to this view, an error is raised:
`AttributeError: 'AnonymousUser' object has no attribute '_sa_instance_state'`
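A sketch of the kind of guard I mean, following the book's Permission API (the exact placement is my assumption):
```python
from flask import abort
from flask_login import current_user

@main.route('/post/<int:id>', methods=['GET', 'POST'])
def post(id):
    post = Post.query.get_or_404(id)
    form = CommentForm()
    if form.validate_on_submit():
        if not current_user.can(Permission.COMMENT):
            abort(403)  # reject comments from users without permission
        ...
```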
Thanks. | 1medium
|
Title: Integrate VTK Visualization into Taipy as an Extension
Body: ### Description:
VTK (Visualization Toolkit) provides robust 3D visualization capabilities widely used in domains like medical imaging, computational fluid dynamics, and scientific data visualization. Integrating similar functionality directly into Taipy as an optional extension would greatly expand Taipy’s visualization repertoire, enabling users to build rich 3D interactive graphics within the Taipy environment.
### Proposed Solution:
- Create a Taipy extension or component wrapper that can be embedded directly within Taipy pages.
- Provide a straightforward API for developers to:
- Load 3D datasets.
- Interactively manipulate views (e.g., rotate, zoom).
- Apply filters, color maps, and advanced rendering options.
- Support bidirectional communication between the visualization component and Taipy states/variables, similar to how Taipy integrates with other components.
Example Use Case: A medical researcher might want to visualize MRI scans in 3D, slice through volumetric data, or apply custom segmentations. An engineer might want to display complex CFD simulations, adjusting parameters on the fly and seeing updated 3D renderings without leaving the Taipy interface.
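For illustration, a sketch of what using such a component could look like (the `vtk_viewer` element and its properties are invented for this proposal; no such element exists yet):
```python
from taipy.gui import Gui

mesh_path = "brain_scan.vtk"  # placeholder dataset

page = """
<|{mesh_path}|vtk_viewer|color_map=viridis|>
"""

Gui(page).run()
```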
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by a unit test.
- [ ] Check code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 2hard
|
Title: An error is reported during installation, indicating that Organization already exists
Body: Traceback (most recent call last):
File "/app/api/seed.py", line 128, in <module>
org_obj = create_org_by_org_or_uuid(
File "/app/api/helpers.py", line 95, in create_org_by_org_or_uuid
raise HTTPException(status_code=404, detail="Organization already exists")
fastapi.exceptions.HTTPException
| 1medium
|
Title: `parse_config` doesn't allow adding extra variables
Body: Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first.
**What problem are we trying to solve?**:
```
1. The `parse_config` function from `deeppavlov.core.commands.utils` doesn't allow me to add extra vars or override existing ones. The only way to override vars is to set an environment variable, which is very inconvenient. I can rewrite this function so it allows adding extra vars.
2. Variables in config files are substituted by hand. Why don't you use an industry-standard template engine like Jinja2?
```
**How can we solve it?**:
```
1. By adding a new parameter to the function.
2. By using Jinja.
```
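A sketch of proposal 1 (the `overrides` parameter is the proposed addition, not an existing argument):
```python
from deeppavlov.core.commands.utils import parse_config

config = parse_config("my_config.json", overrides={"MODELS_PATH": "/data/models"})
```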
**Are there other issues that block this solution?**:
```
As far as I can see - none.
```
If you are OK with it, I will code it myself and open a PR. | 1medium
|
Title: The query_compiler.merge reconstructs the Right dataframe for every partition of Left Dataframe
Body: `query_compiler.merge` reconstructs the right dataframe from its partitions for every partition of the left dataframe; the concat operation results in high memory consumption when the right dataframe is large.
A possible option is to combine the right dataframe partitions into a single-partition dataframe by calling a remote function. This single-partition dataframe is then passed to each partition of the left dataframe, avoiding the reconstruction in every worker during the merge.
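A hedged sketch of the proposal (Ray-style; names are illustrative):
```python
import pandas
import ray

@ray.remote
def combine(*parts):
    return pandas.concat(parts)  # build the right frame exactly once

right_ref = combine.remote(*right_partitions)
# each left-partition merge task now receives right_ref as a shared object
```
The merge tasks then share one materialized right frame instead of each re-concatenating it. | 2hard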
|
Title: [bug] - Dynaconf stop list variables and does not recognize develop variables in my env
Body: Dynaconf doesn't list variables and throws a validation error.
* I change my config from mac +zsh to manjaro + fish
* I run my venv
* I run dynaconf -i config.settings list
1. Having the following folder structure
> tree 09:10:07
.
├── alembic
│ ├── env.py
│ ├── __pycache__
│ │ └── env.cpython-39.pyc
│ ├── README
│ ├── script.py.mako
│ └── versions
├── alembic.ini
├── api
│ ├── api_v1
│ │ └── __init__.py
│ ├── deps.py
│ ├── __init__.py
│ └── tests
│ ├── __init__.py
│ ├── test_celery.py
│ ├── test_items.py
│ ├── test_login.py
│ └── test_users.py
├── backend_pre_start.py
├── celeryworker_pre_start.py
├── config.py
├── config.py.old
├── conftest.py
├── connections
│ ├── fetcher.py
│ └── __init__.py
├── constants
│ ├── core.py
│ ├── __init__.py
│ └── shopping_cart_checkout.py
├── crud
│ ├── base.py
│ └── tests
│ ├── __init__.py
│ ├── test_item.py
│ └── test_user.py
├── db
│ ├── base_class.py
│ ├── base.py
│ ├── __init__.py
│ └── __pycache__
│ ├── base_class.cpython-39.pyc
│ ├── base.cpython-39.pyc
│ └── __init__.cpython-39.pyc
├── ext
│ ├── celery_app.py
│ ├── config.py
│ ├── database.py
│ ├── __init__.py
│ └── security.py
├── init_db.py
├── initial_data.py
├── __init__.py
├── main.py
├── models
├── __pycache__
│ └── config.cpython-39.pyc
├── schemas
├── settings.toml
├── test.py
├── tests
│ ├── __init__.py
│ └── utils
│ ├── __init__.py
│ ├── item.py
│ ├── user.py
│ └── utils.py
├── tests_pre_start.py
├── utils.py
└── worker.py
2. Having the following config files:
dynaconf runs in the app folder, and settings.toml and .secrets.toml are also in the app folder
**/app/.secrets.toml**
```toml
[development]
# Postgres
POSTGRES_SERVER=172.15.0.2
POSTGRES_USER=user
POSTGRES_PASSWORD=pass
POSTGRES_DB=mydb
```
and
**/app/settings.toml**
```toml
[default]
STACK_NAME="paymentgateway-com-br"
BACKEND_CORS_ORIGINS=["http://dev.domain.com"]
PROJECT_NAME="Microservice Gateway"
SECRET_KEY="dc24d995bf6c"
FIRST_SUPERUSER="[email protected]"
FIRST_SUPERUSER_PASSWORD="123"
SMTP_TLS=true
SMTP_PORT=587
SMTP_HOST=""
SMTP_USER=""
SMTP_PASSWORD=""
EMAILS_FROM_EMAIL="[email protected]"
USERS_OPEN_REGISTRATION=false
SENTRY_DSN=""
API_V1_STR="/payment-api/v1"
# 60 minutes * 24 hours * 8 days = 8 days
ACCESS_TOKEN_EXPIRE_MINUTES=11520
[development]
DOMAIN="localhost"
TRAEFIK_PUBLIC_NETWORK="traefik-public"
TRAEFIK_TAG="paymentgateway.app.com.br"
TRAEFIK_PUBLIC_TAG="traefik-public"
DOCKER_IMAGE_BACKEND="docker"
BACKEND_CORS_ORIGINS=["http://localhost", "https://localhost"]
# Postgres
SQLALCHEMY_DATABASE_URI="..."
```
3. Having the following app code:
```python
from typing import Any
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from dynaconf import settings
from loguru import logger
def get_engine():
SQLALCHEMY_DATABASE_URL = settings.SQLALCHEMY_DATABASE_URI
engine = create_engine(
SQLALCHEMY_DATABASE_URL,
)
return engine
def get_session():
_engine = get_engine()
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=_engine)
return SessionLocal
```
**/app/main.py**
```python
from fastapi import FastAPI
from starlette.middleware.cors import CORSMiddleware
from api.api_v1.api import api_router
from dynaconf import settings
import logging
import sys
from loguru import logger
app = FastAPI(
title=settings.PROJECT_NAME,
openapi_url=f"{settings.API_V1_STR}/openapi.json",
docs_url="/payment-api",
redoc_url=None
)
if settings.BACKEND_CORS_ORIGINS:
app.add_middleware(
CORSMiddleware,
allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(api_router, prefix=settings.API_V1_STR)
```
4. Executing under the following environment
<details><summary>development</summary>
```fish
$ cd app
$ dynaconf -i config.settings list
```
</details>
**Expected behavior**
List all settings variables.
**Environment (please complete the following information):**
- OS: Manjaro BSPWN 20.0.1
- Dynaconf Version 3.1.2 and 3.1.4
- Frameworks in use (FastAPI - 0.61.2)
**Additional context**
Error:
```
> dynaconf -i config.settings list 09:06:22
Traceback (most recent call last):
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/toml/decoder.py", line 253, in loads
try:n=K.load_line(C,G,T,P)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/toml/decoder.py", line 355, in load_line
if P==A[-1]:raise ValueError('Invalid date or number')
ValueError: Invalid date or number
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/bin/dynaconf", line 8, in <module>
sys.exit(main())
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 221, in __call__
def __call__(A,*B,**C):return A.main(*B,**C)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 205, in main
H=E.invoke(F)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 345, in invoke
with C:return F(C.command.invoke(C))
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 288, in invoke
if A.callback is not _A:return ctx.invoke(A.callback,**ctx.params)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 170, in invoke
with G:return A(*B,**E)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/cli.py", line 442, in _list
cur_env = settings.current_env.lower()
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/base.py", line 145, in __getattr__
self._setup()
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/base.py", line 195, in _setup
self._wrapped = Settings(
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/base.py", line 259, in __init__
self.execute_loaders()
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/base.py", line 990, in execute_loaders
settings_loader(
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/loaders/__init__.py", line 126, in settings_loader
loader["loader"].load(
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/loaders/toml_loader.py", line 31, in load
loader.load(filename=filename, key=key, silent=silent)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/loaders/base.py", line 62, in load
source_data = self.get_source_data(files)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/loaders/base.py", line 83, in get_source_data
content = self.file_reader(open_file)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/toml/decoder.py", line 83, in load
try:return loads(f.read(),B,A)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/toml/decoder.py", line 254, in loads
except ValueError as Y:raise TomlDecodeError(str(Y),I,N)
dynaconf.vendor.toml.decoder.TomlDecodeError: Invalid date or number (line 3 column 1 char 25)
```
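For reference, the decoder error points at TOML parsing (line 3 of a file). My guess, an assumption rather than a confirmed diagnosis, is the unquoted values in .secrets.toml; written as valid TOML strings they would read:
```toml
[development]
POSTGRES_SERVER = "172.15.0.2"
POSTGRES_USER = "user"
POSTGRES_PASSWORD = "pass"
POSTGRES_DB = "mydb"
```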
| 1medium
|
Title: [Feature request] Recognise and offer fixes for missing `git clone` when given a url or SSH ending in .git
Body: It would be neat if a fix were offered when I pasted a `git` URL intending to clone it but forgot to add `git clone` at the start. There's a fix for when `git clone` is present twice, so I think it's reasonable to have one for when it's never present.
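For illustration, a sketch following thefuck's `match`/`get_new_command` rule convention (details are assumptions):
```python
def match(command):
    # the typed command is just a URL or SSH remote ending in .git
    return command.script.endswith('.git') and not command.script.startswith('git clone')

def get_new_command(command):
    return 'git clone ' + command.script
```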
If someone can point me to a getting started page so I can find my way around the project and learn the basics, I'm happy to create an implementation and PR for this myself.
| 0easy
|
Title: Requesting empty scope removes scope from response
Body: When this [request is made](https://httpie.org/doc#forms):
http -a client:secret -f :/auth/token grant_type=client_credentials scope=
I get a response without `scope`, even though it was given in the request.
Code responsible for this is here:
https://github.com/lepture/authlib/blob/master/authlib/oauth2/rfc6750/wrappers.py#L98-L99
Is this a bug? I would expect `scope` to be present in the response since it was given in the request, even if the given scope was an empty string. | 1medium
|
Title: Get indexation setup complete for benchmarking workflow
Body: - [x] Explore BKTree implementation
- [x] Explore `shelve` implementation
> `shelve` seems to have problems scaling to larger memory collections
>
> If problems persist, we will move to exploring some fast, local database solution
- [x] Explore Fallbacks/Brute Force/(other unoptimized search forms in worst case) | 1medium
|
Title: Make it possible to require whistleblowers to upload files before proceeding with the completion of the submission
Body: **Describe the bug**
In questionnaires, if an attachment field is set as required, no warning is given and the report can be sent without the attachment.
**To Reproduce**
Steps to reproduce the behavior:
1. On a questionnaire set an attachment field as required.
2. When you try to file a report, the aforementioned required field is ignored; if there are other errors, it is also missing from the error list.
3. Same if there are multiple files
**Expected behavior**
A warning about the missing attachment should be shown.
**Desktop (please complete the following information):**
- OS: w10
- Browser: firefox 94
| 1medium
|
Title: Add friendly view of PyTables structured HDF5 files
Body: pandas uses PyTables for HDF5 outputs. This creates a lot of extra structure (which I don't totally understand) that makes it hard to view idiomatically in visidata.
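For reference, a quick way to see the extra structure PyTables adds (the file name is a placeholder):
```python
import h5py

with h5py.File("frame.h5", "r") as f:
    # walk every group/dataset the pandas/PyTables writer created
    f.visititems(lambda name, obj: print(name, obj))
```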
"Unpacking" the PyTables schema would make this tool incredibly useful for peeking at HDF5 files created by pandas.
| 1medium
|
Title: Convert api and cli over to using user groups instead of repo groups
Body: | 2hard
|
Title: Support local path when migrating from wandb to aim
Body: ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
User can specify the local wandb directory path when migrating from wandb to aim.
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
When using the command `aim convert wandb --entity 'my_team' --project 'test_project'` to migrate, the server needs to be able to access the network.
However, since some private servers cannot connect to the Internet, the command cannot be executed there. It would be more flexible to be able to migrate from a given local path without network access.
### Pitch
<!-- A clear and concise description of what you want to happen. -->
`aim convert wandb --run $LOCAL_PATH_TO_WANDB_RUN`
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| 1medium
|
Title: Questions about the dev version
Body: Hi,
Sorry to bother you. I currently have a high number of users. The only problem I have is with enabling fragment: when users update their subscription link or exit the apps, fragment gets disabled and they have to set it up again manually.
I just saw there is a version called dev.
If I switch to this version, could it have bugs right now?
I have set up 12 servers as nodes; do I need to do anything on those servers?
What usually needs to be done so that my users don't run into problems?
| 1medium
|
Title: [ASK] New Python recommender systems library - LibRecommender
Body: Hi!
just FYI, a new Python library that includes some interesting reco algorithms was recently added to Github: https://github.com/massquantity/LibRecommender
Maybe it would be interesting to include some usecases for some of the included algos that are not covered yet by this repo.
thank you!
| 3misc
|
Title: DNS Resolver: Add `getaddrinfo` fallback
Body: #### Problem Description
Based on https://github.com/mitmproxy/mitmproxy/issues/7064, hickory's functionality to determine the OS name servers seems to have issues on both Linux and Windows. As much as I prefer hickory, we should have a fallback that uses `getaddrinfo`. This restores at least some basic functionality.
Implementation-wise, this likely means we should change `DnsResolver.name_servers` to return an empty list if it's unable to determine servers. This way it's cached (whereas an exception is not).
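A hedged sketch of such a fallback (not mitmproxy's actual code):
```python
import socket

def getaddrinfo_fallback(host: str) -> list[str]:
    # getaddrinfo asks the OS resolver directly, bypassing hickory entirely
    infos = socket.getaddrinfo(host, None, proto=socket.IPPROTO_TCP)
    return [info[4][0] for info in infos]
```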
#### Steps to reproduce the behavior:
1. Run mitmproxy in WireGuard mode on a setup where hickory is unable to determine nameservers. | 1medium
|
Title: [request] ARM64 image
Body: I'd like to give this ago on my Pi4. Is there / will there be /could there be a version which runs on ARM64? | 1medium
|
Title: Fix Ivy Failing Test: numpy - statistical.sum
Body: | 1medium
|
Title: [FEATURE] Mix MIND utils
Body: ### Description
<!--- Describe your expected feature in detail -->
DRY in Mind:
- https://github.com/microsoft/recommenders/blob/master/reco_utils/dataset/mind.py
- https://github.com/microsoft/recommenders/blob/staging/reco_utils/recommender/newsrec/newsrec_utils.py
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
### Other Comments
| 1medium
|
Title: OpenCV function not implemented
Body: I got unspecified error when trying to run opencv following this basic OpenCV [getting started](https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_image_display/py_image_display.html#display-image). The error is:
```
OpenCV(3.4.1) Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage, file /root/opencv/modules/highgui/src/window.cpp, line 636
Traceback (most recent call last):
File "image_get_started.py", line 8, in <module>
cv2.imshow("image", img)
cv2.error: OpenCV(3.4.1) /root/opencv/modules/highgui/src/window.cpp:636: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage
```
I was using the `keras-cpu-py36` image while adding OpenCV in its Dockerfile. Looking back, `libgtk2.0-dev` and `pkg-config` weren't yet included when OpenCV was built. | 1medium
|
Title: httpguard protocol bug
Body: Hi. When I create a custom config with the inbound generator and add it to Marzban, the subscription link doesn't return the complete config when creating a user; it comes up incomplete overall, as if there is a problem with the httpguard protocol.

| 1medium
|
Title: MachineLearning (机器学习) learning roadmap link is broken
Body: The MachineLearning (机器学习) learning roadmap link is broken:
http://www.apachecn.org/map/145.html | 1medium
|
Title: Replace in conf.py: source_suffix = {'.rst': 'restructuredtext'}
Body: | 1medium
|
Title: static code to generated models
Body: Hey there. To the maintainers: thanks for this great library.
I would appreciate a heads-up on something I'm trying to do, which is basically adding some static code to a generated model.
I'm not sure Jinja would be suitable for that, since it's just templating. Can someone give me a direction on the best approach?
For example, for an Enum class:
```python
class Foo(str, Enum):
foo = 'foo'
bar = 'bar'
```
I would like the generated model to override a special method:
```python
class Foo(str, Enum):
foo = 'foo'
bar = 'bar'
@classmethod
def _missing_(cls, value):
pass
```
I'm generating my models from an `openapi.yml` spec. Appreciate any thoughts or help!
| 1medium
|
Title: quicksight.create_athena_dataset/datasource: allow user groups to be passed in allowed_to_use and allowed_to_manage
Body: **Is your idea related to a problem? Please describe.**
Right now, the parameters `allowed_to_use` and `allowed_to_manage` inside the method `quicksight.create_athena_dataset` allow only users to be passed but not user groups. If I want to give those permissions to user groups then I have to do a separate call using boto and run `update_data_set_permissions `. Same goes for data sources.
**Describe the solution you'd like**
It would be nice if `allowed_to_use` and `allowed_to_manage` also accepted user groups to avoid the workaround with boto.
| 1medium
|
Title: Error when preprocessing the dataset
Body: The following error occurred while preprocessing the dataset:
E:\Miniconda3\envs\mockingbird\MockingBird-main>python pre.py E:\Miniconda3\envs\mockingbird\MockingBird-main -d aidatatang_200zh -n 1
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Using data from:
E:\Miniconda3\envs\mockingbird\MockingBird-main\aidatatang_200zh\corpus\train
aidatatang_200zh: 0%| | 0/547 [00:00<?, ?speakers/s]Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
aidatatang_200zh: 100%|████████████████████████████████████████████████████████| 547/547 [01:17<00:00, 7.03speakers/s]
The dataset consists of 0 utterances, 0 mel frames, 0 audio timesteps (0.00 hours).
Traceback (most recent call last):
File "E:\Miniconda3\envs\mockingbird\MockingBird-main\pre.py", line 72, in <module>
preprocess_dataset(**vars(args))
File "E:\Miniconda3\envs\mockingbird\MockingBird-main\models\synthesizer\preprocess.py", line 101, in preprocess_dataset
print("Max input length (text chars): %d" % max(len(m[5]) for m in metadata))
ValueError: max() arg is an empty sequence

Any help would be greatly appreciated!!! | 1medium
|
Title: Phonenumber library does not recognize +592 7 Guyanese phone numbers as valid
Body: ## Description of the issue
The phonenumber library currently does not recognize new Guyanese (GY) phone numbers starting with `+592 7` as valid. Only numbers starting with `+592 6` are being correctly validated. This causes issues when users attempt to submit forms using phone numbers with the updated format.
## Context information (for bug reports)
**Output of `bench version`**:
15.56.0
## Steps to reproduce the issue
1. Attempt to input a phone number starting with `+592 7` in any form field using the phone fieldtype by selecting `guyana` then enter a number `7004812` (which is a valid GY number).
2. Submit the form.
3. Observe validation error.
### Observed result
Phone numbers starting with `+592 7` are incorrectly marked as invalid.
### Expected result
Phone numbers starting with `+592 7` should be recognized as valid.
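A quick check with the underlying phonenumbers library, using the number from the stack trace below (the `False` result is the current behavior reported here):
```python
import phonenumbers

num = phonenumbers.parse("+5927123345")
print(phonenumbers.is_valid_number(num))  # currently False; expected True
```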
### Stacktrace / full error message
```bash
15:23:36 web.1 | Traceback (most recent call last):
15:23:36 web.1 | File "apps/frappe/frappe/app.py", line 114, in application
15:23:36 web.1 | response = frappe.api.handle(request)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/api/__init__.py", line 49, in handle
15:23:36 web.1 | data = endpoint(**arguments)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/api/v1.py", line 36, in handle_rpc_call
15:23:36 web.1 | return frappe.handler.handle()
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/handler.py", line 50, in handle
15:23:36 web.1 | data = execute_cmd(cmd)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/handler.py", line 86, in execute_cmd
15:23:36 web.1 | return frappe.call(method, **frappe.form_dict)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/__init__.py", line 1726, in call
15:23:36 web.1 | return fn(*args, **newargs)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/utils/typing_validations.py", line 31, in wrapper
15:23:36 web.1 | return func(*args, **kwargs)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/desk/form/save.py", line 37, in savedocs
15:23:36 web.1 | doc.submit()
15:23:36 web.1 | File "apps/frappe/frappe/utils/typing_validations.py", line 31, in wrapper
15:23:36 web.1 | return func(*args, **kwargs)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 1060, in submit
15:23:36 web.1 | return self._submit()
15:23:36 web.1 | ^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 1043, in _submit
15:23:36 web.1 | return self.save()
15:23:36 web.1 | ^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 342, in save
15:23:36 web.1 | return self._save(*args, **kwargs)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 381, in _save
15:23:36 web.1 | self._validate()
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 587, in _validate
15:23:36 web.1 | self._validate_data_fields()
15:23:36 web.1 | File "apps/frappe/frappe/model/base_document.py", line 914, in _validate_data_fields
15:23:36 web.1 | frappe.utils.validate_phone_number_with_country_code(phone, phone_field.fieldname)
15:23:36 web.1 | File "apps/frappe/frappe/utils/__init__.py", line 119, in validate_phone_number_with_country_code
15:23:36 web.1 | frappe.throw(
15:23:36 web.1 | File "apps/frappe/frappe/__init__.py", line 603, in throw
15:23:36 web.1 | msgprint(
15:23:36 web.1 | File "apps/frappe/frappe/__init__.py", line 568, in msgprint
15:23:36 web.1 | _raise_exception()
15:23:36 web.1 | File "apps/frappe/frappe/__init__.py", line 519, in _raise_exception
15:23:36 web.1 | raise exc
15:23:36 web.1 | frappe.exceptions.InvalidPhoneNumberError: Phone Number +592-7123345 set in field phone_number is not valid.
15:23:36 web.1 |
15:23:36 web.1 | 172.18.0.1 - - [19/Feb/2025 15:23:36] "POST /api/method/frappe.desk.form.save.savedocs HTTP/1.1" 417 -
```
## Additional information
- **Frappe install method**: Docker
| 1medium
|
Title: Support a collapsed page navigation menu that only shows the page icons
Body: ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Support a collapsed page navigation menu that only shows the page icons similar to the VS Code activity bar:

### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | 1medium
|
Title: Explicit Routers
Body: **Is your feature request related to a problem? Please describe.**
The [documentation](https://tortoise.github.io/router.html?h=router#model-signals) and the implementation of the Router don't seem to match. The documentation suggests that the methods of the router class are explicit, while the code suggests these methods are optional. The current code appears to follow Django's methodology of allowing multiple routers to be registered and then processed in order of significance.
**Describe the solution you'd like**
Routers to be implemented according to the documentation. This would also allow for more accurate static type checking through Protocols. An example of the change can be found [here](https://github.com/tortoise/tortoise-orm/compare/develop...i-am-grub:tortoise-orm:explicit-router)
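A sketch of the protocol idea (method names follow the documented Router interface; exact signatures are my assumptions):
```python
from typing import Protocol, Type

from tortoise.models import Model

class Router(Protocol):
    def db_for_read(self, model: Type[Model]) -> str: ...
    def db_for_write(self, model: Type[Model]) -> str: ...
```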
**Describe alternatives you've considered**
Updating the documentation to clarify the intent of the router implementation.
| 1medium
|
Title: Update to marshmallow 3
Body: Hi, everyone.
I'd like some information about the update to marshmallow 3. Where can I find it? | 3misc
|
Title: Frontend for reporting OOM errors from the backend, and informing users to contact admins.
Body: | 1medium
|