| text (string, lengths 20–57.3k) | labels (class label, 4 classes) |
---|---|
Title: Alicevision Meshroom support for custom data format and reusable for gsplat as well?
Body: **Is your feature request related to a problem? Please describe.**
I'm running the SfM pipeline from AliceVision Meshroom, but I can't find an np-process-data pipeline that explicitly handles the Meshroom output. I searched the issues and PRs with the "Meshroom" keyword and found nothing, even though Meshroom is far from a poor-quality calibration + reconstruction tool.
Also, the new repo [gsplat](https://github.com/nerfstudio-project/gsplat/tree/main) has an example that processes the COLMAP format. Could this np-process-data pipeline become reusable for both repos?
**Describe the solution you'd like**
I verified the Meshroom output in Blender, including the camera extrinsics and intrinsics as well as the mesh location.
**Describe alternatives you've considered**
Manually writing code to convert the output from Meshroom to the nerfstudio format is not difficult, but it could be integrated into the data preprocessing pipeline.
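For illustration, a minimal conversion sketch (not part of nerfstudio): it reads Meshroom's `cameras.sfm` and writes a nerfstudio-style `transforms.json`. The `cameras.sfm` field names (`views`, `intrinsics`, `poses`, `pxFocalLength`, `principalPoint`, `rotation`, `center`) and the axis/rotation conventions are assumptions about typical Meshroom output and should be verified against your own files.
```python
import json

def meshroom_to_transforms(sfm_path, out_path):
    with open(sfm_path) as f:
        sfm = json.load(f)

    intrinsics = {i["intrinsicId"]: i for i in sfm["intrinsics"]}
    poses = {p["poseId"]: p["pose"]["transform"] for p in sfm["poses"]}

    frames = []
    for view in sfm["views"]:
        pose_id = view.get("poseId")
        if pose_id not in poses:
            continue  # this view was not reconstructed
        t = poses[pose_id]
        r = [float(x) for x in t["rotation"]]  # row-major 3x3; verify cam-to-world vs world-to-cam
        c = [float(x) for x in t["center"]]    # camera center in world coordinates
        frames.append({
            "file_path": view["path"],
            "transform_matrix": [
                [r[0], r[1], r[2], c[0]],
                [r[3], r[4], r[5], c[1]],
                [r[6], r[7], r[8], c[2]],
                [0.0, 0.0, 0.0, 1.0],
            ],
        })

    intr = intrinsics[sfm["views"][0]["intrinsicId"]]  # assumes one shared camera
    out = {
        "fl_x": float(intr["pxFocalLength"]),  # may be a [fx, fy] pair in newer Meshroom versions
        "fl_y": float(intr["pxFocalLength"]),
        "cx": float(intr["principalPoint"][0]),
        "cy": float(intr["principalPoint"][1]),
        "w": int(intr["width"]),
        "h": int(intr["height"]),
        "frames": frames,
    }
    with open(out_path, "w") as f:
        json.dump(out, f, indent=2)
```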
| 1medium
|
Title: How can I navigate to a new page by clicking?
Body: In the examples I have seen so far, the pages are all set up at initialization time via register_admin. In my tests I need to click an item (for example, an entry in a list) and have that click open or navigate to a new page (for example, ArticleOverviewPage), but this page is not set up at initialization.
Is there an example of how to implement this? Thank you all. | 1medium
|
Title: AssertionError: Batch size 4 is wrong. It must be a multiple of # GPUs 3.
Body: I ran `python3.6 run.py --input_folder test_images/old_w_scratch_gpu012/ --output_folder old_w_scratch_output-gpu-012/ --GPU 0,1,2`
and the following error occurred. Help please.
--------------------------------
File "test_face.py", line 15, in <module>
opt = TestOptions().parse()
File "/home/ubuntu/download/Bringing-Old-Photos-Back-to-Life/Face_Enhancement/options/base_options.py", line 293, in parse
), "Batch size %d is wrong. It must be a multiple of # GPUs %d." % (opt.batchSize, len(opt.gpu_ids))
AssertionError: Batch size 4 is wrong. It must be a multiple of # GPUs 3.
Finish Stage 3 ...
| 1medium
|
Title: Steam Provider doesn't appear in socialaccounts in Response from _allauth/browser/v1/config
Body: Hey, I'm trying to integrate Steam into my Django Rest + SolidJS Project. To understand how allauth headless works, I downloaded the react-spa example from the repo and added these things:
In INSTALLED_APPS:
"allauth.socialaccount.providers.openid",
"allauth.socialaccount.providers.steam",
In the Django admin dashboard, I added a social application with this data:
Provider: Steam
Name: Steam Provider
Client id: steam_api_key
Secret Key: steam_api_key
After starting the docker-compose and going to the signup page, I don't see any Steam Provider because providers is empty in the Response from _allauth/browser/v1/config.
Any help or insights would be greatly appreciated. Thank you! | 1medium
|
Title: How to calculate relevance/score for a query asked by the user against the trained document on a scale of 0-1
Body: **Question**
How can I calculate a relevance/score for a query asked by the user against the trained documents?
**Additional context**
@abhishekraok @DmitryKey
The relevance/score of the results obtained against the documents is very important to know, so that the user is aware of the confidence level of the retrieved documents.
Could you please let us know how this is calculated?
- If this is incorporated in this code base, where is it?
- How do we do it for a BERT model? (Is it using a dot-product calculation?)
| 1medium
|
Title: FilterSet fields associated with CharField with choices are not converted into graphene.Enum
Body: | 1medium
|
Title: CHUNK_SIZE defaults to 1, so unless explicitly set, prefetching won't work
Body: **Describe the bug**
Not setting `CHUNK_SIZE` in your resource will cause it to default to being set to 1. Since we need to paginate the queryset to enable prefetching, this means we iterate over the queryset row by row, which prevents prefetching from working.
Setting the `CHUNK_SIZE` explicitly against the resource fixes the issue and lets the prefetch work, but there's no signal that this problem could be happening to the user, which can be very confusing if you're new to prefetch and can't see why it's not working (not that I know anyone who fits that description, ahem)
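As a minimal sketch of that explicit setting (the exact `Meta` option name should be checked against the installed django-import-export version; `myapp.MyModel` and `m2m_thing` are hypothetical):
```python
from import_export import resources
from myapp.models import MyModel  # hypothetical app/model with an m2m_thing ManyToManyField


class MyModelResource(resources.ModelResource):
    class Meta:
        model = MyModel
        chunk_size = 5000  # without this, iteration falls back to one row at a time and defeats prefetching


# usage: MyModelResource().export(MyModel.objects.prefetch_related("m2m_thing"))
```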
**To Reproduce**
Steps to reproduce the behavior:
1. Remove the `CHUNK_SIZE` setting from your resource
2. Make a resource for a Django model that has a valid prefetch option, e.g. a `m2m_thing = ManyToManyField(…)`
3. Export the resource using `MyResource().export(queryset.prefetch_related('m2m_thing'))`
4. See in the query log that one query for `m2m_thing` is run for each row of the model
**Versions:**
- Django Import Export: 2.3.0
- Python: 3.6.9
- Django: 3.0.8
**Expected behavior**
* Setting the `CHUNK_SIZE` default value to something higher, e.g. 5000?
* Possibly a warning output if `CHUNK_SIZE` isn't set but prefetches are being used on the queryset?
* A note added to the docs against `CHUNK_SIZE`?
| 1medium
|
Title: performance issue
Body: I have a big performance issue: 1.5 s using the raw query, more than 16 s using the SQLAlchemy Core version of it.
```
raw_query = str(stmt.compile(dialect=postgresql.dialect(), compile_kwargs={"literal_binds": True}))
conn = await asyncpg.connect(str(databaselake.url))
direct0 = time.time()
direct_result = await conn.fetch(raw_query)
await conn.close()
direct1 = time.time()
print(f"direct: {direct1-direct0}")
raw0 = time.time()
raw_result = await databaselake.fetch_all(raw_query)
raw1 = time.time()
print(f"raw: {raw1-raw0}")
stmt0 = time.time()
result = await databaselake.fetch_all(stmt)
stmt1 = time.time()
print(f"stmt: {stmt1 - stmt0}")
print("result ok")
```
direct: 1.3257312774658203
raw: 1.4148566722869873
stmt1: 19.825509071350098
`direct` and `raw` both use `raw_query` which is the fully compiled version of `stmt`
They have similar performance, in line with what I get from entering the query in psql, around 1.5s
`direct` uses asyncpg directly, this was just to test, while `raw` uses databases
Now the fetch_all version using the SQLAlchemy Core `stmt` takes 16 times longer...
The query returns about 21 columns and 4500 rows. It's about 80 000 characters long; I'm not sure it makes sense to paste it here, but maybe there's something else I haven't considered.
edit: the raw statement looks like this
```
select l.field1, l.field2, l.field3, ,......, l.field21
from lake l
join (
select x.unnest, x.ordinality
from unnest(array ['ticker1','ticker2', ....., 'ticker4299', 'ticker4300']) with ordinality
as x (unnest, ordinality)) as r on l.ticker = r.unnest
where time = '2019-06-20'
order by r.ordinality;
``` | 2hard
|
Title: Python script killed when running auto profiling. It executes for a few minutes and, just when summarising the dataset, it prints one column name and says "Killed". The machine is a 16 GB Unix server node
Body: | 2hard
|
Title: How to make `irfft` in ONNX
Body: # Ask a Question
### Question
Hello! Can you point out how `irfft` can be used? I found issues and documentation on using `rfft`, but didn't find anything about `irfft`.
I found that https://github.com/onnx/onnx/issues/1646 and https://github.com/onnx/onnx/issues/3573 were closed with the comment `All the other ops from the original list were added at some point.`, but I can't find any information related to `irfft`.
I would be glad to help!
| 1medium
|
Title: Big bug on create/POST: id increments on a failed POST, wasting unique ids and spamming the db
Body: 
The id keeps increasing even on a failed POST, while this issue does not occur with SQLAlchemy. | 2hard
|
Title: AddToBrightness not deterministic/reproducible
Body: I have found that the result of `AddToBrightness` is not deterministic, even when a seed is provided. Given that other probabilistic augmentations such as `Dropout` do seem to be deterministic when a seed is provided, I assume this is unexpected behavior.
The behavior can be reproduced with the following snippet:
```python
import hashlib
import imgaug
import numpy as np
from imgaug.augmenters import AddToBrightness, Dropout, Sequential
from numpy.random import default_rng
numpy_random_generator = default_rng(seed=42)
image = numpy_random_generator.integers(0, 255, (200, 200, 3), dtype=np.uint8)
imgaug_random_generator = imgaug.random.RNG(numpy_random_generator)
sequential = Sequential(
children=[
AddToBrightness(seed=imgaug_random_generator),
Dropout(seed=imgaug_random_generator),
],
seed=imgaug_random_generator,
)
image = sequential.augment_image(image)
def hash_image(image):
md5_hash = hashlib.md5()
md5_hash.update(image.data.tobytes())
return int(md5_hash.hexdigest(), 16)
print(hash_image(image))
```
When running this code, the hash will be different each time. I expect the hash to be the same each time, as the seed is provided.
When commenting out `AddToBrightness` and only applying the `Dropout` augmentation, the hash is the same on each run. | 1medium
|
Title: access previous execution
Body: Please add an option to access notebooks from previous executions. The parameter values should be displayed along with the notebook. | 1medium
|
Title: Stateless app loading hook exceptions should be captured and not propagated
Body: A side effect of the misconfiguration in #330 is that it exposes a lack of exception handling around the hook to locate a stateless app.
The code in [get_local_stateless_by_name](https://github.com/GibbsConsulting/django-plotly-dash/blob/master/django_plotly_dash/dash_wrapper.py) should cleanly capture any exception raised by the hook.
| 1medium
|
Title: Not deserializing to Python decimal.Decimal is a nightmare for floating-point numbers in JSON
Body: Python uses floats in its JSON library. Herein lies the problem.
```
$ python
>>> import json
>>> json.dumps(10.001)
'10.000999999999999'
>>> json.loads('{"blaa": 0.333331}')
{u'blaa': 0.33333099999999999}
>>> type(json.loads('{"blaa": 0.333331}')['blaa'])
<type 'float'>
```
This is unacceptable.
see https://bitcointalk.org/index.php?topic=4086.msg58882#msg58882
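For reference, a minimal sketch of the standard-library workaround: parse JSON numbers straight into `decimal.Decimal` and serialize them back via `str`. This only covers the stdlib `json` module, not orjson.
```python
import json
from decimal import Decimal

# parse_float routes every JSON number with a fractional part through Decimal
data = json.loads('{"blaa": 0.333331}', parse_float=Decimal)
print(data["blaa"], type(data["blaa"]))  # 0.333331 <class 'decimal.Decimal'>

# Decimal is not JSON-serializable by default, so fall back to str on output
print(json.dumps({"blaa": data["blaa"]}, default=str))  # {"blaa": "0.333331"}
```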
In fact, it’s impossible to customize the behaviour of orjson to map to decimal values. This is weird. | 2hard
|
Title: [New transform] PadIfNeeded3D
Body: | 1medium
|
Title: PackageNotFoundError
Body: Traceback (most recent call last):
File "main.py", line 4, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "whisperx\__init__.py", line 1, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "whisperx\transcribe.py", line 9, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "whisperx\alignment.py", line 12, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "transformers\__init__.py", line 26, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "transformers\dependency_versions_check.py", line 57, in <module>
File "transformers\utils\versions.py", line 117, in require_version_core
File "transformers\utils\versions.py", line 104, in require_version
importlib.metadata.PackageNotFoundError: No package metadata was found for The 'tqdm>=4.27' distribution was not found and is required by this application.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main
[87568] Failed to execute script 'main' due to unhandled exception! | 1medium
|
Title: [docs] "Adding custom CSS to your demo" is not updated for 5.0
Body: ### Describe the bug
"For additional styling ability, you can pass any CSS to your app using the css= kwarg. You can either the filepath to a CSS file, or a string of CSS code." silently fails for css from a file path since 5.0.
This is mentioned in #9463 but not updated in the doc.
See also #10344 for another issue on that page.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
Prerequisites: `styles.css` in the same directory defining `some-style`
```python
import gradio as gr
with gr.Blocks(fill_height=True, css="styles.css") as demo:
gr.Textbox(label="Time", elem_classes="some-style")
```
fails silently.
The correct parameter for css from a file path:
```
css_paths="styles.css"
```
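For clarity, a corrected version of the reproduction above (a sketch, assuming `styles.css` still sits next to the script as in the prerequisites):
```python
import gradio as gr

# css_paths takes the file path; css now only accepts a CSS string in Gradio 5
with gr.Blocks(fill_height=True, css_paths="styles.css") as demo:
    gr.Textbox(label="Time", elem_classes="some-style")

demo.launch()
```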
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio >= 5.0
```
### Severity
I can work around it | 0easy
|
Title: feature request about resuming model training with epochs change
Body: ### Scene
Given a trained model with 100 epochs completed, I want to resume training from the previously trained model while increasing/decreasing the number of epochs.
First 100 epochs:
```python
model = YOLO("yolo.pt")
results = model.train(
data="datasets/xxx.yaml",
save_period=2,
epochs=100,
)
```
Then, resume training and change the number of epochs:
```python
model = YOLO("epoch90.pt")
results = model.train(
data="datasets/xxx.yaml",
save_period=2,
epochs=300, # more epochs
resume=True, # resume flag
)
```
But `epochs=300` does not work; training still stops at 100.
As already pointed out in https://github.com/ultralytics/ultralytics/issues/16402 and https://github.com/ultralytics/ultralytics/issues/18154, I'm wondering why changing the number of epochs when resuming is not supported in the current code base.
Is it possible to support it?
| 1medium
|
Title: The popover doesn't hide when it's inside a fragment and depends on another widget to be visible.
Body: ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
When I add a `popover` that depends on another widget to be visible, once it becomes visible, it never hides again.
### Reproducible Code Example
```Python
from dataclasses import dataclass
from typing import Union
import streamlit as st
@dataclass
class Test:
name: str = "Test"
child: Union["Test", None] = None
@st.experimental_fragment
def show(self):
content_container, config_container = st.columns([1, 1])
with content_container:
st.write(f"This is the content of {self.name}")
with config_container:
with st.popover("Config"):
st.write("This is the config")
if st.toggle(label="Show child", key=self.name):
self.show_child()
def show_child(self):
if self.child is None:
st.write("No child")
return
self.child.show()
Test(child=Test("Child")).show()
```
### Steps To Reproduce
1. Initial state

2. Enable the `toggle`

3. Disable the `toggle`

### Expected Behavior
I expect the `popover` the be visible only if the `toggle` is enabled.
### Current Behavior
The `popover` remains visible but "disabled" if it has been rendered at least once and the `toggle` is disabled.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.36.0
- Python version: 3.11.10
- Operating System: WSL Ubuntu 22.04.5 LTS
- Browser: Chrome
### Additional Information
_No response_ | 1medium
|
Title: A flag to disable running code entirely and give replies only
Body: ### Is your feature request related to a problem? Please describe.
Hello, it seems there is no way to disable running code by default when returning responses, which can cause the response to freeze, waiting in an infinite loop for a y/n. This seems like an oversight, considering there is an option to execute code by default with `interpreter -y`.
### Describe the solution you'd like
Have a simple flag that can be toggled in the CLI and API, such as `interpreter -nr` and `res = interpreter.chat("Explain what is Python?", disable_code_run=True)`.
### Describe alternatives you've considered
Telling the interpreter not to run code doesn't always work, and it sometimes gets stuck at the prompt.
### Additional context
_No response_ | 1medium
|
Title: Argument to PredictionError is overwritten
Body: **Describe the bug**
Argument `is_fitted` is overwritten when Instantiating a `PredictionError` or `ResidualsPlot`.
**To Reproduce**
```python
from yellowbrick.regressor import PredictionError
from sklearn.linear_model import Lasso
visualizer = PredictionError(Lasso(), "test title", is_fitted=True)
print(visualizer.is_fitted)
```
**Expected behavior**
When `is_fitted` is passed in, I expect it not to be overwritten in the `__init__` calls.
**Desktop (please complete the following information):**
- OS: Ubuntu 20.04
- Python Version 3.8.10
- Yellowbrick Version : 1.3.post1
| 1medium
|
Title: Error: Convert Pytorch model to IR model
Body: I trained the model with PyTorch and used torch.save() to save the whole model, including weights and network definition, but when I run `mmtoir -f pytorch -d mn --inputShape 1,28,28 -n xxx.pth`, I get the error
> "AttributeError: Can't get attribute 'Net' on <module '__main__' from '/home/xxxx/anaconda3/bin/mmtoir'>"
>
It seems mmtoir cannot find my network definition? | 1medium
|
Title: Dedicated yaml tab for Trials
Body: /kind feature
**Describe the solution you'd like**
Our goal is to create a new yaml tab that will show the full YAML as it comes from the backend using the new Editor component introduced in https://github.com/kubeflow/kubeflow/pull/6733.
Related issue: https://github.com/kubeflow/katib/issues/1763
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| 1medium
|
Title: Tabulator : tooltips
Body: Hello all,
#### ALL software version info
MacOs with Chrome, Safari or FireFox
bokeh 3.6.1 and panel >= 1.5.2
#### Description of expected behavior and the observed behavior
The issue occurs in Tabulator when using `header_tooltips` with a `FastListTemplate`. The background and font colors of the tooltips are both dark, making the text unreadable.
I couldn't find the CSS responsible for the background color.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import pandas as pd
import panel as pn
import random
import numpy as np
pn.extension('tabulator')
n = 100
data = {
"ID": range(1, n + 1),
"Name": [f"Name_{i}" for i in range(1, n + 1)],
"Age": [random.randint(18, 70) for _ in range(n)],
"Score": [round(random.uniform(50, 100), 2) for _ in range(n)],
"Category": [random.choice(["A", "B", "C"]) for _ in range(n)],
"Active": [random.choice([True, False]) for _ in range(n)],
"Date": pd.date_range("2023-01-01", periods=n),
"Comment": [f"Comment_{i}" for i in range(1, n + 1)],
"Rating": [round(random.uniform(1, 5), 1) for _ in range(n)],
"Value": np.random.randint(100, 500, size=n)}
df = pd.DataFrame(data)
htt = {x: x for x in data.keys()}
tabulator = pn.widgets.Tabulator(df, page_size=10, sizing_mode='stretch_width', header_tooltips=htt)
# app = tabulator # OK
template = pn.template.FastListTemplate(title="Tabulator test", main=[tabulator]) # bug
# template = pn.template.BootstrapTemplate(title="Tabulator test", main=[tabulator]) # OK
# template = pn.template.MaterialTemplate(title="Tabulator test", main=[tabulator]) # OK
# template = pn.template.MaterialTemplate(title="Tabulator test", main=[tabulator]) # OK
app = template
app.servable()
app.show()
```
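A possible workaround sketch, not a confirmed fix: inject a global CSS override for the tooltip colors. The `.tabulator-tooltip` selector is an assumption about the class Tabulator gives its tooltip elements, and whether global `raw_css` reaches them (rather than the widget's `stylesheets`) may depend on the Panel version.
```python
# replaces the pn.extension('tabulator') call in the reproduction above
tooltip_css = """
.tabulator-tooltip {
    background-color: #ffffff !important;
    color: #000000 !important;
}
"""
pn.extension('tabulator', raw_css=[tooltip_css])
```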
#### Screenshots or screencasts of the bug in action
<img width="770" alt="Image" src="https://github.com/user-attachments/assets/76b5606e-03cc-4505-85b6-ef379496675a" />
| 1medium
|
Title: 1.2.6 cannot connect to an emulator in portrait mode with javacap, only in landscape mode
Body: **Describe the bug**
Following the tutorial, I selected "use javacap" when connecting to the emulator, and then I could not connect to an emulator in portrait mode. I tried the MuMu emulator (overseas version) and LDPlayer and neither works, but in landscape mode the connection works perfectly. If "use javacap" is unchecked, minicap can also connect in portrait mode, but the image is rather blurry, so I would still like to use javacap.
***Use javacap connect portrait mode log (mumu emulator)***:
```
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 get-state
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 wait-for-device
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.sdk
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap ; echo ---$?---
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap.so ; echo ---$?---
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1
[07:10:58][DEBUG]<airtest.core.android.minicap> WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.
version:5
[07:10:58][DEBUG]<airtest.core.android.minicap> upgrade minicap to lastest version: -1->6
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell rm -r /data/local/tmp/minicap*
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.product.cpu.abi
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.preview_sdk
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.release
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\x86\minicap /data/local/tmp/minicap
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap ; echo ---$?---
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\minicap-shared/aosp/libs/android-23/x86/minicap.so /data/local/tmp/minicap.so
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap.so ; echo ---$?---
[07:10:59][INFO]<airtest.core.android.minicap> minicap installation finished
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path jp.co.cyberagent.stf.rotationwatcher ; echo ---$?---
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell export CLASSPATH=/data/app/jp.co.cyberagent.stf.rotationwatcher-1/base.apk;exec app_process /system/bin jp.co.cyberagent.stf.rotationwatcher.RotationWatcher
[07:11:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys package com.netease.nie.yosemite ; echo ---$?---
[07:11:00][INFO]<airtest.core.android.yosemite> local version code is 302, installed version code is 302
[07:11:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --no-rebind tcp:15578 localabstract:javacap_15578
[07:11:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path com.netease.nie.yosemite ; echo ---$?---
[07:11:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell CLASSPATH=/data/app/com.netease.nie.yosemite-1/base.apk exec app_process /system/bin com.netease.nie.yosemite.Capture --scale 100 --socket javacap_15578 -lazy 2>&1
[07:11:00][DEBUG]<airtest.utils.nbsp> [javacap_sever]b'Capture server listening on @javacap_15578'
[07:11:00][ERROR]<airtest> Traceback (most recent call last):
File "app\plugins\devicepool\android\device.py", line 342, in run
File "app\plugins\devicepool\android\device.py", line 351, in main_loop
File "airtest\core\android\javacap.py", line 101, in get_frame_from_stream
return self.frame_gen.send(None)
StopIteration
[Warning] The current device is unplugged!
```
***Use minicap connect portrait mode log (mumu emulator)***:
```
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 get-state
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 wait-for-device
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.sdk
F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\helper.py:41: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
cls.LOGGING.warn("Device:%s updated %s -> %s" % (dev.uuid, instance, dev))
[07:20:58][WARNING]<airtest.core.api> Device:127.0.0.1:7555 updated <airtest.core.android.android.Android object at 0x000002244D4A9CF8> -> <airtest.core.android.android.Android object at 0x000002244C134F60>
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap ; echo ---$?---
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap.so ; echo ---$?---
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1
[07:20:58][DEBUG]<airtest.core.android.minicap> WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.
version:5
[07:20:58][DEBUG]<airtest.core.android.minicap> upgrade minicap to lastest version: -1->6
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell rm -r /data/local/tmp/minicap*
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.product.cpu.abi
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.preview_sdk
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.release
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\x86\minicap /data/local/tmp/minicap
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap ; echo ---$?---
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\minicap-shared/aosp/libs/android-23/x86/minicap.so /data/local/tmp/minicap.so
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap.so ; echo ---$?---
[07:20:59][INFO]<airtest.core.android.minicap> minicap installation finished
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path jp.co.cyberagent.stf.rotationwatcher ; echo ---$?---
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell export CLASSPATH=/data/app/jp.co.cyberagent.stf.rotationwatcher-1/base.apk;exec app_process /system/bin jp.co.cyberagent.stf.rotationwatcher.RotationWatcher
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --no-rebind tcp:14428 localabstract:minicap_14428
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -n 'minicap_14428' -P 1600x900@450x800/270 -l 2>&1
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'PID: 3115'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: Using projection 1600x900@450x253/3'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:240) Creating SurfaceComposerClient'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:243) Performing SurfaceComposerClient init check'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:250) Creating virtual display'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:256) Creating buffer queue'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:261) Creating CPU consumer'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:265) Creating frame waiter'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:269) Publishing virtual display'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/JpgEncoder.cpp:64) Allocating 4379652 bytes for JPG encoder'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/minicap.cpp:489) Server start'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/minicap.cpp:492) New client connection'
[07:21:00][DEBUG]<airtest.core.android.minicap> (1, 24, 3115, 1600, 900, 450, 253, 3, 2)
[07:21:00][DEBUG]<airtest.core.android.minicap> quirk_flag found, going to resetup
[07:21:00][DEBUG]<airtest.core.android.minicap> minicap stream ends
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --remove tcp:14428
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --no-rebind tcp:19850 localabstract:minicap_19850
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -n 'minicap_19850' -P 900x1600@800x450/0 -l 2>&1
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'PID: 3141'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: Using projection 900x1600@253x450/0'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:240) Creating SurfaceComposerClient'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:243) Performing SurfaceComposerClient init check'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:250) Creating virtual display'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:256) Creating buffer queue'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:261) Creating CPU consumer'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:265) Creating frame waiter'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:269) Publishing virtual display'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/JpgEncoder.cpp:64) Allocating 4379652 bytes for JPG encoder'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/minicap.cpp:489) Server start'
[07:21:00][DEBUG]<airtest.core.android.minicap> (1, 24, 3141, 900, 1600, 253, 450, 0, 2)
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/minicap.cpp:492) New client connection'
```
***Use javacap connect landscape mode log (mumu emulator)***:
```
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 get-state
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 wait-for-device
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.sdk
F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\helper.py:41: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
cls.LOGGING.warn("Device:%s updated %s -> %s" % (dev.uuid, instance, dev))
[07:22:55][WARNING]<airtest.core.api> Device:127.0.0.1:7555 updated <airtest.core.android.android.Android object at 0x000002244C134F60> -> <airtest.core.android.android.Android object at 0x0000022451BC9B70>
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap ; echo ---$?---
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap.so ; echo ---$?---
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1
[07:22:55][DEBUG]<airtest.core.android.minicap> WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.
version:5
[07:22:55][DEBUG]<airtest.core.android.minicap> upgrade minicap to lastest version: -1->6
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell rm -r /data/local/tmp/minicap*
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.product.cpu.abi
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.preview_sdk
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.release
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\x86\minicap /data/local/tmp/minicap
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap ; echo ---$?---
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\minicap-shared/aosp/libs/android-23/x86/minicap.so /data/local/tmp/minicap.so
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap.so ; echo ---$?---
[07:22:56][INFO]<airtest.core.android.minicap> minicap installation finished
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path jp.co.cyberagent.stf.rotationwatcher ; echo ---$?---
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell export CLASSPATH=/data/app/jp.co.cyberagent.stf.rotationwatcher-1/base.apk;exec app_process /system/bin jp.co.cyberagent.stf.rotationwatcher.RotationWatcher
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys package com.netease.nie.yosemite ; echo ---$?---
[07:22:57][INFO]<airtest.core.android.yosemite> local version code is 302, installed version code is 302
[07:22:57][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --no-rebind tcp:18115 localabstract:javacap_18115
[07:22:57][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path com.netease.nie.yosemite ; echo ---$?---
[07:22:57][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell CLASSPATH=/data/app/com.netease.nie.yosemite-1/base.apk exec app_process /system/bin com.netease.nie.yosemite.Capture --scale 100 --socket javacap_18115 -lazy 2>&1
[07:22:57][DEBUG]<airtest.utils.nbsp> [javacap_sever]b'Capture server listening on @javacap_18115'
[07:22:57][DEBUG]<airtest.core.android.javacap> (1, 3, 0, 1600, 900, 0, 0, 0, 1)
```
**Relevant screenshots**
(Attach screenshots of the problem, if available)
(For image- and device-related problems in AirtestIDE, please paste the relevant error messages from the AirtestIDE console window)
**Steps to reproduce**
1. Open the MuMu emulator and enable the ADB connection
2. Add the remote connection "adb connect 127.0.0.1:7555"
3. Check "use javacap"
4. Rotate the MuMu emulator screen to portrait
5. Click connect
**Expected behavior**
Both landscape and portrait modes should connect normally using javacap.
**Python version:** `AirtestIDE 1.2.6 default Python (3.6.5)`
**Airtest version:** `1.2.6`
**Device:**
- Model: MuMu emulator 2.3.18, desktop launcher 2.4.0
- OS: Android 6.0.1
**Other relevant environment information**
The connection works normally in landscape mode or with minicap.
| 1medium
|
Title: Minor fix for Meta docs
Body: The documentation says that the `name` attribute of `Meta` defaults to `the lower-case of the model's class name`, but it actually defaults to the [lower-case of the model's table name](https://github.com/biosustain/potion/blob/af3f52173e97ef41bddfa8f4775280c9b8d3188d/flask_potion/contrib/alchemy/manager.py#L60).
| 0easy
|
Title: Shebang on vba_extract.py set incorrectly in wheel
Body: The shebang at the beginning of vba_extract.py is set to `#!/Users/John/.pythonbrew/pythons/Python-2.7.2/bin/python` which is specific to a particular installation. It should probably be changed to just `#!python`.
Example which demonstrates the problem:
``` shell
$ vba_extract.py
bash: /usr/local/bin/vba_extract.py: /Users/John/.pythonbrew/pythons/Python-2.7.2/bin/python: bad interpreter: No such file or directory
```
| 0easy
|
Title: Keep pool connection count not less than min_size
Body: Is there a way to specify the min_size of the pool so that, even if a connection has been unused for longer than `max_inactive_connection_lifetime`, it won't be closed when the pool has exactly min_size connections open?
From what I see in the code, there is no such possibility at the moment and it may be tricky to add.
We have traffic spikes and overall request variation during the day, i.e. there are more requests in the evening than in the morning, and there is significant hourly variation in the number of requests.
So, we want to always have min_size connections open, but at the same time we don't want to keep all open connections active all the time.
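For reference, a minimal sketch of the current API as I understand it: `min_size` controls how many connections are opened up front, but it does not prevent idle connections from being closed after `max_inactive_connection_lifetime` (and passing 0 disables the idle timeout for all connections, which is not quite what we want either). The DSN is hypothetical.
```python
import asyncio
import asyncpg

async def main():
    pool = await asyncpg.create_pool(
        dsn="postgresql://user:pass@localhost/db",   # hypothetical DSN
        min_size=5,                                  # connections opened up front
        max_size=50,                                 # upper bound during traffic spikes
        max_inactive_connection_lifetime=300.0,      # seconds; 0 would disable the idle timeout entirely
    )
    async with pool.acquire() as conn:
        print(await conn.fetchval("SELECT 1"))
    await pool.close()

asyncio.run(main())
```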
| 1medium
|
Title: ForeignKeyWidget through a OneToOneField w/ self as input
Body: My model is based on a network connection:
```python
# models.py
class NetworkPort(models.Model):
    label = models.ForeignKey(
        NetworkPortLabel,
        on_delete=models.PROTECT,
    )
    asset = models.ForeignKey(
        Asset,
        on_delete=models.CASCADE
    )
    connection = models.OneToOneField(
        "self",
        null=True,
        on_delete=models.SET_NULL
    )

    class Meta:
        unique_together = ['label', 'asset']
```
Here's the NetworkPortResource:
```python
# resources.py
class NetworkPortResource(resources.ModelResource):

    class SrcPortForeignKeyWidget(ForeignKeyWidget):
        def clean(self, value, row):
            my_asset = Asset.objects.get(hostname=row['src_hostname'])
            return self.model.objects.get(
                name__iexact=row['src_port'],
                itmodel__vendor__iexact=my_asset.itmodel.vendor,
                itmodel__model_number__iexact=my_asset.itmodel.model_number
            )

    class DestPortForeignKeyWidget(ForeignKeyWidget):
        def clean(self, value, row):
            my_asset = Asset.objects.get(hostname=row['dest_hostname'])
            return self.model.objects.get(
                name__iexact=row['dest_port'],
                itmodel__vendor__iexact=my_asset.itmodel.vendor,
                itmodel__model_number__iexact=my_asset.itmodel.model_number
            )

    src_hostname = fields.Field(
        column_name='src_hostname',
        attribute='asset',
        widget=ForeignKeyWidget(Asset, 'hostname')
    )
    src_port = fields.Field(
        column_name='src_port',
        attribute='label',
        widget=SrcPortForeignKeyWidget(NetworkPortLabel, 'name')
    )
    src_mac = fields.Field(
        column_name='src_mac',
        attribute='asset',
        widget=ForeignKeyWidget(Asset, 'mac_address')
    )
    dest_hostname = fields.Field(
        column_name='dest_hostname',
        attribute='connection',
        widget=ForeignKeyWidget(Asset, 'hostname')
    )
    dest_port = fields.Field(
        column_name='dest_port',
        attribute='connection',
        widget=DestPortForeignKeyWidget(NetworkPortLabel, 'name')
    )

    class Meta:
        model = NetworkPort
        exclude = ('id', 'label', 'asset', 'connection')
        import_id_fields = ('src_hostname', 'src_port')
        export_order = ('src_hostname', 'src_port', 'src_mac', 'dest_hostname', 'dest_port')
        skip_unchanged = True
        report_skipped = True
        clean_model_instances = True
```
I'm not sure how to assign the dest_hostname and dest_port since they are subfields of the connection field. I'm currently getting this error message when attempting import:
Cannot assign "<NetworkPortLabel: Eth1 : V1 by Dell>": "NetworkPort.connection" must be a "NetworkPort" instance. | 1medium
|
Title: Incorrect type annotations for untyped JSON schema arrays
Body: **Describe the bug**
When a JSON schema field is declared as `"type": "array"` with no further details, the generated data model code fails typechecking:
```
src/_models/__init__.py:1990: error: Missing type parameters for generic type "List" [type-arg]
src/_models/__init__.py:1991: error: Missing type parameters for generic type "List" [type-arg]
```
**To Reproduce**
Example schema snippet:
```json
"system/rpc/listDownloadedModels/returns": {
"type": "array"
},
```
Used commandline:
```
$ datamodel-codegen --input /path/to/schema.json --input-file-type jsonschema --output /path/to/src/_models/__init__.py \
--output-model-type pydantic_v2.BaseModel --use-annotated --use-union-operator
```
The generated model that fails typechecking:
```python
class SystemRpcListDownloadedModelsReturns(RootModel[List]):
root: List
```
**Expected behavior**
Untyped arrays should be effectively typed as `List[Any]` rather than `List`
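For comparison, the expected generated model would look roughly like this (a sketch, assuming the generator keeps the same `RootModel` structure):
```python
from typing import Any, List

from pydantic import RootModel


class SystemRpcListDownloadedModelsReturns(RootModel[List[Any]]):
    root: List[Any]  # List[Any] instead of bare List, so strict typechecking passes
```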
**Version:**
- OS: Linux
- Python version: 3.12
- datamodel-code-generator version: 0.26.3
**Additional context**
`--use-generic-container-types` just shifted the error to complaining about `Sequence` instead.
`--use-standard-collections` crashed `mypy` outright, so I didn't investigate that any further.
The offending schema field not having any type information at all regarding the permitted elements is probably a bug in its own right, but not one that has anything to do with datamodel-code-generator.
| 1medium
|
Title: Questions about new NMS Export for Detect, Segment, Pose and OBB YOLO
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Thanks for this nice feature to have the option to embed the nms in the backends.
If I understand the source code correctly, there are now two different methods of doing the NMS:
- normal NMS: using non_max_suppression in utils/ops.py
- NMS export: using NMSModel.forward in engine.exporter.py
Questions:
- Is it to be expected that both methods will give exactly the same results?
- What is the (technical) reason that non_max_suppression is not used for NMS export?
Please enlighten me.
### Additional
_No response_ | 1medium
|
Title: Add Swahili Stopwords
Body: From #2800 | 1medium
|
Title: keyerror ONLINE_LAST_MINUTES
Body: I cloned the repo and followed the docs to deploy my flaskbb, but when I run the command:
`python manage.py createall`
it reports that the command is not found, so I used the command `python manage.py initdb` instead.
I also ran the command:
`cp flaskbb/configs/production.py.example flaskbb/configs/production.py`
but when I run:
`python manage.py runserver`
it raises a KeyError:
```
Traceback (most recent call last):
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask_debugtoolbar/__init__.py", line 124, in dispatch_request
    return view_func(**req.view_args)
  File "/Users/xiaobing/workspace/flask_apps/flaskbb/flaskbb/forum/views.py", line 47, in index
    online_users = User.query.filter(User.lastseen >= time_diff()).count()
  File "/Users/xiaobing/workspace/flask_apps/flaskbb/flaskbb/utils/helpers.py", line 296, in time_diff
    diff = now - timedelta(minutes=flaskbb_config['ONLINE_LAST_MINUTES'])
  File "/Users/xiaobing/workspace/flask_apps/flaskbb/flaskbb/utils/settings.py", line 26, in __getitem__
    return Setting.as_dict()[key]
KeyError: 'ONLINE_LAST_MINUTES'
```
| 1medium
|
Title: ImproperlyConfigured: Error loading psycopg2 or psycopg module on AWS
Body: <!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.8/3.9/3.10/3.11/3.12 -->
## Expected Behavior
Zappa should update dev when I do `zappa update dev`. It was working before without having to do anything.
## Actual Behavior
When I do `zappa update dev` I get the error "Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 502 response code."
When I do `zappa tail` I get the following error message:
ImproperlyConfigured: Error loading psycopg2 or psycopg module
## Possible Fix
It was working when it showed INFO: - pyyaml==2.9.9: Using locally cached manylinux wheel in the output. Somehow set it back to that or something.
## Steps to Reproduce
I have a Lambda Django application configured to connect to a Postgres database backend,
on macOS Sequoia 15.2, running a virtual environment with pip and Python 3.12.
Try a `zappa update` of the running code.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
Output of `pip freeze`:
```
argcomplete==3.5.2
asgiref==3.8.1
boto3==1.35.90
botocore==1.35.90
certifi==2024.12.14
cfn-flip==1.3.0
charset-normalizer==3.4.1
click==8.1.8
Django==5.1.4
django-cors-headers==4.6.0
django-phonenumber-field==8.0.0
djangorestframework==3.15.2
djangorestframework-simplejwt==5.3.1
durationpy==0.9
emoji==2.14.0
hjson==3.1.0
idna==3.10
iniconfig==2.0.0
Jinja2==3.1.5
jmespath==1.0.1
kappa==0.6.0
MarkupSafe==3.0.2
packaging==24.2
phonenumbers==8.13.52
placebo==0.9.0
pluggy==1.5.0
psycopg==3.2.3
psycopg2-binary==2.9.10
PyJWT==2.10.1
pytest==8.3.4
pytest-django==4.9.0
pytest-html==4.1.1
pytest-metadata==3.1.1
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-slugify==8.0.4
pytz==2024.2
PyYAML==6.0.2
regex==2024.11.6
requests==2.32.3
s3transfer==0.10.4
setuptools==75.6.0
six==1.17.0
sqlparse==0.5.3
stripe==11.4.1
text-unidecode==1.3
toml==0.10.2
tqdm==4.67.1
troposphere==4.8.3
typing_extensions==4.12.2
urllib3==2.3.0
Werkzeug==3.1.3
wheel==0.45.1
zappa==0.59.0
```
| 1medium
|
Title: Ray tests are flaky
Body: Ray tests have shown to be flaky, especially with GPU (Buildkite CI).
There are two places that cause this flakiness:
1. Some tests fetch `ray.available_resources()` at the beginning of the test and compare the `dict` against `ray.available_resources()` after the test: `assert check_resources(original_resources)`. It looks like that `dict` has race conditions: https://buildkite.com/horovod/horovod/builds/7306#cc0600a5-13ed-479d-ba1b-3bb0f6847992/332-436
```
AssertionError: assert {'CPU': 4.0, ...147712.0, ...} == {'object_stor... 9999999984.0}
Differing items:
{'object_store_memory': 10000000000.0} != {'object_store_memory': 9999999984.0}
Left contains 5 more items:
{'CPU': 4.0,
'GPU': 4.0,
'accelerator_type:T4': 1.0,
'memory': 189132147712.0,...
```
2. Some tests see more GPUs than there should be:
```
assert len(all_envs[0]["CUDA_VISIBLE_DEVICES"].split(",")) == 4
assert 8 == 4
+8
-4
```
First, the tests should work with any number of GPUs (no fewer than the minimum required 4). Second, the tests run on a machine with only 4 GPUs on Buildkite, with only one agent per machine, so there cannot be more than 4 GPUs visible to those tests. It looks like the `RayExecutor` provides that environment variable to the worker and somehow starts 8 workers rather than 4. | 2hard
|
Title: Unable to start subprocesses
Body: ## Expected behavior
Trying to extract faces from data_src
## Actual behavior
Extracting faces...
Error while subprocess initialization: Traceback (most recent call last):
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 62, in _subprocess_run
self.on_initialize(client_dict)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\mainscripts\Extractor.py", line 68, in on_initialize
nn.initialize (device_config)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\core\leras\nn.py", line 123, in initialize
nn.tf_sess = tf.Session(config=nn.tf_sess_config)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1551, in __init__
super(Session, self).__init__(target, graph, config=config)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 676, in __init__
self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\main.py", line 324, in <module>
arguments.func(arguments)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\main.py", line 45, in process_extract
force_gpu_idxs = [ int(x) for x in arguments.force_gpu_idxs.split(',') ] if arguments.force_gpu_idxs is not None else None,
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\mainscripts\Extractor.py", line 853, in main
device_config=device_config).run()
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 210, in run
raise Exception ( "Unable to start subprocesses." )
Exception: Unable to start subprocesses.
Press any key to continue . . .
## Steps to reproduce
Well, changing the settings did nothing, so is it because of my hardware?
GTX 1060 3GB
Intel core i5 4570
16GB DDR3
## Other relevant information
Running Windows 10 | 2hard
|
Title: DFL crashed with 3080
Body: DFL crashed and then I got this error.
=============== Model Summary ===============
== ==
== Model name: Matt2_SAEHD ==
== ==
== Current iteration: 3594 ==
== ==
==------------- Model Options -------------==
== ==
== resolution: 128 ==
== face_type: wf ==
== models_opt_on_gpu: True ==
== archi: df ==
== ae_dims: 128 ==
== e_dims: 64 ==
== d_dims: 64 ==
== d_mask_dims: 22 ==
== masked_training: True ==
== eyes_mouth_prio: True ==
== uniform_yaw: False ==
== adabelief: True ==
== lr_dropout: n ==
== random_warp: False ==
== true_face_power: 0.01 ==
== face_style_power: 0.01 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: False ==
== pretrain: False ==
== autobackup_hour: 0 ==
== write_preview_history: False ==
== target_iter: 0 ==
== random_flip: True ==
== batch_size: 8 ==
== gan_power: 0.1 ==
== gan_patch_size: 16 ==
== gan_dims: 16 ==
== ==
==-------------- Running On ---------------==
== ==
== Device index: 0 ==
== Name: GeForce RTX 3080 ==
== VRAM: 10.00GB ==
== ==
=============================================
Error: Input to reshape is a tensor with 8 values, but the requested shape has 334176
[[node gradients/Mean_11_grad/Reshape (defined at D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Original stack trace for 'gradients/Mean_11_grad/Reshape':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug,
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 468, in on_initialize
gpu_D_code_loss_gvs += [ nn.gradients (gpu_D_code_loss, self.code_discriminator.get_weights() ) ]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 172, in gradients
unconnected_gradients)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 684, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 340, in _MaybeCompile
return grad_fn() # Exit early
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 684, in <lambda>
lambda: grad_fn(op, *out_grads))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 255, in _MeanGrad
sum_grad = _SumGrad(op, grad)[0]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 216, in _SumGrad
grad = array_ops.reshape(grad, output_shape_kept_dims)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 195, in reshape
result = gen_array_ops.reshape(tensor, shape, name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 8377, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()
...which was originally created as op 'Mean_11', defined at:
File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 465, in on_initialize
gpu_D_code_loss = (DLoss(gpu_src_code_d_ones , gpu_dst_code_d) + \
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 454, in DLoss
return tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits), axis=[1,2,3])
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2312, in reduce_mean_v1
return reduce_mean(input_tensor, axis, keepdims, name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2372, in reduce_mean
name=name))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5781, in mean
name=name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()
Traceback (most recent call last):
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
return fn(*args)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
target_list, run_metadata)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 8 values, but the requested shape has 334176
[[{{node gradients/Mean_11_grad/Reshape}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 130, in trainerThread
iter, iter_time = model.train_one_iter()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 462, in train_one_iter
losses = self.onTrainOneIter()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 701, in onTrainOneIter
self.D_train (warped_src, warped_dst)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 545, in D_train
nn.tf_sess.run ([D_loss_gv_op], feed_dict={self.warped_src: warped_src, self.warped_dst: warped_dst})
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
run_metadata_ptr)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
run_metadata)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 8 values, but the requested shape has 334176
[[node gradients/Mean_11_grad/Reshape (defined at D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Original stack trace for 'gradients/Mean_11_grad/Reshape':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug,
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 468, in on_initialize
gpu_D_code_loss_gvs += [ nn.gradients (gpu_D_code_loss, self.code_discriminator.get_weights() ) ]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 172, in gradients
unconnected_gradients)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 684, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 340, in _MaybeCompile
return grad_fn() # Exit early
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 684, in <lambda>
lambda: grad_fn(op, *out_grads))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 255, in _MeanGrad
sum_grad = _SumGrad(op, grad)[0]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 216, in _SumGrad
grad = array_ops.reshape(grad, output_shape_kept_dims)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 195, in reshape
result = gen_array_ops.reshape(tensor, shape, name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 8377, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()
...which was originally created as op 'Mean_11', defined at:
File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 465, in on_initialize
gpu_D_code_loss = (DLoss(gpu_src_code_d_ones , gpu_dst_code_d) + \
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 454, in DLoss
return tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits), axis=[1,2,3])
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2312, in reduce_mean_v1
return reduce_mean(input_tensor, axis, keepdims, name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2372, in reduce_mean
name=name))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5781, in mean
name=name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()
Any ideas?
| 2hard
|
Title: Improve docs and checks for sink when using ASGI
Body: Currently there is no documentation or checks in `add_sink` that require an async function when using the asgi app.
This is a follow up of https://github.com/falconry/falcon/pull/1769#discussion_r531449969
cc @vytas7 | 1medium
|
Title: mallet falls asleep
Body: I'm using mallet-2.0.8 with Python 3 and Jupyter Notebook. Recently it worked just fine, but now it hangs while still showing it's working, even with a small dataset.
The problem is with the following function:
def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=1):
    """
    Compute c_v coherence for various number of topics
    Parameters:
    ----------
    dictionary : Gensim dictionary
    corpus : Gensim corpus
    texts : List of input texts
    limit : Max num of topics
    Returns:
    -------
    model_list : List of LDA topic models
    coherence_values : Coherence values corresponding to the LDA model with respective number of topics
    """
    coherence_values = []
    model_list = []
    for num_topics in range(start, limit, step):
        model = gensim.models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=num_topics, id2word=id2word)
        model_list.append(model)
        coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
        coherence_values.append(coherencemodel.get_coherence())
    return model_list, coherence_values | 1medium
|
Title: Ollama run error
Body: ```error
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\unsia\anaconda3\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 471, in start_terminal_interface
interpreter = profile(
^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 64, in profile
return apply_profile(interpreter, profile, profile_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 148, in apply_profile
exec(profile["start_script"], scope, scope)
File "<string>", line 1, in <module>
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\core.py", line 145, in local_setup
self = local_setup(self)
^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\local_setup.py", line 314, in local_setup
interpreter.computer.ai.chat("ping")
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 86, in run
self.load()
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "C:\Users\unsia\anaconda3\Lib\site-packages\litellm\llms\ollama.py", line 455, in ollama_completion_stream
raise e
File "C:\Users\unsia\anaconda3\Lib\site-packages\litellm\llms\ollama.py", line 433, in ollama_completion_stream
function_call = json.loads(response_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
```
This error message indicates a problem while trying to parse JSON data. Specifically, `json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)` means the JSON string was not properly terminated.
According to the stack trace, the error occurs where the LLM (Large Language Model) response is handled, specifically when trying to parse the response content as a JSON object. This may be due to one of the following reasons:
1. **The server returned data in an incorrect format**: the LLM service may have returned non-JSON data, making it impossible to parse.
2. **Data was lost or corrupted in transit**: packets may have been lost or corrupted while receiving data from the LLM service.
3. **A problem in the client code**: the code responsible for receiving and parsing the response may contain a logic error.
To diagnose the problem further, the following steps can be taken:
- Check the LLM service's logs to confirm it is working properly and returning correctly formatted JSON.
- Add debug output to the code to print the raw response content received, so its format can be checked against expectations.
- Confirm the network connection is stable, ruling out interrupted or corrupted data transfers caused by network issues.
- Review the client code to make sure all relevant request and response handling logic is correct.
| 1medium
|
Title: Can't seem to create a stream to get trade updates. (RuntimeError: This event loop is already running)
Body: So I am trying to get a stream of trade updates from the Alpaca API; I run the following code.
```
import alpaca_trade_api as tradeapi
import time
import datetime

api = tradeapi.REST('xxx','https://paper-api.alpaca.markets')
conn = tradeapi.StreamConn('xxx','xxx','https://paper-api.alpaca.markets')

account = api.get_account()

def run():
    @conn.on(r'trade_updates')
    async def on_msg(conn, channel, data):
        datasymbol = data.order['symbol']
        event = data.event
        print('Order executed for', datasymbol, data.order['side'], event, data.order['filled_qty'])

conn.run(['trade_updates'])
```
This is the error I get.
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
D:\Users\user\Anaconda3\envs\ml\lib\site-packages\alpaca_trade_api\stream2.py in run(self, initial_channels)
158 try:
--> 159 loop.run_until_complete(self.subscribe(initial_channels))
160 loop.run_forever()
D:\Users\user\Anaconda3\envs\ml\lib\asyncio\base_events.py in run_until_complete(self, future)
565 try:
--> 566 self.run_forever()
567 except:
D:\Users\user\Anaconda3\envs\ml\lib\asyncio\base_events.py in run_forever(self)
520 if self.is_running():
--> 521 raise RuntimeError('This event loop is already running')
522 if events._get_running_loop() is not None:
RuntimeError: This event loop is already running
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
<ipython-input-2-8a009c05ab30> in <module>
6 print('Order executed for',datasymbol, data.order['side'], event, data.order['filled_qty'])
7
----> 8 conn.run(['trade_updates'])
D:\Users\user\Anaconda3\envs\ml\lib\site-packages\alpaca_trade_api\stream2.py in run(self, initial_channels)
162 logging.info("Exiting on Interrupt")
163 finally:
--> 164 loop.run_until_complete(self.close())
165 loop.close()
166
D:\Users\user\Anaconda3\envs\ml\lib\asyncio\base_events.py in run_until_complete(self, future)
564 future.add_done_callback(_run_until_complete_cb)
565 try:
--> 566 self.run_forever()
567 except:
568 if new_task and future.done() and not future.cancelled():
D:\Users\user\Anaconda3\envs\ml\lib\asyncio\base_events.py in run_forever(self)
519 self._check_closed()
520 if self.is_running():
--> 521 raise RuntimeError('This event loop is already running')
522 if events._get_running_loop() is not None:
523 raise RuntimeError(
RuntimeError: This event loop is already running
```
Is there something wrong with my code? | 1medium
|
Title: Feeds RSS/Atom
Body: Implement RSS feeds
https://github.com/pythonhub/quokka/blob/master/quokka/core/views.py#L228
ContentFeed should return a specific Content's SubContents (useful for content blocks and for returning images in a gallery)
ChannelFeed should return all the contents inside that channel; if it is the homepage, it returns all the Contents.
| 1medium
|
Title: Bug: Litestar Logging in Python3.11: Unable to configure handler 'queue_listener'
Body: ### Description
I have been seeing this issue in our production Litestar application a few times. It happens after the app has been running for a few hours, and restarting it (temporarily) fixes the issue.
It looks related to #2469 and #2576, but in my case:
- I am using python 3.11
- it works locally, and only happens after some time
### Logs
```bash
{"event":"Traceback (most recent call last):
File \"/root/.nix-profile/lib/python3.11/logging/config.py\", line 573, in configure
handler = self.configure_handler(handlers[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File \"/root/.nix-profile/lib/python3.11/logging/config.py\", line 758, in configure_handler
result = factory(**kwargs)
^^^^^^^^^^^^^^^^^
File \"/opt/venv/lib/python3.11/site-packages/litestar/logging/standard.py\", line 29, in __init__
self.listener.start()
File \"/root/.nix-profile/lib/python3.11/logging/handlers.py\", line 1539, in start
t.start()
File \"/root/.nix-profile/lib/python3.11/threading.py\", line 957, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File \"/app/src/app/domain/weather_data/controllers/metrics_data.py\", line 135, in _fetch_data_for_metrics
data = RegionAdapter.get_metrics(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File \"/app/src/app/lib/log/__init__.py\", line 132, in get_logger_with_args
logger = get_logger()
^^^^^^^^^^^^
File \"/app/src/app/lib/log/__init__.py\", line 124, in get_logger
config.configure()
File \"/opt/venv/lib/python3.11/site-packages/litestar/logging/config.py\", line 234, in configure
config.dictConfig(values)
File \"/root/.nix-profile/lib/python3.11/logging/config.py\", line 823, in dictConfig
dictConfigClass(config).configure()
File \"/root/.nix-profile/lib/python3.11/logging/config.py\", line 580, in configure
raise ValueError('Unable to configure handler 'ValueError: Unable to configure handler 'queue_listener'","level":"info","timestamp":"2024-01-30T14:29:30.530002Z"}
```
### Litestar Version
2.4.1
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above) | 1medium
|
Title: How can we stream USD margined quarterly futures order book data using the ThreadedWebSocketManager?
Body: | 1medium
|
Title: Quality Issues
Body: Hi there,
Thank you for the amazing work. This library makes it so easy to perform OCR. However, it seems like the quality of the recognition results is not very good, especially with app/web screenshots or rendered text. The detector also fails to detect single letters. Here are some of the failure cases:
1. Detector failure of single letters:

2. Recognition failure of small time boxes (model=en):




3. Bad results on symbols (model=en):

4. Confusion between 0 and o (model=en):

5. Bad 'en' results on (model=en+ja):








## Extra Info:
I am using the default settings (with batch_size=32) on images of roughly height ~800px and width ~400px. Is there any setting I can tweak to improve the results, or will I have to train from scratch on my dataset?
Best regards | 1medium
|
Title: jwt_optional does not give None, instead the old token
Body: ```
@api.route("/")
class Payment(Resource):
@jwt_optional
@api.header("Authorization", "access token", required=False)
@api.doc(payment_fields)
@api.expect(payment_fields)
def post(self):
username = get_jwt_identity()
```
This endpoint is jwt optional, so the user can be identified either with an access token or with an email alone. The endpoint is used both for making a payment and for first registration.
I first logged in with an account, then logged out and cleared my browser cookies and local storage (which I use to store the access token). Then, when I try to register, my old username appears instead of `None`. But every time I restart my backend it goes back to `None` again. Is there a way to stop the caching? | 1medium
|
Title: how to set timezone in UnitTest
Body: I see there is a way to set the timezone in init(),
but how can I set the timezone in a unit test?
```python
import os
import pytest
from tortoise.contrib import test
from tortoise.contrib.test import finalizer, initializer
@pytest.fixture(scope="session", autouse=True)
def initialize_tests(request):
    db_url = os.environ.get("TORTOISE_TEST_DB", "sqlite://:memory:")
    initializer(['app.models.models'], db_url=db_url, app_label="models")
    request.addfinalizer(finalizer)
```
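Is something like the following the intended approach instead? This is only a sketch of what I pieced together: it assumes `Tortoise.init()` accepts the `use_tz` / `timezone` arguments and that pytest-asyncio is available for the async fixture:
```python
import pytest
from tortoise import Tortoise


@pytest.fixture(scope="session", autouse=True)
async def initialize_tests():
    # assumption: call Tortoise.init() directly instead of initializer(),
    # because init() exposes the use_tz / timezone settings
    await Tortoise.init(
        db_url="sqlite://:memory:",
        modules={"models": ["app.models.models"]},
        use_tz=True,
        timezone="Asia/Shanghai",
    )
    await Tortoise.generate_schemas()
    yield
    await Tortoise.close_connections()
```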
| 1medium
|
Title: Torchao `int4_weight_only` save error when passing layout
Body: ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.49.0.dev0
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.4.5
- Accelerate version: 1.4.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: 5,6
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.6.0+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100 80GB PCIe
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For more details, refer to [torchao](https://github.com/pytorch/ao/issues/1704).
### Expected behavior
Hi @SunMarc . Do you think we can handle this in transformers? | 1medium
|
Title: JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Body: Hello Abhishek
I try today with the latest changes but still is not working for me. Now I get "Internal Server Error" using FastAPI Swagger UI (see picture)

If I request a GET info, I get the name and the description of the model (see next picture)

The front end present the same Json decoder error as yesterday.

I tried in windows and linux again.
| 1medium
|
Title: voicemod-pro-crack | voicemod-pro | voicemod | voicemod-crack | voicemod-cracked | voicemod-crack-2024 | voicemod-free | voicemod-2024 | voicemod-pro-free | voicemod-free-crack | voicemod-cracked-2023 | voicemod-pro-crack-2023 | voicemod-2023-crack-free | voicemod-pro-crack-2024-free | voicemod-cracked-version-2024 | voicemod-pro-free-2023-download | voicemod-pro-crack-2024-free-2024 | voicemod-cracked-version-2024-free
Body: | 3misc
|
Title: OAuth2 password credential doesn't work in SwaggerUI
Body: In SwaggerUI which created by flask-restplus, just simple request is working well.
But, if I trying to use oauth2 with password credential then the credential popup doesn't show up.
To make sure this problem, I tried to use SwaggerUI from SwaggerUI project page with Swagger json; /swagger.json.
And it worked. But the other SwaggerUI that created by flask-restplus still doesn't work.
I think, something wrong in swagger template at flask-restplus on pip-py3.
Please check it.
| 1medium
|
Title: [mteamtp] (updating) The cookies provided by FlareSolverr are not valid
Body: **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| 1medium
|
Title: The TikTok web page can be logged in; I'm using v2rayNG in global mode with a VMESS server, but when extracting TikTok videos it loads for a long time and then reports an error
Body: 程序检测到上次运行可能没有正常结束,您的作品下载记录数据可能已经丢失!
数据文件路径:C:\Users\ASUS\Desktop\JDM软件\TikTokDownloader_V5.2_WIN\_internal\cache\IDRecorder.txt
检测到 IDRecorder 备份文件,是否恢复最后一次备份的数据(YES/NO): yes
IDRecorder 已恢复最后一次备份的数据,请重新运行程序!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
TikTokDownloader v5.2
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
项目地址: https://github.com/JoeanAmier/TikTokDownloader
项目文档: https://github.com/JoeanAmier/TikTokDownloader/wiki/Documentation
开源许可: GNU General Public License v3.0
当前已是最新正式版
代理 http://127.0.0.1:10809 测试成功
请选择 TikTokDownloader 运行模式:
1. 复制粘贴写入 Cookie
2. 扫码登录写入 Cookie
=========================
3. 终端命令行模式
4. Web API 接口模式
5. Web UI 交互模式
6. 服务器部署模式
=========================
7. 禁用自动检查更新
8. 禁用作品下载记录
9. 启用运行日志记录
3
读取缓存数据成功
请选择采集功能:
1. 批量下载账号作品(TikTok)
2. 批量下载账号作品(抖音)
3. 批量下载链接作品(通用)
4. 获取直播推流地址(抖音)
5. 采集作品评论数据(抖音)
6. 批量下载合集作品(抖音)
7. 批量采集账号数据(抖音)
8. 采集搜索结果数据(抖音)
9. 采集抖音热榜数据(抖音)
10. 批量下载收藏作品(抖音)
1
已选择批量下载账号作品(TikTok)模式
请输入 TikTok 主页 HTML 文件(夹)路径: C:\Users\ASUS\Desktop\软件\TikTokDownloader_V5.2_WIN\TK_html
开始处理第 1 个账号
开始提取作品数据
Traceback (most recent call last):
File "main.py", line 343, in <module>
File "main.py", line 321, in run
File "main.py", line 217, in main_menu
File "main.py", line 286, in compatible
File "main.py", line 65, in inner
File "main.py", line 226, in complete
File "src\main_complete.py", line 877, in run
File "src\main_complete.py", line 149, in account_acquisition_interactive_tiktok
File "src\main_complete.py", line 190, in _deal_account_works_tiktok
File "src\main_complete.py", line 354, in _batch_process_works
File "src\DataExtractor.py", line 83, in run
File "src\DataExtractor.py", line 107, in batch
File "src\DataExtractor.py", line 129, in extract_batch
File "src\DataExtractor.py", line 180, in extract_works_info
File "src\DataExtractor.py", line 186, in classifying_works
File "src\DataExtractor.py", line 221, in extract_image_info_tiktok
TypeError: 'types.SimpleNamespace' object is not subscriptable
[13668] Failed to execute script 'main' due to unhandled exception!
| 1medium
|
Title: [Bug]: Batch embedding inference is inconsistent with hf
Body: Below is the minimal reproduction script, you may firstly setup an embedding server of 'intfloat/multilingual-e5-large-instruct' on 8000 port.
[batch_embedding.txt](https://github.com/user-attachments/files/19429471/batch_embedding.txt)
### Your current environment
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.9
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
GPU 2: NVIDIA A40
GPU 3: NVIDIA A40
GPU 4: NVIDIA A40
GPU 5: NVIDIA A40
GPU 6: NVIDIA A40
GPU 7: NVIDIA A40
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 86
On-line CPU(s) list: 0-85
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 43
Socket(s): 1
Stepping: 6
BogoMIPS: 5187.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid md_clear arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.7 MiB (86 instances)
L1i cache: 2.7 MiB (86 instances)
L2 cache: 172 MiB (43 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-85
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-dali-cuda120==1.32.0
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvidia-pyindex==1.0.9
[pip3] onnx==1.15.0rc2
[pip3] optree==0.10.0
[pip3] pynvml==11.4.1
[pip3] pytorch-quantization==2.1.2
[pip3] pyzmq==25.1.2
[pip3] sentence-transformers==3.2.1
[pip3] torch==2.5.1
[pip3] torch-tensorrt==2.2.0a0
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.20.1
[pip3] transformers==4.49.0
[pip3] triton==3.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.4.post2.dev240+g7c4f9883.d20250321
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PHB PHB PHB PHB PHB PHB PHB 0-85 0 N/A
GPU1 PHB X PHB PHB PHB PHB PHB PHB 0-85 0 N/A
GPU2 PHB PHB X PHB PHB PHB PHB PHB 0-85 0 N/A
GPU3 PHB PHB PHB X PHB PHB PHB PHB 0-85 0 N/A
GPU4 PHB PHB PHB PHB X PHB PHB PHB 0-85 0 N/A
GPU5 PHB PHB PHB PHB PHB X PHB PHB 0-85 0 N/A
GPU6 PHB PHB PHB PHB PHB PHB X PHB 0-85 0 N/A
GPU7 PHB PHB PHB PHB PHB PHB PHB X 0-85 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NVIDIA_VISIBLE_DEVICES=all
CUBLAS_VERSION=12.3.4.1
NVIDIA_REQUIRE_CUDA=cuda>=9.0
CUDA_CACHE_DISABLE=1
TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX
NCCL_VERSION=2.19.3
NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
NVIDIA_PRODUCT_NAME=PyTorch
CUDA_VERSION=12.3.2.001
PYTORCH_VERSION=2.2.0a0+81ea7a4
PYTORCH_BUILD_NUMBER=0
CUDNN_VERSION=8.9.7.29+cuda12.2
PYTORCH_HOME=/opt/pytorch/pytorch
LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/cv2/../../lib64:/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NVIDIA_BUILD_ID=76438008
CUDA_DRIVER_VERSION=545.23.08
PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4
CUDA_HOME=/usr/local/cuda
CUDA_HOME=/usr/local/cuda
CUDA_MODULE_LOADING=LAZY
NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
NVIDIA_PYTORCH_VERSION=23.12
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
### 🐛 Describe the bug
When I use vLLM to create embeddings, the behavior differs oddly between batching and sending requests one by one.
My model is "intfloat/e5-mistral-7b-instruct", and my test data is a list of 100 strings.
When I set max-num-seqs=1, I can pass the test in https://github.com/vllm-project/vllm/commits/main/tests/models/embedding/language/test_embedding.py .
But when I use batch inference, the results are inconsistent with huggingface or sentence-transformers: only the first 20 embeddings stay consistent with hf, the others diverge with a cosine similarity of 0.98 or lower. Do you have any ideas to solve this batch inference problem? Thanks

### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 1medium
|
Title: append not working as expected
Body: I have a dataframe stored in mongodb using arctic and I would like to append to the existing dataframe, e.g. updating daily prices.
I've tried using version storage and the append() function; however, it gives me a "not implemented for handler" error:
" File "C:\Anaconda\lib\site-packages\arctic\store\version_store.py", line 496, in append
raise Exception("Append not implemented for handler %s" % handler)
Exception: Append not implemented for handler <arctic.store._pickle_store.PickleStore object at 0x09274AB0>"
I've tried register_library_type('dataframestore', PandasDataFrameStore) but received some other error.
Do you have an example of how to update existing dataframe/series data, or is there a rule of thumb?
| 1medium
|
Title: [🐛 BUG] Filtering on string in tables has a bad icon
Body: ### What went wrong? 🤔
Since [PR#2087](https://github.com/Avaiga/taipy/pull/2087), addressing #426, there's an icon in the string field indicating whether or not the filtering should take into account the casing.
This icon is ugly:

### Expected Behavior
A more explicit icon (which should not be difficult to find) is visible.
### Version of Taipy
' develop' branch at the time of creating this issue.
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 1medium
|
Title: mfdataset - ds.encoding["source"] to retrieve filename not valid key
Body: ### What happened?
Looking at the doc https://docs.xarray.dev/en/stable/generated/xarray.open_mfdataset.html
> preprocess ([callable()](https://docs.python.org/3/library/functions.html#callable), optional) – If provided, call this function on each dataset prior to concatenation. You can find the file-name from which each dataset was loaded in ds.encoding["source"].
I expected to be able to use ds.encoding["source"] in my preprocess function to retrieve the filename. However, I get a `KeyError: 'source'` (see the log output below).
### What did you expect to happen?
I expected the doc to be correct? unless I missed something trivial.
### Minimal Complete Verifiable Example
```Python
def preprocess_xarray_no_class(ds):
    filename = ds.encoding["source"]
    # add new filename variable with time dimension
    ds = ds.assign(filename=(("time",), [filename]))
    return ds


ds = xr.open_mfdataset(
    fileset,
    preprocess=preprocess_xarray_no_class,
    engine='h5netcdf',
    concat_characters=True,
    mask_and_scale=True,
    decode_cf=True,
    decode_times=True,
    use_cftime=True,
    parallel=True,
    decode_coords=True,
    compat="equals",
)
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
- [ ] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
...
1 def preprocess_xarray_no_class(ds):
----> 2 filename = ds.encoding["source"]
3 ds = ds.assign(
4 filename=("time",), [filename])
5 ) # add new filename variable with time dimension
KeyError: 'source'
```
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
python-bits: 64
OS: Linux
OS-release: 6.5.0-1023-oem
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C.UTF-8
LANG: en_IE.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.2
libnetcdf: 4.9.3-development
xarray: 2024.6.0
pandas: 2.2.2
numpy: 1.26.4
scipy: 1.13.1
netCDF4: 1.7.1
pydap: None
h5netcdf: 1.3.0
h5py: 3.11.0
zarr: 2.18.2
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: None
bottleneck: 1.3.8
dask: 2024.6.0
distributed: 2024.6.0
matplotlib: 3.9.0
cartopy: None
seaborn: 0.13.2
numbagg: 0.8.1
fsspec: 2024.6.0
cupy: None
pint: None
sparse: None
flox: 0.9.7
numpy_groupies: 0.11.1
setuptools: 70.0.0
pip: 24.0
conda: None
pytest: 8.2.2
mypy: 1.10.0
IPython: 7.34.0
sphinx: None
</details>
| 1medium
|
Title: Is polygon.historic_agg_v2 adjusted for dividends as well?
Body: I know that the polygon.historic_agg_v2 is adjusted for splits, but is it adjusted for dividends as well? If not, what is a good way to adjust both dividends and splits for historical prices? | 1medium
|
Title: Forwarding remote tracks received from one peer to all other peers .
Body: Hi @jlaine ,
First of all, thank you for this amazing framework.
I'm facing an issue with forwarding tracks received from one peer to all other peers, which are running in separate threads:
1. I'm receiving remote tracks and adding them to a global set from one peer connection
2. When a new peer joins, I add all the tracks to that peer with addTrack
3. The tracks are received on the remote side but are not playing and no frames arrive, except for one user, i.e. the first user to receive the remote track, which plays fine; the other peer connections receive no frames and the track stays muted and never unmutes. A simplified sketch of this forwarding logic is below.
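This is roughly what the forwarding looks like, as a simplified sketch with placeholder names (here I route each forwarded track through aiortc's `MediaRelay`, on the assumption that every subscriber needs its own proxy of the source track):
```python
from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaRelay

relay = MediaRelay()       # shared relay used to fan one source track out to many peers
remote_tracks = set()      # global set of tracks received from the publishing peer
subscriber_pcs = set()     # all peer connections that should receive the tracks


def on_new_publisher(pc: RTCPeerConnection):
    # step 1/2: collect every incoming remote track into the global set
    @pc.on("track")
    def on_track(track):
        remote_tracks.add(track)


def on_new_subscriber(pc: RTCPeerConnection):
    # step 3: forward all known tracks to the newly joined peer;
    # relay.subscribe() hands each consumer its own proxied copy of the track
    subscriber_pcs.add(pc)
    for track in remote_tracks:
        pc.addTrack(relay.subscribe(track))
```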
thanks a lot in advance
| 1medium
|
Title: Are there any suggestions for one to many queries?
Body: Are there any suggestions for one to many queries? | 1medium
|
Title: 404 error on a valid URL with the '/url' API
Body: Hi,
A few weeks ago, I uploaded a ipynb file to a web site available at http://www.logilab.org/file/187482/raw/quandl-data-with-pandas.ipynb and could see the notebook thanks to a simple http://nbviewer.ipython.org/url/www.logilab.org/file/187482/raw/quandl-data-with-pandas.ipynb
Unfortunately, I would like to show the notebook to someone and I was surprised to have a 404 error while the URL is still valid and the file has not been changed.
Is there a recent change about the '/url' API or is there a problem with my ipynb file?
Thanks,
Damien G.
| 1medium
|
Title: Portable Orange3-3.33.0. could not find pythonw.exe
Body: win10 64bit
Portable Orange
[Orange3-3.33.0.zip](https://download.biolab.si/download/files/Orange3-3.33.0.zip)
No installation needed. Just extract the archive and open the shortcut in the extracted folder.
Double-clicking the shortcut "Orange" (its target: %COMSPEC% /C start Orange\pythonw.exe -m Orange.canvas)
gives the error: could not find pythonw.exe, please check the name is correct.
I checked, and pythonw.exe has been in "G:\Orange3-3.33.0\Orange" the whole time.


| 1medium
|
Title: Create Daily Sentiment Reader for IEX?
Body: Hi all, an additional reader for IEX covering the endpoint linked below would be great. I tried creating something based on pandas_datareader.iex.daily.IEXDailyReader but couldn't get it to work. Ideally, getting a DataFrame back with daily sentiment between two dates (start, end) would be extremely useful.
Here is the API I'm trying to hit....
https://iexcloud.io/docs/api/#social-sentiment.
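For reference, this is roughly what I am doing by hand today; just a sketch with requests/pandas, where the exact endpoint path and the IEX_TOKEN environment variable are my assumptions based on the docs linked above:
```python
import os

import pandas as pd
import requests


def daily_sentiment(symbol, start, end, token=None):
    """Rough sketch: fetch daily social sentiment per date and return a DataFrame."""
    token = token or os.environ.get("IEX_TOKEN")
    rows = []
    for date in pd.date_range(start, end, freq="D"):
        # assumed endpoint layout, based on the social-sentiment docs linked above
        url = (
            "https://cloud.iexapis.com/stable/stock/"
            f"{symbol}/sentiment/daily/{date.strftime('%Y%m%d')}"
        )
        resp = requests.get(url, params={"token": token})
        resp.raise_for_status()
        data = resp.json()
        data["date"] = date
        rows.append(data)
    return pd.DataFrame(rows).set_index("date")
```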
Anyone have any suggestions? | 1medium
|
Title: Custom colors Graph dashboard doesn't work
Body: ## Mycodo Issue Report:
- Specific Mycodo Version: 6.1.2
- Chromium Version 60.0.3112.89 (Developer Build) Built on Ubuntu 14.04, running on Raspbian 9.4 (32-bit)
#### Problem Description
When enabling custom colors in the Graph dashboard, the color pickers won't show. I can't select any colors.
### Errors
- No errors
### Steps to Reproduce the issue:
How can this issue be reproduced?
1. Make graph -> save
2. collapse graph -> tick custom colors -> save
3. collapse graph -> no colorpicker or input fields
Not a biggie, but I thought I'd bring it to your attention. | 1medium
|
Title: How can i load pretrained model?
Body: I have a pretrained XLNet model in the Georgian language. Training has generated these files:
.
Now I want to load the pretrained XLNet model and, for one sentence, get its sentence_embedding vector.
Can you help me? | 1medium
|
Title: docker-compose and gunicorn_conf.py file preparation?
Body: Hi,
I want to pass custom Gunicorn and Uvicorn settings for the `workers` configuration. I have followed this [file](https://github.com/tiangolo/uvicorn-gunicorn-docker/blob/622470ec9aedb5da2cd2235bbca3f9e8e6256cdb/docker-images/gunicorn_conf.py#L21).
So I have added a `gunicorn_conf.py` file in my `/app/` folder. The directory structure is as follows:
```
fastapi
|-app
|-main.py
|- gunicorn_conf.py
|-docker-compose.yml
|-Dockerfile
```
The content of `gunicorn_conf.py`
```
import json
import multiprocessing
import os
workers_per_core_str = os.getenv("WORKERS_PER_CORE", "10")
max_workers_str = os.getenv("MAX_WORKERS")
use_max_workers = None
if max_workers_str:
use_max_workers = int(max_workers_str)
web_concurrency_str = os.getenv("WEB_CONCURRENCY", None)
host = os.getenv("HOST", "0.0.0.0")
port = os.getenv("PORT", "80")
bind_env = os.getenv("BIND", None)
use_loglevel = os.getenv("LOG_LEVEL", "info")
if bind_env:
use_bind = bind_env
else:
use_bind = f"{host}:{port}"
cores = multiprocessing.cpu_count()
workers_per_core = float(workers_per_core_str)
default_web_concurrency = workers_per_core * cores
if web_concurrency_str:
web_concurrency = int(web_concurrency_str)
assert web_concurrency > 0
else:
web_concurrency = max(int(default_web_concurrency), 2)
if use_max_workers:
web_concurrency = min(web_concurrency, use_max_workers)
accesslog_var = os.getenv("ACCESS_LOG", "-")
use_accesslog = accesslog_var or None
errorlog_var = os.getenv("ERROR_LOG", "-")
use_errorlog = errorlog_var or None
graceful_timeout_str = os.getenv("GRACEFUL_TIMEOUT", "120")
timeout_str = os.getenv("TIMEOUT", "120")
keepalive_str = os.getenv("KEEP_ALIVE", "5")
# Gunicorn config variables
loglevel = use_loglevel
workers = web_concurrency
bind = use_bind
errorlog = use_errorlog
worker_tmp_dir = "/dev/shm"
accesslog = use_accesslog
graceful_timeout = int(graceful_timeout_str)
timeout = int(timeout_str)
keepalive = int(keepalive_str)
# For debugging and testing
log_data = {
"loglevel": loglevel,
"workers": workers,
"bind": bind,
"graceful_timeout": graceful_timeout,
"timeout": timeout,
"keepalive": keepalive,
"errorlog": errorlog,
"accesslog": accesslog,
# Additional, non-gunicorn variables
"workers_per_core": workers_per_core,
"use_max_workers": use_max_workers,
"host": host,
"port": port,
}
print(json.dumps(log_data))
```
And content of `docker-compose.yml`
```
version: '3'
services:
web:
build:
context: .
volumes:
- ./app:/app
ports:
- "80:80"
#environment:
command: bash -c "uvicorn main:app --reload --host 0.0.0.0 --port 80"
# Infinite loop, to keep it alive, for debugging
# command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"
```
My server is not picking up the parameters from `gunicorn_conf.py`.
Am I missing something here?
| 1medium
|
Title: Seaborn Heatmap Documentation
Body: Hi all I was looking at your [heatmap example](https://seaborn.pydata.org/examples/spreadsheet_heatmap.html) and found that the `pandasDataframe.pivot()` function does not work locally as called.
I had to change
`flights = flights_long.pivot("month", "year", "passengers")`
to
`flights = flights_long.pivot(index="month", columns="year", values="passengers")`, specifying the kwargs.
I'm working through this because I am making an Advanced Visualization Cookbook for [Project Pythia](https://projectpythia.org/) and trying to provide an overview of all the different plotting libraries scientific python programmers have ever asked me about during plotting tutorials. If you'd like input or feedback on how your project is summarized or if you'd like a workflow to be featured in our interactive plotting chapter please let me know. | 0easy
|
Title: Document endpoint supporting both many=True and many=False
Body: I have a viewset that currently supports creation of a single or multiple items at once. It looks something like this:
```python
class FooViewSet(viewsets.ModelViewSet):
def create(self, request, *args, **kwargs):
if not isinstance(request.data, list):
return super().create(request, *args, **kwargs)
else:
serializer = self.get_serializer(data=request.data, many=True)
serializer.is_valid(raise_exception=True)
self.perform_bulk_create(serializer)
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
```
There are two ways this could be documented. Either by reusing the component schema with something like this:
```yaml
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/Foo'
- type: array
items:
$ref: '#/components/schemas/Foo'
```
<details>
<summary>schema.yaml reusing component schemas</summary>
```yaml
openapi: 3.0.3
info:
title: ''
version: 0.0.0
paths:
/api/foos/foo/:
post:
operationId: foo_foo_create
description: ''
requestBody:
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/FooRequest'
- type: array
items:
$ref: '#/components/schemas/FooRequest'
required: true
responses:
'201':
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/Foo'
- type: array
items:
$ref: '#/components/schemas/Foo'
description: ''
components:
schemas:
Foo:
type: object
properties:
id:
type: integer
some_field:
type: integer
required:
- id
- some_field
FooRequest:
type: object
properties:
some_field:
type: integer
required:
- some_field
```
</details>
Or perhaps one could define multiple component schemas:
```yaml
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/Foo'
- $ref: '#/components/schemas/FooList'
```
<details>
<summary>schema.yaml with multiple component schemas</summary>
```yaml
openapi: 3.0.3
info:
title: ''
version: 0.0.0
paths:
/api/foos/foo/:
post:
operationId: foo_foo_create
description: ''
requestBody:
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/FooRequest'
- $ref: '#/components/schemas/FooRequestList'
required: true
responses:
'201':
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/Foo'
- $ref: '#/components/schemas/FooList'
description: ''
components:
schemas:
Foo:
type: object
properties:
id:
type: integer
some_field:
type: integer
required:
- id
- some_field
FooList:
type: array
items:
$ref: '#/components/schemas/Foo'
FooRequest:
type: object
properties:
some_field:
type: integer
required:
- some_field
FooRequestList:
type: array
items:
$ref: '#/components/schemas/FooRequest'
```
</details>
I've tried the PolymorphicProxySerializer but that doesn't seem to work here.
This just generates an empty request:
```python
@extend_schema(
request=PolymorphicProxySerializer(
"DifferentRequests",
serializers=[
FooSerializer,
inline_serializer("ListSerializer", fields={"items": FooSerializer()}),
],
resource_type_field_name=None,
)
)
```
This just gives an error `'child' is a required argument`:
```python
@extend_schema(
request=PolymorphicProxySerializer(
"DifferentRequests",
serializers=[
FooSerializer, # FooSerializer.Meta.list_serializer_class == FooListSerializer
FooListSerializer, # FooListSerializer isinstance of ListSerializer
],
resource_type_field_name=None,
)
)
```
This just fails:
```python
@extend_schema(
request=PolymorphicProxySerializer(
"DifferentRequests",
serializers=[
FooSerializer,
FooSerializer(many=True), # extend_schema expecting type, not an instance
],
resource_type_field_name=None,
)
)
```
How can I have drf-spectacular generate the correct documentation for me in this case? I want to have an API that supports both single objects and lists of objects. | 2hard
|
Title: Version releases according to semantic versioning
Body: Hello,
The release version numbers of `tableauserverclient` follow `<major.minor>`. Would it be possible to use `<major.minor.patch>`, as advised by semantic versioning (https://semver.org/)?
Following your current releases, it should be easy to simply use 0 as the patch number, and keep the rest unchanged.
Thank you! | 0easy
|
Title: Handle Django multi-table inheritance
Body: I am trying to blend an instance of a child model with `commit=False`. The model is inherited from the `auth.User` model using multi-table inheritance. I get the following error:
```
Cannot generate a unique value for user_ptr
```
| 1medium
|
Title: [FEATURE] More time parameters on make_simpleseries
Body: Good morning,
currently make_simple_series only generates daily data, which does not fit my example where I need a higher granularity. Given that it uses pd.date_range to generate the dates, I want to add those parameters to the generation of the time dimension.
Also, it would be useful to be able to set the time as the index, or even to create the object as a pd.Series when requested. A rough sketch of what I mean is below.
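For illustration, here is roughly the shape of output I am after (plain pandas; the frequency and parameter values are only examples, not a proposal for the exact API):
```python
import numpy as np
import pandas as pd

# e.g. hourly granularity instead of the current daily default
idx = pd.date_range(start="2021-01-01", periods=48, freq="H")
values = np.random.default_rng(0).normal(size=len(idx))

# either a DataFrame with the timestamps as the index ...
df = pd.DataFrame({"value": values}, index=idx)
# ... or directly a pd.Series with a DatetimeIndex
series = pd.Series(values, index=idx, name="value")
```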
Thank you,
Gonxo | 1medium
|
Title: pip install d2l==1.0.0b0 Fails to Install on Linux Mint/Ubuntu 22.04
Body: Error Message:
Collecting d2l==1.0.0b0
Using cached d2l-1.0.0b0-py3-none-any.whl (141 kB)
Collecting jupyter (from d2l==1.0.0b0)
Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Requirement already satisfied: numpy in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (1.24.3)
Requirement already satisfied: matplotlib in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (3.7.1)
Requirement already satisfied: matplotlib-inline in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (0.1.6)
Requirement already satisfied: requests in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (2.31.0)
Requirement already satisfied: pandas in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (1.5.3)
Collecting gym==0.21.0 (from d2l==1.0.0b0)
Using cached gym-0.21.0.tar.gz (1.5 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
error in gym setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Thank you! | 1medium
|
Title: Create workflow management systems (WMS)
Body: We should industrialise our workflow management by developing a system following this [scheme](https://www.figma.com/board/s8D352vFbVGXMERi6XOPV6/Workflow-Management?node-id=0-1&node-type=canvas&t=pSjkemXnvBGdf2wO-0) | 2hard
|
Title: Can you do a start_background_task on eventlet async mode and never join/kill it?
Body: Before anything else, I would like to apologize in advance for the grammar and English mistakes I'm probably going to make. It's not my first language.
So I'm working on a project and so far everything is working perfectly. Thank you for this amazing extension. I have a quick question. In my application, a user can place a sort of "bet". When they do, the server has to wait 30 seconds while checking if there are any new players. If there are, the timer stops and the game begins. If not, the game is cancelled.
My question is: if I use the `socketio.start_background_task()` function on the eventlet server, can I get away with never doing a `join()` on the task? Will there be any memory leaks, or will I lose performance because of "sleeping" threads?
I'll include some code to further illustrate my question. (The `self.get_state()` is simply a way to check (using redis) if any other process, such as another server when the app scales, has changed the state). I'm storing pretty much everything to do with the game's state in a redis DB.
```
# On a route file.
game = Game()
@socket.on("bet-jackpot")
def bet_jackpot(amount):
sid = request.sid
# Omitting some database work and checks.
try:
game.handle_bet(user_id, sid, user_name, user_avatar, amount)
except RuntimeError:
emit("jck-error", "Can't bet now", room=sid)
return
# On the models file.
class Game:
# Also omitting some logic.
def handle_bet(self, id, sid, username, avatar, amount):
if self.get_state() == "R":
raise RuntimeError("Cant bet now")
if self.get_state() == "W":
self._add_player_to_game(id, sid, username, avatar, amount)
socket.start_background_task(target=self.start_game_loop)
elif self.get_state() == "O" or self.get_state() == "L":
self._add_player_to_game(id, sid, username, avatar, amount)
def start_game_loop(self):
self.set_state("O")
socket.emit("jck-state", "O")
socket.emit("jck-timer", 30.00)
self._start_one_player_timer(30.00) # Timer function that loops until there are 2 players or the 30 seconds end.
if self.get_player_amount() <= 1: # Cancelling game
socket.emit("jck-state", "W")
self.reset()
return
self.set_state("L")
socket.emit("jck-state", "L")
socket.emit("jck-timer", 25.00)
self._start_main_timer(25.00) # Same as above, but no checking for players.
# After, we do some simple db work.
```
If this function gets called every time a game is played, will the server eventually lag and lose performance because of the threads that are never joined? Or will the eventlet web server know how to "stop" a finished thread?
Thank you and sorry if it seems like a "noob" question, I'm not very experienced in multithreading and these types of apps.
| 2hard
|
Title: Failed to install Horovod
Body: **Environment:**
1. **Framework**: TensorFlow, PyTorch
2. **Framework version**:
- TensorFlow: 2.18.0
- PyTorch: 2.4.1
3. **Horovod version**: Attempting to install latest via pip
4. **MPI version**: Microsoft MPI (attempted with version 10)
5. **CUDA version**: 11.8
6. **NCCL version**: None
7. **Python version**: 3.11.9
8. **Spark / PySpark version**: N/A
9. **Ray version**: N/A
10. **OS and version**: Windows 10
11. **GCC version**: Not installed (Windows environment)
12. **CMake version**: 3.30
---
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? Yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? N/A
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
---
**Bug report:**
I'm encountering issues installing Horovod on Windows 10 with TensorFlow and PyTorch frameworks. Here’s a summary of the setup and error details:
**Steps Taken**:
1. Set environment variables:
```cmd
set HOROVOD_WITH_MPI=1
set HOROVOD_WITH_CUDA=1
set HOROVOD_CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
set MPI_HOME="E:/Program Files/Microsoft MPI"
set MPIEXEC_EXECUTABLE="E:\Program Files\Microsoft MPI\Bin\mpiexec.exe"
```
2. Attempted installation command:
```bash
pip install horovod --no-cache-dir
```
**Error**:
```plaintext
Could NOT find MPI (missing: MPI_CXX_FOUND)
CMake Error: The source directory "E:/Projects" does not appear to contain CMakeLists.txt.
```
**Additional Details**:
- Running `mpiexec --version` did not provide the expected version output.
- Verified CUDA 11.8 installation via `nvcc --version`.
- Using Microsoft MPI, but suspect compatibility issues with Horovod on Windows.
| 2hard
|
Title: [Question] Does RIFE interpolate using only the two frames next to each other, or is there something more during Video Interpolation?
Body: There is no discussion panel, so I'll just ask my question here.
Does RIFE interpolate using only the two frames next to each other? (Like the **Image Interpolation**.)
Or is there something more going on during **Video Interpolation**?
------
TL;DR
Actually, I just have an image sequence, and I want to know if there will be a difference between:
- Convert image sequence to video and then using **RIFE Image Interpolation**
- Use **RIFE Image Interpolation** and then convert the output image sequence into video
(I already wrote a shell script for iterating over the image sequence in a folder, so this won't be a problem.) | 3misc
|
Title: [ASK] Question on ndcg_at_k calculation
Body: ### Description
<!--- Describe your general ask in detail -->
Why are we using rank('first') to get the order of the ideal ranking instead of rank('min') or rank('average')?
https://github.com/microsoft/recommenders/blob/main/recommenders/evaluation/python_evaluation.py#L687
line 597
```python
df_idcg["irank"] = df_idcg.groupby(col_user, as_index=False, sort=False)[
    col_rating
].rank("first", ascending=False)
```
In this case, if there is a tie in the ratings (for example, items A, B, C, D with ratings 1, 0, 0, 0), using rank('first') gives irank = 1, 2, 3, 4. But shouldn't we take the tie into consideration, which would mean irank = 1, 2, 2, 2?
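To illustrate the difference with plain pandas (this only shows the rank behaviour, not the library code itself):
```python
import pandas as pd

ratings = pd.Series([1, 0, 0, 0])  # items A, B, C, D
print(ratings.rank(method="first", ascending=False).tolist())  # [1.0, 2.0, 3.0, 4.0]
print(ratings.rank(method="min", ascending=False).tolist())    # [1.0, 2.0, 2.0, 2.0]
```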
### Other Comments
| 1medium
|
Title: Clipped outputs
Body: For several of the large output results, the output is clipped. Can the max output length be increased?

| 0easy
|
Title: Update experiment instance status failed: the object has been modified
Body: /kind bug
**What steps did you take and what happened:**
I got an error when updating the experiment status in the experiment controller.
```
{"level":"info","ts":"2024-03-04T01:39:38Z","logger":"experiment-controller","msg":"Update experiment instance status failed, reconciler requeued","Experiment":{"name":"a10702550312415232282375","namespace":"heros-user"},"err":"Operation cannot be fulfilled on experiments.kubeflow.org \"a10702550312415232282375\": the object has been modified; please apply your changes to the latest version and try again"}
```
**What did you expect to happen:**
The code for the experiment status update is shown below. It's not supposed to raise an error, because it only updates the status even if the experiment object has been modified. I'm not sure whether my understanding is correct.
https://github.com/kubeflow/katib/blob/master/pkg/controller.v1beta1/experiment/experiment_controller.go#L237
```go
if !equality.Semantic.DeepEqual(original.Status, instance.Status) {
// assuming that only status change
err = r.updateStatusHandler(instance)
if err != nil {
logger.Info("Update experiment instance status failed, reconciler requeued", "err", err)
return reconcile.Result{
Requeue: true,
}, nil
}
}
```
**Environment:**
- Katib version: v0.16
- Kubernetes version: v1.25.13
- OS: Linux 5.15.47-1.el7.x86_64 x86_64
---
Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
| 1medium
|
Title: Pydantic >2.0 makes `prisma generate` crash
Body: Thank you for the awesome work on this project.
## Bug description
Prisma Generate fails when using Pydantic >2.0 because of a warning
## How to reproduce
* Step 1. In a project with an existing prisma.schema, install Prisma as well as Pydantic > 2.0.
* Step 2. Run `prisma generate`
Generation fails with the following error, and no Prisma classes are generated.
```
(.venv) monarch@Monarch-Legion:~/workspace/startedup/backend$ prisma generate
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Error:
Traceback (most recent call last):
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/generator.py", line 112, in run
self._on_request(request)
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/generator.py", line 170, in _on_request
self.generate(data)
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/generator.py", line 268, in generate
render_template(rootdir, name, params)
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/generator.py", line 309, in render_template
output = template.render(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/templates/client.py.jinja", line 42, in top-level template code
BINARY_PATHS = model_parse(BinaryPaths, {{ binary_paths.dict(by_alias=True) }})
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/typing_extensions.py", line 2498, in wrapper
warnings.warn(msg, category=category, stacklevel=stacklevel + 1)
pydantic.warnings.PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
```
## Expected behavior
Should generate the Prisma classes and not print an error
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
provider = "prisma-client-py"
interface = "asyncio"
recursive_type_depth = 5
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
model User {
id String @id @default(uuid())
is_admin Boolean @default(false)
email String @unique
password String @unique
created_at DateTime @default(now())
updated_at DateTime @updatedAt
GeneratedContent GeneratedContent[]
}
model GeneratedContent {
id String @id @default(uuid())
content String
user User @relation(fields: [user_id], references: [id])
user_id String
created_at DateTime @default(now())
updated_at DateTime @updatedAt
}
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: WSL on Windows
- Database: PostgreSQL
- Python version: Tested with 3.11.4 and 3.12
- Prisma version:
<!--[Run `prisma py version` to see your Prisma version and paste it between the ´´´]-->
```
prisma : 5.4.2
prisma client python : 0.11.0
platform : debian-openssl-1.1.x
expected engine version : ac9d7041ed77bcc8a8dbd2ab6616b39013829574
```
| 1medium
|
Title: Question About Long Running Mutations And Asynchronous Tasks
Body: Hello,
We're using Flask, Graphene and SQLAlchemy on our project. The API is currently served on our server using uWSGI and Nginx. Some of the mutations we have created trigger long-running jobs (around 2 to 5 minutes). We realized that:
- When one of the long-running jobs is triggered/running, no other HTTP request is handled by our Flask application. Is this a known limitation of Graphene / SQLAlchemy? Or are we doing something wrong?
- What would you say is the best way to manage this kind of long-running request from the end user's point of view? I'm thinking about immediately returning a message saying the job is triggered and then letting the job run in the background, but I'm not really sure how to manage such an asynchronous task in Python. A rough sketch of what I mean is below.
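Something along these lines (just a sketch; `run_long_job` stands in for our actual job, and I realize a proper task queue such as Celery or RQ is probably the more robust option):
```python
from concurrent.futures import ThreadPoolExecutor
import graphene

executor = ThreadPoolExecutor(max_workers=4)

def run_long_job(payload):
    # placeholder for the 2-5 minute job
    ...

class TriggerLongJob(graphene.Mutation):
    class Arguments:
        payload = graphene.String()

    ok = graphene.Boolean()
    message = graphene.String()

    def mutate(root, info, payload=None):
        # submit the work and return immediately instead of blocking the request
        executor.submit(run_long_job, payload)
        return TriggerLongJob(ok=True, message="Job started")
```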
Thank you
Alexis
| 1medium
|
Title: [RFC] Support for Vault authentication through EC2 metadata service
Body: **Is your feature request related to a problem? Please describe.**
I'm currently running an app on an EC2 instance, where I'd like to integrate dynaconf with Vault for my configurations. However, it seems dynaconf [currently only supports AWS authentication through a boto3 session](https://github.com/rochacbruno/dynaconf/blob/master/dynaconf/loaders/vault_loader.py), but not through the [EC2 metadata service](https://hvac.readthedocs.io/en/stable/usage/auth_methods/aws.html#ec2-metadata-service) that I'm using. It would be nice if we could add support for it.
**Describe the solution you'd like**
Provide optional configuration to accept an EC2 role and authenticate through the EC2 metadata service.
**Describe alternatives you've considered**
I'm currently using the HVAC client directly to work around the issue.
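For reference, this is roughly what my workaround looks like (a sketch only: the Vault address and role name are placeholders, and the exact hvac call may differ between versions):
```python
import hvac
import requests

VAULT_ADDR = "https://vault.example.com:8200"   # placeholder
VAULT_ROLE = "my-ec2-role"                      # placeholder

# fetch the instance identity signature from the EC2 metadata service
pkcs7 = requests.get(
    "http://169.254.169.254/latest/dynamic/instance-identity/pkcs7"
).text.replace("\n", "")

client = hvac.Client(url=VAULT_ADDR)
client.auth.aws.ec2_login(pkcs7=pkcs7, role=VAULT_ROLE)
assert client.is_authenticated()
```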
**Additional context**
An unrelated observation: even for boto3 session authentication, it seems to me that we need to pass `header_value` in the call to `client.auth.aws.iam_login()` as well, otherwise I receive the error `hvac.exceptions.InvalidRequest: error validating X-Vault-AWS-IAM-Server-ID header: missing header "X-Vault-AWS-IAM-Server-ID", on post`
| 1medium
|
Title: Vertical recognition
Body: Hi there,
I think there are a few issues with vertical words.
Shouldn't this function https://github.com/JaidedAI/EasyOCR/blob/master/easyocr/easyocr.py#L344 output `max_width` as well, in order to update `max_width = max(max_width, imgH)` in the next line? It seems that if there is a long vertical word, then it is capped by imgH and the recognition is usually wrong.
Also, I realized that images are cropped and resized here https://github.com/JaidedAI/EasyOCR/blob/master/easyocr/easyocr.py#L341 based on their ratio, which makes long image crops (that is, h >> w) very small (their width is squeezed a lot). Then, these resized images are rotated (90, 180 and 270) in https://github.com/JaidedAI/EasyOCR/blob/master/easyocr/easyocr.py#L344. I think the images should be rotated before they get resized. | 1medium
|
Title: Table loading-state behaves incorrectly
Body: Using the same example as defined in the server test https://github.com/plotly/dash-table/blob/dev/tests/cypress/dash/v_data_loading.py, typing into the input causes the focus to be moved back to the table's cell in `dash>=1.3.0`.
The table should not steal the focus away from the input, yet it should refresh/re-render itself correctly and apply focus correctly if the table is selected, when the `loading_state` switches. | 1medium
|
Title: [Bug]: When using some filter operators, a `datasets.builder.DatasetGenerationError: An error occurred while generating the dataset` error is raised; I would like to know the cause, thanks.
Body: ### Before Reporting
- [X] I have pulled the latest code of main branch to run again and the bug still existed.
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) / [README_ZH](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template)
### Search before reporting
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs.
### OS
ubuntu
### Installation Method
source
### Data-Juicer Version
v0.1.2
### Python Version
3.10.11
### Describe the bug


### To Reproduce
I only edited the analyser.yaml file; also, the input data is a folder (which contains json files as well as txt and .sh files).
### Configs
_No response_
### Logs
_No response_
### Screenshots
_No response_
### Additional
_No response_ | 1medium
|
Title: Field Directive Is Not "Inherited" From Interface
Body: I was adding query complexity analysis to [graphql-utilities](https://github.com/melvinkcx/graphql-utilities) when I came across this strange behavior.
In the following schema, the `@cost` directive of `createdAt` in `TimestampedType` is not found in `Announcement -> createdAt`.
```
interface TimestampedType {
createdAt: String @cost(complexity: 2)
updatedAt: String @cost(complexity: 2)
}
type Announcement implements TimestampedType {
createdAt: String
updatedAt: String
announcementId: String! @cost(complexity: 4)
title: String
text: String
}
```
These are screenshots from my debugger:
1. `<AnnouncementField> -> ast_node -> fields -> createdAt`:

2. `<AnnouncementField> -> interfaces[0] -> ast_node -> fields -> createdAt`:

As I couldn't find any relevant answer in the spec, I'm not certain whether the directive is supposed to be "inherited" from the interface. However, from what I observed in `graphql-js`, inheriting the directive seems to be the correct behavior.
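In the meantime I am considering a fallback along these lines in graphql-utilities (just a sketch assuming graphql-core 3 style type objects; `find_field_directive` is my own helper name, not an existing API):
```python
def find_field_directive(parent_type, field_name, directive_name):
    """Look for the directive on the field itself first, then fall back to the
    interfaces the type implements (sketch, not library code)."""
    for type_ in (parent_type, *getattr(parent_type, "interfaces", ())):
        field = type_.fields.get(field_name)
        if field is None or field.ast_node is None:
            continue
        for directive in field.ast_node.directives:
            if directive.name.value == directive_name:
                return directive
    return None
```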
I appreciate any answer or help, thanks! | 1medium
|
Title: Update template tag documentation
Body: There are undocumented template tags, as noted in #48
Ideally the [documentation](https://django-plotly-dash.readthedocs.io/en/latest/template_tags.html) should be extended to cover them.
| 1medium
|
Title: Character error rate stays at 100% and the loss decreases very slowly
Body: Hi everyone, I'm running python train_mspeech.py on Google Colab, and I keep getting a character error rate of 100% on both the train and dev sets; also, the loss decreases very slowly once it reaches about 210. Is this normal?
```
*[Test result] Speech recognition dev set character error rate: 100.0%
[message] epoch 0. Have train datas 11000+
Epoch 1/1
500/500 [========================] - 145s 291ms/step - loss: 209.9455
Test progress: 0/4
*[Test result] Speech recognition train set character error rate: 100.0%
Test progress: 0/4
*[Test result] Speech recognition dev set character error rate: 100.0%
[message] epoch 0. Have train datas 11500+
Epoch 1/1
500/500 [=====================] - 144s 288ms/step - loss: 210.5319
Test progress: 0/4
*[Test result] Speech recognition train set character error rate: 100.0%
Test progress: 0/4
*[Test result] Speech recognition dev set character error rate: 100.0%
[message] epoch 0. Have train datas 12000+
Epoch 1/1
500/500 [======================] - 144s 288ms/step - loss: 209.1676
Test progress: 0/4
*[Test result] Speech recognition train set character error rate: 100.0%
Test progress: 0/4
*[Test result] Speech recognition dev set character error rate: 100.0%
[message] epoch 0. Have train datas 12500+
Epoch 1/1
500/500 [=========================] - 143s 285ms/step - loss: 209.7521
Test progress: 0/4
*[Test result] Speech recognition train set character error rate: 100.0%
Test progress: 0/4
*[Test result] Speech recognition dev set character error rate: 100.0%
[message] epoch 0. Have train datas 13000+
Epoch 1/1
227/500
8-1.2065
```
 | 1medium
|
Title: Torchmetrics Accuracy issue when dont shuffle test data.
Body: ### Bug description
I am creating a CNN model to recognize dogs and cats. I trained it, and when I evaluate its accuracy by hand it reaches 80-85% accuracy on unseen data.
But when I try to use torchmetrics' Accuracy to calculate my accuracy, for some reason I get wrong accuracy values. Let me explain:
The code of the model (I use Python, torch, and Lightning to optimize the model and code):
```
import lightning as L
import torch
import torchmetrics
import torchvision
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision import transforms, datasets
from torchvision.transforms import ToTensor
from CustomDataset import CustomDataset
class Model(L.LightningModule):
def __init__(self, batch_size, learning_rate, num_classes):
super(Model, self).__init__()
self.save_hyperparameters()
## HERE GOES MODEL LAYERS CRITERION etc
self.accuracy = torchmetrics.Accuracy(num_classes=2, average='macro', task='multiclass')
self.test_transform = transforms.Compose([
transforms.Resize((200, 200)), # Resize images to 256x256
transforms.ToTensor(), # Convert images to PyTorch tensors
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) # Normalize images
])
self.transform = transforms.Compose([
transforms.RandomResizedCrop(200), # Randomly crops and resizes images to 224x224
transforms.RandomHorizontalFlip(p=0.5), # Randomly flips images horizontally
transforms.RandomRotation(15), # Resize images to 256x256
transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
transforms.ToTensor(), # Convert images to PyTorch tensors
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) # Normalize images
])
def forward(self, image):
image = F.relu(self.conv1(image))
image = self.pool(image)
image = F.relu(self.conv2(image))
image = self.pool(image)
image = F.relu(self.conv3(image))
image = self.pool(image) # Output is now (128, 25, 25)
image = torch.flatten(image, 1) # Flatten the output
image = F.relu(self.fc1(image))
image = self.fc2(image)
return image
def training_step(self, batch, batch_idx):
images, labels = batch
predictions = self(images) # Forward pass
loss = self.criterion(predictions, labels) # Compute the loss
predicted_classes = torch.argmax(F.softmax(predictions, dim=1), dim=1)
predictions_softmax = F.softmax(predictions, dim=1)
acc = self.accuracy(predictions_softmax, labels)
self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
self.log('train_acc', acc, on_step=True, on_epoch=True, prog_bar=True)
return loss # Returning the loss for backpropagation
def validation_step(self, batch, batch_idx):
images, labels = batch
predictions = self(images)
loss = self.criterion(predictions, labels)
predicted_classes = torch.argmax(F.softmax(predictions, dim=1), dim=1)
predictions_softmax = F.softmax(predictions, dim=1)
acc = self.accuracy(predictions_softmax, labels)
self.log('val_loss', loss, prog_bar=True)
self.log('val_acc', acc, prog_bar=True)
return loss
def test_step(self, batch, batch_idx):
images, labels = batch
predictions = self(images) # Forward pass
loss = self.criterion(predictions, labels) # Compute the loss
predicted_classes = torch.argmax(F.softmax(predictions, dim=1), dim=1)
predictions_softmax = F.softmax(predictions, dim=1)
acc = self.accuracy(predictions_softmax, labels)
self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
self.log('train_acc', acc, on_step=True, on_epoch=True, prog_bar=True)
return loss # Returning the loss for backpropagation
# images, labels = batch
# predictions = self(images)
# loss = self.criterion(predictions, labels)
# predicted_classes = torch.argmax(F.softmax(predictions, dim=1), dim=1)
# predictions_softmax = F.softmax(predictions, dim=1)
# acc = self.accuracy(predictions_softmax, labels)
# real_step_acc = (labels == predicted_classes).sum() / self.batch_size
# self.log('test_loss', loss, prog_bar=True)
# self.log('real_test_acc', real_step_acc, prog_bar=True)
# self.log('test_acc', acc, prog_bar=True)
# return loss
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.parameters(), lr=self.learning_rate, momentum=0.9)
return optimizer
def train_dataloader(self):
# Set up and return the training DataLoader
filepath_train = "dataset/test/"
train_dataset = datasets.ImageFolder(root=filepath_train, transform=self.transform)
train_loader = DataLoader(train_dataset, batch_size=self.batch_size, shuffle=False, num_workers=16)
return train_loader
def test_dataloader(self):
# Set up and return the training DataLoader
filepath_train = "dataset/test/"
test_dataset = datasets.ImageFolder(root=filepath_train, transform=self.transform)
test_loader = DataLoader(test_dataset, batch_size=self.batch_size, shuffle=True, num_workers=16)
return test_loader
def val_dataloader(self):
# Set up and return the validation DataLoader
filepath_train = "dataset/val/"
val_dataset = datasets.ImageFolder(root=filepath_train, transform=self.test_transform)
val_loader = DataLoader(val_dataset, batch_size=self.batch_size, shuffle=False, num_workers=16)
return val_loader
```
Output is like this:
train_acc_epoch 0.7635096907615662
real_test_acc 0.7901701927185059
test_acc 0.39825108647346497
I compute the real test accuracy like this:
```
predictions_softmax = F.softmax(predictions, dim=1)
acc = self.accuracy(predictions_softmax, labels)
real_step_acc = (labels == predicted_classes).sum() / self.batch_size
```
So the problem is:
When I run the testing, the test accuracy inside the test_step method is 40%, but the real test accuracy that I compute myself is 80-85%. So here is what I tried: when I enable shuffling on the test data (I know it is bad practice, but it was part of the debugging), the torchmetrics accuracy becomes correct! It outputs 80-85% accuracy.
So why does shuffling change things? I think it might also be some kind of bug, or maybe I have an issue somewhere.
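One thing I noticed while debugging (I am not sure this is the actual cause): with `shuffle=False` the `ImageFolder` returns the samples class by class, so most batches contain only one of the two classes, while `average='macro'` averages over both classes for every batch. A tiny sketch of what I mean:
```python
import torch
import torchmetrics

metric = torchmetrics.Accuracy(task="multiclass", num_classes=2, average="macro")

preds = torch.tensor([[0.9, 0.1], [0.8, 0.2]])  # both predicted as class 0, correctly
labels = torch.tensor([0, 0])                   # unshuffled batch: only class 0 present

# depending on the torchmetrics version, the absent class may be counted as 0
# and drag the per-batch macro average well below the plain accuracy of 1.0
print(metric(preds, labels))
```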
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | 1medium
|
Title: Remove dependency on scikit-learn's six
Body: We don't support Python 2 anymore, so we can remove this anyhow. | 1medium
|
Title: Can't use it at all, even the demo throws errors
Body: **Describe the bug 描述bug**
A clear and concise description of what the bug is.
**To Reproduce 复现方法**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Screenshots 截图**
If applicable, add screenshots to help explain your problem.
**Tools or Programming Language 使用的工具或编程语言**
Describe in detail the GPT tool or programming language you used to encounter the problem
**Additional context 其他内容**
Add any other context about the problem here.
| 1medium
|
Title: Violin plot graph export after new widget-base
Body: After biolab/orange-widget-base/pull/208 was merged, Violin plot graph export does not work.
The problem should be looked at.
Also, we should add a widget test that executes graph saving for all widgets which allow that. Just to see if anything crashes... | 1medium
|
Title: [Type 1] Implement Tableau Cloud-specific requests for the Subscriptions endpoint
Body: ## Description:
The Subscriptions endpoint works somewhat differently for Tableau Cloud & Tableau Server, in that the subscription schedule needs to be defined as part of the request for Tableau Cloud. As of now, TSC only supports the request format for the Server endpoint, where a schedule id needs to be provided. This feature would implement the Tableau Cloud request format alongside the Tableau Server format. The subscriptions REST API documentation: [https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref_subscriptions.htm#tableau-cloud-request](url)
A "quick-and-dirty" implementation could allow the user to specify in the SubscriptionItem definition that instead of schedule_id, they'd like to set all the Tableau Cloud-specific fields. However, if it is expected that more API methods will have Tableau Server & Cloud versions, it could be beneficial to automatically detect Tableau Cloud vs Tableau Server during the construction of the Server object and pick the correct endpoint specs accordingly. TSC doesn't currently seem to have a way to distinguish between requests made to Tableau Cloud & Tableau Server, so this would need to be added first, potentially by checking the server URL for (online.tableau.com). | 1medium
|
Title: About the problem of falling mAP when learning YOLOV8
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
With the help of the answers to my previous questions, I finished training the YOLOv8x.pt version with batch=8 and imgsz=1920. Looking at the training results, the mAP metric started at 80, went up to 87.2, and then finally dropped to 56. Should I conclude that the training was not good in this case?
### Additional
_No response_ | 1medium
|
Title: Unable to deploy a project with Werkzeug >= 2.0
Body:
## Context
After creating a new virtual environment and installing my project dependencies, including Zappa 0.54.1, I am no longer able to deploy my project.
My Django project does not use Werkzeug, but Werkzeug 2.0.2 gets installed by Zappa. After downgrading to Werkzeug<2.0.0, I am able to deploy my project again.
Updating Zappa to 0.54.1 from an older version that installed Werkzeug 1.0.1 still works because the version of Werkzeug is left unchanged.
I have confirmed this behavior with both Python 3.6 and Python 3.8 and with MacOS 10.15.7 and MacOS 12.1.
## Expected Behavior
Your updated Zappa deployment is live!:
## Actual Behavior
Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 502 response code.
Digging into the CloudWatch logs, I see the error as described in #1087.
## Possible Fix
Specify a specific version of Werkzeug in Zappa dependencies. Werkzeug 1.0.1 works for me.
## Steps to Reproduce
Install Zappa 0.54.1 into a new virtual environment. Attempt to deploy your project.
| 1medium
|
Title: Validate button shows incomprehensible SyntaxError when the solution trips the timeout limit
Body: ### Operating system
Arch Linux
### `nbgrader --version`
```
Python version 3.11.5 (main, Sep 2 2023, 14:16:33) [GCC 13.2.1 20230801]
nbgrader version 0.9.1
```
### `jupyterhub --version` (if used with JupyterHub)
4.0.2
### `jupyter notebook --version`
7.0.6
### Expected behavior
Nbgrader should not display a SyntaxError when some internal operation fails.
### Actual behavior
When a student tries to validate an assignment that gets stuck due to an infinite loop, nbgrader shows this error:
```
Validation failed
Cannot validate: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
```

### Steps to reproduce the behavior
Create an assignment with an infinite loop and click on "Validate":
```
while True:
pass
``` | 1medium
|
Title: Requests to the pywsgi WSGIServer are still queued, not handled in parallel
Body: * gevent version: 20.6.2
* Python version: Please be as specific as possible: "pyenv Python3.7.5 downloaded from python.org"
* Operating System: Please be as specific as possible: "Centos7.4"
### Description:
Requests to the pywsgi WSGIServer are still handled one at a time (queued), not in parallel.
1. I use WSGIServer to wrap a Flask web server, and I put this server in a Process.
2. I create this Process in a thread.
3. As you recommend, I put this code before I import everything, but the requests are still processed one by one. What can I do to solve this?
```
from gevent import monkey
monkey.patch_all()
```
The main code for this web server is below:
```
class JudgeHTTPProxy(Process):
def create_flask_app(self):
try:
from flask import Flask, request
from flask_compress import Compress
from flask_cors import CORS
from flask_json import FlaskJSON, as_json, JsonError
except ImportError:
raise ImportError('Flask or its dependencies are not fully installed, '
'they are required for serving HTTP requests.'
'Please use "pip install -U flask flask-compress flask-cors flask-json" to install it.')
client = ConcurrentJudgeClient(self.args)
app = Flask(__name__)
CORS(app)
FlaskJSON(app)
@app.route(self.args.url, methods=['POST'])
def _judge():
some logics
return app
def run(self):
app = self.create_flask_app()
server = WSGIServer(('0.0.0.0', self.args.http_port), app, log=None)
server.serve_forever()
```
### What I've run:
In `main.py`, it will create a thread, and the thread will create this Process.
```python
python main.py
```
| 1medium
|
Title: Cannot replicate LSTUR results for MIND large test
Body: Hello, I cannot replicate the results of the LSTUR model on the MIND test set. I used the provided scripts to generate `embedding.npy`, `word_dict.pkl` and `uid2index.pkl` for the test set because they are not provided with MINDlarge_utils.zip.
I use the last lines of code in lstur_MIND.ipynb to make predictions on the test set, but the metric results on the validation and test sets are very different.
For example, I obtained
`group_auc: 0.65, mean_mrr: 0.31, ndcg@5: 0.34, ndcg@10: 0.40` in validation and `auc: 0.5075, mrr: 0.2259, ndcg@5: 0.2309, nDCG@10: 0.2868` in test set, with the model trained for 10 epochs. | 1medium
|
Title: [QUESTION] How can I set num_workers in the underlying torch module?
Body: When running the `score` function from a `ForecastingAnomalyModel` I am getting this warning:
```
[python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:425): The 'predict_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=15` in the `DataLoader` to improve performance.
```
It seems to be linked to PyTorch Lightning; is there any way I can pass the `num_workers` argument?
| 1medium
|