text | labels
---|---|
Title: TensorBoardLogger records far more epochs than were actually run
Body: ### Bug description
I used the following code to log the metrics, but I found that the epoch count recorded by the TensorBoard logger is far higher than it should be:
```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("train_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("valid_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

pl.Trainer(..., logger=TensorBoardLogger(save_dir='store', version=log_path), ...)
```
In the Trainer configuration I set max_epochs=10000, but in the logger I got epoch values beyond 650k:
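A hedged guess at the cause (an assumption, not a confirmed diagnosis): with `on_epoch=True`, the metric is recorded against the trainer's global step, so the x-axis in TensorBoard counts optimizer steps rather than epochs. Logging the epoch explicitly makes the recorded value unambiguous:

```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("train_loss", loss, prog_bar=True, on_epoch=True)
    # Log the epoch itself so TensorBoard shows it directly.
    self.log("epoch", float(self.current_epoch), on_step=False, on_epoch=True)
    return loss
```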


### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("train_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("valid_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

pl.Trainer(..., logger=TensorBoardLogger(save_dir='store', version=log_path), ...)  # you can use any path you like
```
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0): 2.1.3
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0): 2.1.2
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source): pip
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | 1medium
|
Title: [Bug] Performance decay in XTTS-v2
Body: ### Describe the bug
I noticed a significant decrease in quality and audio similarity when using the Hugging Face Space demo for XTTS v2.0.3; before that version, the quality and the similarity between input and output audio were miles better.
### To Reproduce
Use the Hugging Face Space to compare XTTS 2.0.3 with older versions.
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
Hugging face demo for xtts v2.0.3
```
### Additional context
_No response_ | 1medium
|
Title: [BUG] False positive error message if the hooks file itself raises `ModuleNotFoundError`
Body: | 1medium
|
Title: ASGI: iterating over `req.stream` hangs for chunked requests
Body: When reading a streaming request without `Content-Length` (i.e., using "chunked" `Transfer-Encoding`), iterating over `req.stream` hangs in the case the request payload consists of more than one chunk.
It looks like this is caused by the receive loop implementation in `asgi/stream.py`. It only checks for the number of remaining bytes, and disregards the `more_body: False` hint in the case of an empty body. The logic to account for it is in place, but it is not reached due to the `continue` clause for an empty body chunk in the following event sequence:
```
RECV {'type': 'http.request', 'body': b'123456789abcdef\n', 'more_body': True}
RECV {'type': 'http.request', 'body': b'123456789abcdef\n', 'more_body': True}
RECV {'type': 'http.request', 'body': b'', 'more_body': False}
```
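A hypothetical sketch of that loop shape (not Falcon's actual code) makes the failure concrete:

```python
# Hypothetical receive loop, illustrating the report above:
async def read_all(receive, bytes_remaining):
    buffer = b''
    while bytes_remaining > 0:
        event = await receive()
        body = event.get('body', b'')
        if not body:
            continue  # BUG: skips the more_body check for the final empty chunk
        buffer += body
        bytes_remaining -= len(body)
        if not event.get('more_body', False):
            break  # never reached when the terminating chunk has an empty body
    return buffer
```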
Eventually, the client (I was using `httpx`) times out, and an `http.disconnect` event is received which is handled correctly. But the request has already failed (timeout) from the client's perspective at this point. | 1medium
|
Title: Transfer Learning
Body: I'm new to this so I apologize a head of time if this is the wrong way to ask this, please correct me if it is.
I have a model that I have trained, but I have decided that I want to change the output layer. Is there a way to change the output layer without completely retraining the whole model. | 1medium
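A minimal sketch of the usual approach, assuming a PyTorch-style model (the question does not say which framework is used):

```python
import torch.nn as nn
from torchvision.models import resnet18  # stand-in for the already-trained model

model = resnet18()  # pretend this is your trained model

# Freeze everything that was already trained...
for p in model.parameters():
    p.requires_grad = False

# ...and replace only the output layer; just this layer needs training.
model.fc = nn.Linear(model.fc.in_features, 10)
``` | 1medium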
|
Title: how set_transform affects batch size?
Body: ### Describe the bug
I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this:
```
def prepare_dataset(batch):
    input_features = processor(batch["audio"], sampling_rate=16000).input_features[0]
    input_length = len(input_features)
    labels = processor.tokenizer(batch["text"], padding=False).input_ids
    batch = {
        "input_features": [input_features],
        "input_length": [input_length],
        "labels": [labels]
    }
    return batch

train_ds.set_transform(prepare_dataset)
val_ds.set_transform(prepare_dataset)
```
After this, I also had to change the DataCollatorCTCWithPadding class like this:
```
@dataclass
class DataCollatorCTCWithPadding:
    processor: Wav2Vec2BertProcessor
    padding: Union[bool, str] = True

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # Separate input_features and labels
        input_features = [{"input_features": feature["input_features"][0]} for feature in features]
        labels = [feature["labels"][0] for feature in features]
        # Pad input features
        batch = self.processor.pad(
            input_features,
            padding=self.padding,
            return_tensors="pt",
        )
        # Pad and process labels
        label_features = self.processor.tokenizer.pad(
            {"input_ids": labels},
            padding=self.padding,
            return_tensors="pt",
        )
        labels = label_features["input_ids"]
        attention_mask = label_features["attention_mask"]
        # Replace padding with -100 to ignore these tokens during loss calculation
        labels = labels.masked_fill(attention_mask.ne(1), -100)
        batch["labels"] = labels
        return batch
```
But now a strange thing is happening: no matter how much I increase the batch size, GPU VRAM usage does not change, while the total number of steps in the progress bar (logging) does change. Is this normal, or have I made a mistake?
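A quick diagnostic sketch (names are taken from the snippets above; the batch size of 8 is an illustrative assumption): inspect what one batch from the dataloader actually contains. If the first dimension is not the configured batch size, the transform/collator pair is collapsing the batch:

```python
from torch.utils.data import DataLoader

loader = DataLoader(train_ds, batch_size=8,
                    collate_fn=DataCollatorCTCWithPadding(processor))
batch = next(iter(loader))
# The first dimension should be 8 if batching works as expected.
print(batch["input_features"].shape)
```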
### Steps to reproduce the bug
I can share my code if needed.
### Expected behavior
The set_transform function should be applied to as many examples as the batch size specifies, with the result given to the model as one batch.
### Environment info
all updated versions | 1medium
|
Title: Geocaching: unable to add integration
Body: ### The problem
After updating HassOS from 14.2 to 15, the Geocaching integration could not be loaded. I deleted the integration and tried to add it again; however, I always get an error message after entering the credentials and allowing the exchange of information.
### What version of Home Assistant Core has the issue?
2025.3.3
### What was the last working version of Home Assistant Core?
2025.3.3
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Geocaching
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/geocaching/
### Diagnostics information
No information available as the integration can not be added.
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
 | 1medium
|
Title: kernel keeps dying on Jupyter notebook and inference is slow / no such problem if run on Gradio
Body: I am trying out and validating translations from Bulgarian to English. The problem is that on Jupyter, the kernel keeps dying and is rather slow when translating. However, if ran on gradio, things happen very quickly.
This is the code I am using, where `article` is some article from a dataframe.
```
start_time = time.time()

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer = NllbTokenizerFast.from_pretrained("facebook/nllb-200-distilled-600M")

translator = pipeline('translation',
                      model=model,
                      tokenizer=tokenizer,
                      src_lang='bul_Cyrl',
                      tgt_lang='eng_Latn')

article = df.truncated[3]
output = translator(article, max_length=512)

end_time = time.time()

output = output[0]['translation_text']
result = {'inference_time': end_time - start_time,
          'result': output}
result
```
I am on an M1 Mac. Translations on Jupyter range from 15 to 35 seconds, whereas on Gradio from 4 to 10 seconds max. What could be the cause, given that I am even using NllbTokenizerFast? Is there a way to improve this?
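One hedged thing to try (an assumption: the Gradio app may be placing the model on the GPU while the notebook defaults to CPU) is to put the pipeline on Apple's MPS backend explicitly, reusing the names from the snippet above:

```python
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
translator = pipeline('translation',
                      model=model,
                      tokenizer=tokenizer,
                      src_lang='bul_Cyrl',
                      tgt_lang='eng_Latn',
                      device=device)
``` | 1medium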
|
Title: README.rst contains non-ASCII characters, failing installation
Body: ```powershell
> pip install sandman2
Collecting sandman2
Using cached https://files.pythonhosted.org/packages/34/43/65317a5a01c16d494a68b37bc21d9cbe17c3fd089b76835fdbda60f1973b/sandman2-1.2.0.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\34357\AppData\Local\Temp\pip-install-1toj7xsk\sandman2\setup.py", line 19, in <module>
LONG_DESCRIPTION = read('README.rst')
File "C:\Users\34357\AppData\Local\Temp\pip-install-1toj7xsk\sandman2\setup.py", line 17, in read
return codecs.open(os.path.join(HERE, *parts), 'r').read()
UnicodeDecodeError: 'gbk' codec can't decode byte 0xa6 in position 2084: illegal multibyte sequence
```
# Solution 1
README.rst contains the non-ASCII character `’`; it should be replaced with the ASCII character `'`.
See https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
# Solution 2
https://github.com/jeffknupp/sandman2/blob/1ce21d6f7a6df77fa96fab694b0f9bb8469c166b/setup.py#L16-L17
Intentionally *do* add a 'utf-8' encoding option to `open`.
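A minimal sketch of Solution 2 applied to the `read()` helper shown in the traceback:

```python
import codecs
import os

HERE = os.path.abspath(os.path.dirname(__file__))

def read(*parts):
    # Decode explicitly as UTF-8 instead of the platform default
    # (gbk on zh-CN Windows, which produced the traceback above).
    return codecs.open(os.path.join(HERE, *parts), 'r', encoding='utf-8').read()
``` | 1medium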
|
Title: settings.ACCOUNT_AUTHENTICATION_METHOD MANDATORY Phone Number with Verification
Body: Currently when I try to set a CustomUser without username field and use USERNAME_FIELD = 'phone_number'
I cannot login to django admin because allauth's authentication backend tries to search the login user with 'username field'
setting ACCOUNT_USER_MODEL_USERNAME_FIELD = 'phone_number'
enables admin login.
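A minimal sketch of that working configuration (field names follow the report above; this is illustrative, not official allauth guidance):

```python
# models.py
from django.contrib.auth.models import AbstractUser
from django.db import models

class CustomUser(AbstractUser):
    username = None  # drop the username field entirely
    phone_number = models.CharField(max_length=20, unique=True)

    USERNAME_FIELD = 'phone_number'
    REQUIRED_FIELDS = []

# settings.py
ACCOUNT_USER_MODEL_USERNAME_FIELD = 'phone_number'
```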
However, it is still hard to set up social login with mandatory phone number verification in allauth.
I think there should be a mountable PhoneNumberAuthBackend, etc. | 1medium
|
Title: [Migrated] No module named 'app': ModuleNotFoundError , slim_handler : true
Body: Originally from: https://github.com/Miserlou/Zappa/issues/1482 by [houdinisparks](https://github.com/houdinisparks)
<!--- Provide a general summary of the issue in the Title above -->
## Context
I am trying to deploy a >100 MB Flask app with slim_handler: true. Below is my file structure:
```
+-- app
| +-- routes
| +-- static
| +-- __init__.py
| +-- load.py
+-- venv
+-- run.py
+-- setup.py
+-- requirements.txt
```
However, when I try to deploy with Zappa, it gives me the following error:
```
No module named 'app': ModuleNotFoundError
Traceback (most recent call last):
File "/var/task/handler.py", line 566, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 237, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 129, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/var/lang/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 936, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 948, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'app'
```
This is my zappa_settings.json:
```
{
"dev": {
"app_function": "run.app",
"profile_name": "work",
"project_name": "<projectname>",
"runtime": "python3.6",
"s3_bucket": "<s3bucketname>",
"slim_handler": true
}
}
```
and my run.py
```
from app.load import create_app
app = create_app()
# We only need this for local development.
if __name__ == '__main__':
    print("initiating the web app...")
    app.run(debug=True)
```
and my load.py file:
```
from flask import Flask
from app import routes
def create_app():
    """An application factory, as explained here: http://flask.pocoo.org/docs/patterns/appfactories/.

    :param config_object: The configuration object to use.
    """
    app = Flask(__name__, static_url_path='', template_folder="static/pages")
    register_blueprints(app)
    return app

def register_blueprints(app):
    """Register Flask blueprints."""
    app.register_blueprint(routes.api.api_bp)
    app.register_blueprint(routes.index.webpage_bp)
    return None
```
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: latest from pip installed from github <0.45.11>
* Operating System and Python version: Win 10, python 3.6
* The output of `pip freeze`:
```
aniso8601==3.0.0
argcomplete==1.9.3
asn1crypto==0.24.0
azure-common==1.1.8
azure-nspkg==2.0.0
azure-storage==0.36.0
base58==0.2.4
boto3==1.6.4
botocore==1.9.4
certifi==2018.1.18
cffi==1.11.5
cfn-flip==1.0.0
chardet==3.0.4
click==6.7
-e git+https://wyy95.visualstudio.com/IVAN/_git/ModelDeployment@213574193aa6759d2b7767871714f5c7e3079a11#egg=cognizant
cryptography==2.2.1
docutils==0.14
durationpy==0.5
Flask==0.12.2
Flask-REST==1.3
Flask-RESTful==0.3.6
future==0.16.0
hjson==3.0.1
idna==2.6
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
lightgbm==2.1.0
MarkupSafe==1.0
numpy==1.14.1
pandas==0.22.0
placebo==0.8.1
pycparser==2.18
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2018.3
PyYAML==3.12
requests==2.18.4
s3transfer==0.1.13
scikit-learn==0.19.1
scipy==1.0.0
six==1.11.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.2.0
Unidecode==1.0.22
urllib3==1.22
Werkzeug==0.13
wsgi-request-logger==0.4.6
zappa==0.45.1
```
* Link to your project (optional):
| 1medium
|
Title: Tabs don't respond when using nested cached functions
Body: ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
I detected a critical problem with tabs when updating Streamlit to a version newer than 1.35.0 (1.36 and up all have this problem). I found the issue on mobile, but it also reproduces on PC.
In my app I have the following scenario:
- Multiple tabs
- Several of them call functions that are cached
- And those functions call also (sometimes several times) nested cache functions.
In version 1.35 everything works fine on mobile and PC, but when I tried to update to a newer version, I noticed that switching between tabs doesn't work (they become super unresponsive and the app seems to crash). This is weird, because my understanding was that changing tabs didn't trigger any reruns/calculations.
If you remove all the @st.cache_data decorators from the reproducible code example, the code works just fine. So the problem seems to be that Streamlit is doing something with the cached data when I try to switch tabs.
### Reproducible Code Example
```Python
import streamlit as st

st.header(body="Testing problem switching tabs")

@st.cache_data(ttl=None)
def cached_func_level4():
    return "test"

@st.cache_data(ttl=None)
def cached_func_level3():
    return cached_func_level4()

@st.cache_data(ttl=None)
def cached_func_level2():
    return cached_func_level3()

@st.cache_data(ttl=None)
def cached_func_level1():
    return cached_func_level2()

@st.cache_data(ttl=None)
def cached_func_level0():
    # If you iterate more than 2000 times, the tab problem is even bigger
    for _ in range(2000):
        x = cached_func_level1()
    return x

# In these testing tabs I only print a value and execute the
# "root" cached function, which calls other cached funcs
admin_tabs = st.tabs(["test1", "test2"])

with admin_tabs[0]:
    st.write("Hello")
    val = cached_func_level0()

with admin_tabs[1]:
    st.write("World!")
    val = cached_func_level0()
```
### Steps To Reproduce
Just run Streamlit and, when the page renders, try to switch between the tabs.
### Expected Behavior
The expected behavior would be to be able to switch tabs without delay
### Current Behavior
Now the tabs crash when you try to switch between them, and the app either does not respond or responds very slowly.
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.36 and up
- Python version: 3.11.5
- Operating System: Windows and iOS
- Browser: Testing in both safari and chrome
### Additional Information
_No response_ | 2hard
|
Title: Admin export confirm page runs a query for the entire table
Body: When the export page is loaded, the code runs a query for the entire table of the model (i.e. the equivalent of `SELECT * FROM table`):
https://github.com/django-import-export/django-import-export/blob/9839a28089575baef1ecab686ab81682751ed761/import_export/admin.py#L736
When trying to export even a moderately sized table, which I think is a common scenario for django-import-export, such a query is very problematic, especially because it ignores any filtering on the queryset.
I'm not sure if this query is really intentional, but if it is, would it be possible to add a way to turn it off?
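A hedged workaround sketch (assumption: `get_export_queryset` is the hook the export view uses to build this queryset), restricting what gets evaluated; the `Book` model and the cap are illustrative only:

```python
from django.contrib import admin
from import_export.admin import ExportMixin

from .models import Book  # illustrative model

@admin.register(Book)
class BookAdmin(ExportMixin, admin.ModelAdmin):
    def get_export_queryset(self, request):
        # Cap the rows the confirm page evaluates instead of SELECT * FROM table.
        return super().get_export_queryset(request)[:10_000]
``` | 1medium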
|
Title: Documentation Enhancement
Body: ### Which package?
vizro
### What's the problem this feature will solve?
It will save developers and analysts countless hours of implementing selectors by providing a detailed GIF or WebP video of what each selector does, with an example of that selector in use.
A visual representation reinforces what the selector does before implementation, guiding users toward the correct selector for their use case.
### Describe the solution you'd like
Under each selector in the documentation section, include a use-case GIF/WebP video of that selector.
### Alternative Solutions
The alternative is learning what each selector does through implementation, by trial and error.
### Additional context
This enhancement can save developers time in understanding which selector does what for their implementations.
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | 1medium
|
Title: pip install hanlp failed
Body:
**Describe the bug**
pip install hanlp failed
**Code to reproduce the issue**
```
pipx install hanlp
```
**Describe the current behavior**
```
Fatal error from pip prevented installation. Full pip output in file:
/home/neoe/.local/pipx/logs/cmd_2023-10-12_20.44.33_pip_errors.log
pip failed to build package:
tokenizers
Some possibly relevant errors from pip install:
error: subprocess-exited-with-error
error: casting `&T` to `&mut T` is undefined behavior, even if the reference is unused, consider instead using an `UnsafeCell`
error: could not compile `tokenizers` (lib) due to previous error; 3 warnings emitted
error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib --` failed with code 101
Error installing hanlp.
```
**Expected behavior**
install ok
**System information**
- debian 12
- Python version: Python 3.11.2
- HanLP version: newest
**Other info / logs**
* [x] I've completed this form and searched the web for solutions. | 1medium
|
Title: Default value for "training" parameter for BatchNorm custom layer call method
Body: ```
class BatchNorm(tf.keras.layers.Layer):
    .
    .
    .
    @tf.function
    def call(self, inputs, training):
        if training:
            axes = list(range(len(inputs.shape) - 1))
            batch_mean = tf.reduce_mean(inputs, axes, keepdims=True)
            batch_variance = tf.reduce_mean(tf.math.squared_difference(
                inputs, tf.stop_gradient(batch_mean)), axes, keepdims=True)
            batch_mean = tf.squeeze(batch_mean, axes)
            batch_variance = tf.squeeze(batch_variance, axes)
            .
            .
            .
        else:
            .
            .
        return output
```
[Above is the scratch implementation of Batch Normalization layer](https://d2l.ai/chapter_convolutional-modern/batch-norm.html#implementation-from-scratch).
The call method doesn't have a default value for the training parameter (**None or True**).
[When this particular implementation is plugged into a Sequential model, as shown in the D2L LeNet implementation,](https://d2l.ai/chapter_convolutional-modern/batch-norm.html#applying-batch-normalization-in-lenet)
```
def net():
    return tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(filters=6, kernel_size=5,
                               input_shape=(28, 28, 1)),
        BatchNorm(),
        tf.keras.layers.Activation('sigmoid'),
        tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
        tf.keras.layers.Conv2D(filters=16, kernel_size=5),
        BatchNorm(),
        tf.keras.layers.Activation('sigmoid'),
        tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120),
        BatchNorm(),
        tf.keras.layers.Activation('sigmoid'),
        tf.keras.layers.Dense(84),
        BatchNorm(),
        tf.keras.layers.Activation('sigmoid'),
        tf.keras.layers.Dense(10)]
    )
```
I get the **" tf__call() missing 1 required positional argument: 'training'"** error as shown below
```
2022-03-31 06:14:37.125158: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
Traceback (most recent call last):
File "D:/codebase/d2l-2/MCN/exercises/GoogleNet/runner.py", line 11, in <module>
X = layer(X)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "D:\codebase\d2l-2\MCN\exercises\GoogleNet\blocks\B1.py", line 14, in call
bridged_input = self.conv(bridged_input)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "D:\codebase\d2l-2\MCN\exercises\GoogleNet\ConvBNRelu.py", line 22, in call
bridged_input = self.batch_norm(bridged_input)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\def_function.py", line 618, in _call
results = self._stateful_fn(*args, **kwds)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\function.py", line 2419, in __call__
graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\framework\func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\function.py", line 3299, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\framework\func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
TypeError: tf__call() missing 1 required positional argument: 'training'
```
I have handled the unassigned training parameter for now as shown below
```
@tf.function
def call(self, inputs, training=None):
    if training is None:
        training = tf.keras.backend.learning_phase()
    if training:
        axes = list(range(len(inputs.shape) - 1))
        batch_mean = tf.reduce_mean(inputs, axes, keepdims=True)
```
Any other suggestions would be great.
The D2L code also needs a change, as the TensorFlow LeNet with BatchNorm still raises the above error.
| 1medium
|
Title: Edit .gitignore
Body: Hi everyone
please add ".idea" to ".gitignore" file. | 0easy
|
Title: Feature request: Support for Django form validation
Body: Hey,
is there any possibility to add some support for the Django form validation by also returning the field parameters via GraphQL? What I was thinking of is something like this which takes whatever is defined in the model and passes it to the frontend so that I'd be possible to create some auto-validation based on the backend.
```json
{
  "user": {
    "firstname": {
      "value": "James",
      "validation": {
        "type": "String",
        "max_length": 30
      }
    }
  }
}
```
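A hedged sketch of how such metadata might be exposed with graphene (the type names here are invented for illustration, not an existing API):

```python
import graphene

class FieldValidation(graphene.ObjectType):
    type = graphene.String()
    max_length = graphene.Int()

class ValidatedField(graphene.ObjectType):
    value = graphene.String()
    validation = graphene.Field(FieldValidation)

class UserType(graphene.ObjectType):
    # each form field carries its value plus the model-derived validation rules
    firstname = graphene.Field(ValidatedField)
``` | 1medium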
|
Title: Unable to load model on VM, getting error 'utf-8' codec can't decode 0x86 in position 0: UnicodeDecodeError
Body: Hello,guys
I have successfully trained the model using GPU enabled system and now I want this model to be used on my VM.
While performing ``` spacy.load("/Users/home/djnago/model-last")```
getting the error
```
/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py:877: UserWarning: [W095] Model 'en_pipeline' (0.0.0) was trained with spaCy v3.4 and may not be 100% compatible with the current version (3.5.0). If you see errors or degraded performance, download a newer compatible model or retrain your custom model with the current spaCy version. For more details and available updates, run: python -m spacy validate
warnings.warn(warn_msg)
Traceback (most recent call last):
File "/home/ubuntu/ResumeParser/manage.py", line 22, in <module>
main()
File "/home/ubuntu/ResumeParser/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/base.py", line 443, in execute
self.check()
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/base.py", line 475, in check
all_issues = checks.run_checks(
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/checks/registry.py", line 88, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/checks/urls.py", line 14, in check_url_config
return check_resolver(resolver)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/checks/urls.py", line 24, in check_resolver
return check_method()
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/urls/resolvers.py", line 494, in check
for pattern in self.url_patterns:
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/utils/functional.py", line 57, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/urls/resolvers.py", line 715, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/utils/functional.py", line 57, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/urls/resolvers.py", line 708, in urlconf_module
return import_module(self.urlconf_name)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/ubuntu/ResumeParser/cv_parser_api/urls.py", line 18, in <module>
from resume_parser.views import home
File "/home/ubuntu/ResumeParser/resume_parser/views.py", line 21, in <module>
entityNlp = spacy.load(os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))),"model/model-last-ner-18"))
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/__init__.py", line 54, in load
return util.load_model(
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py", line 434, in load_model
return load_model_from_path(Path(name), **kwargs) # type: ignore[arg-type]
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py", line 514, in load_model_from_path
return nlp.from_disk(model_path, exclude=exclude, overrides=overrides)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/language.py", line 2125, in from_disk
util.from_disk(path, deserializers, exclude) # type: ignore[arg-type]
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py", line 1352, in from_disk
reader(path / key)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/language.py", line 2119, in <lambda>
deserializers[name] = lambda p, proc=proc: proc.from_disk( # type: ignore[misc]
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy_transformers/pipeline_component.py", line 419, in from_disk
util.from_disk(path, deserialize, exclude) # type: ignore
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py", line 1352, in from_disk
reader(path / key)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy_transformers/pipeline_component.py", line 393, in load_model
self.model.from_bytes(mfile.read())
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/thinc/model.py", line 619, in from_bytes
return self.from_dict(msg)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/thinc/model.py", line 657, in from_dict
node.shims[i].from_bytes(shim_bytes)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy_transformers/layers/hf_shim.py", line 89, in from_bytes
msg = srsly.msgpack_loads(bytes_data)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/srsly/_msgpack_api.py", line 27, in msgpack_loads
msg = msgpack.loads(data, raw=False, use_list=use_list)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/srsly/msgpack/__init__.py", line 79, in unpackb
return _unpackb(packed, **kwargs)
File "srsly/msgpack/_unpacker.pyx", line 191, in srsly.msgpack._unpacker.unpackb
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x86 in position 0: invalid start byte
```
Kindly provide a solution, or let me know how to get rid of this error!
## Info about spaCy
- **spaCy version:** 3.5.0
- **Platform:** Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- **Python version:** 3.10.10
- **Pipelines:** en_core_web_sm (3.5.0)
* Operating System: Ubuntu 22.04 LTS
* Python Version Used: Python 3.10.10
* spaCy Version Used: 3.5.0
* Environment Information: venv
| 1medium
|
Title: Where or how to obtain a database session for a task
Body: Thanks for this amazing work!
Can you tell me where or how to obtain a database session for a task? I saw that the init function of CRUDBase has a db param, but I didn't see where it gets initialized. Thank you very much.
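A minimal sketch of the usual pattern (the `SessionLocal` import path is an assumption based on the project layout this question seems to reference):

```python
from app.db.session import SessionLocal  # path is an assumption

def my_task():
    db = SessionLocal()  # create a session outside the request cycle
    try:
        ...  # use `db` with the CRUD functions here
    finally:
        db.close()  # always release the connection
```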
| 1medium
|
Title: Big performance difference between statsmodels MixedLM and R lmer
Body: Hi,
Using statsmodels version 0.14.0, I notice a big difference in performance when running the same model in R. With statsmodels' `mixedlm`, the model takes 41 minutes to run, while it takes only 1-2 seconds using R's `lmer` package on the same machine. Why is there such a large difference in performance, and is there anything I can do to speed things up?
I have tried different `method` arguments, but most of them have trouble converging.
My dataset contains 125,066 groups (pairs) and has 2 categorical variables and 1 numerical variable. I am comparing the following code:
```
lmm_model = smf.mixedlm(f'value ~ C(cat_var1) + C(cat_var2) + numerical_var', data=pdf, groups=pdf['unit'])
lmm_results = lmm_model.fit()
print(lmm_results.summary())
```
And in R:
```
summary(lmer('value ~ cat_var1 + cat_var2 + numerical_var + (1|unit)', data=data_r))
```
Thanks in advance! | 1medium
|
Title: Savefig saves only background color
Body: Is there a fix for it? I'd like to be able to save my figs in higher dpi. But for some reason it only saves as a dark rectangle of the background color. | 1medium
|
Title: Error when running run_pretraining.py (error recorded from training loop: indices[] is not in ...)
Body: I'm trying to create my own `.ckpt.model` file by running the `run_pretraining.py file` on Google Colab using this command :
> !python run_pretraining.py \
> --bert_config_file "../bert-multi-cased/bert_config.json" \
> --input_file "../bert-model-custom/pretrain.tfrecord" \
> --output_dir "../bert-model-custom" \
> --init_checkpoint "../bert-multi-cased/bert_model.ckpt" \
> --do_train True \
> --train_batch_size 2
but I encountered this error:
> ....
> INFO:tensorflow:Saving checkpoints for 0 into ../bert-model-custom/model.ckpt.
> I0814 08:29:48.793563 140475286189952 basic_session_run_hooks.py:606] Saving checkpoints for 0 into ../bert-model-custom/model.ckpt.
> ERROR:tensorflow:Error recorded from training_loop: indices[254] = 164930 is not in [0, 119547)
> [[node bert/embeddings/GatherV2 (defined at /content/bert/modeling.py:419) ]]
I've read an article somewhere saying this problem is a vocabulary-size out-of-bounds issue, i.e. the **164930** above exceeds the bound **119547**. Am I right?
If so, where can I change the vocabulary size?
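One hedged check (the paths are the ones from the command above): the pretraining data must be created with the same vocabulary the config declares, so the line count of `vocab.txt` should equal `vocab_size` in `bert_config.json`:

```python
# Count vocabulary entries; this should match "vocab_size" (119547 here).
with open("../bert-multi-cased/vocab.txt", encoding="utf-8") as f:
    print(sum(1 for _ in f))
```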
Thank you in advance,
and sorry for the bad English. | 1medium
|
Title: Do an automatic stream flush before rendering a progress bar
Body: 4.60.0 3.7.6 (default, Jan 8 2020, 20:23:39) [MSC v.1916 64 bit (AMD64)] win32
Running Python in Spyder
While tqdm is great for the negligible effort required to get a progress bar, I've found it necessary to add a `sys.stdout.flush()` call before usage to avoid possible interleaved output. This taints the code and reduces the ease of use. For example, below, although the `1` was printed before the bar started, the output was interleaved:
```
1%| | 14/2408 [00:00<00:17, 135.41it/s]1
11%|█ | 263/2408 [00:02<00:17, 123.80it/s]
```
An option to have tqdm automatically flush the output stream it's about to use would be a huge benefit. | 1medium
|
Title: Test case `test_pagination_queries` is flaky
Body: Two successive runs of the test suite resulted in failure and success without any changes to the code.
The first failure was caused by an AssertionError in `test_pagination_queries`:
```
members = (Member(id=0, first_name='Andrew', last_name='Brookins', email='[email protected]', join_date=datetime.date(2023, 2, 11), ...com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.'))
m = Models(BaseHashModel=<class 'tests_sync.test_hash_model.m.<locals>.BaseHashModel'>, Order=<class 'tests_sync.test_hash_model.m.<locals>.Order'>, Member=<class 'tests_sync.test_hash_model.m.<locals>.Member'>)
@py_test_mark_sync
def test_pagination_queries(members, m):
member1, member2, member3 = members
actual = m.Member.find(m.Member.last_name == "Brookins").page()
assert actual == [member1, member2]
actual = m.Member.find().page(1, 1)
> assert actual == [member2]
E AssertionError: assert [Member(id=2, first_name='Andrew', last_name='Smith', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.')] == [Member(id=1, first_name='Kim', last_name='Brookins', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')]
E At index 0 diff: Member(id=2, first_name='Andrew', last_name='Smith', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.') != Member(id=1, first_name='Kim', last_name='Brookins', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')
E Full diff:
E - [Member(id=1, first_name='Kim', last_name='Brookins', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')]
E + [Member(id=2, first_name='Andrew', last_name='Smith', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.')]
tests_sync/test_hash_model.py:187: AssertionError
```
---
Attached is a full log showing the first failure and immediate re-run resulting in success.
<details>
<summary>Full console output</summary>
```
~/Repositories/redis-om-python fix-model-typings* 9s
redis-om-DEJACET3-py3.10 ❯ make test
/opt/homebrew/bin/poetry install
Installing dependencies from lock file
No dependencies to install or update
Installing the current project: redis-om (0.1.2)
touch .install.stamp
/opt/homebrew/bin/poetry run python make_sync.py
docker-compose up -d
[+] Running 7/7
⠿ oss_redis Pulled 3.6s
⠿ 5731adb3a4ab Already exists 0.0s
⠿ e78ad00da4bd Pull complete 0.6s
⠿ acf81d284940 Pull complete 0.8s
⠿ c19f7ed7779d Pull complete 1.5s
⠿ 9df49c3f82f2 Pull complete 1.5s
⠿ cf4fe2915070 Pull complete 1.5s
[+] Running 3/3
⠿ Network redis-om-python_default Created 0.0s
⠿ Container redis-om-python-oss_redis-1 Started 0.4s
⠿ Container redis-om-python-redis-1 Started 0.5s
REDIS_OM_URL=""redis://localhost:6380?decode_responses=True"" /opt/homebrew/bin/poetry run pytest -n auto -vv ./tests/ ./tests_sync/ --cov-report term-missing --cov aredis_om redis_om
=============================================================================== test session starts ================================================================================
platform darwin -- Python 3.10.8, pytest-7.2.1, pluggy-1.0.0 -- /Users/marian/Library/Caches/pypoetry/virtualenvs/redis-om-DEJACET3-py3.10/bin/python
cachedir: .pytest_cache
rootdir: /Users/marian/Repositories/redis-om-python, configfile: pytest.ini
plugins: xdist-3.2.0, asyncio-0.20.3, cov-4.0.0
asyncio: mode=strict
[gw0] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw1] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw2] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw3] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw4] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw5] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw6] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw7] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw0] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw1] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw2] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw3] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw4] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw5] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw6] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw7] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
gw0 [152] / gw1 [152] / gw2 [152] / gw3 [152] / gw4 [152] / gw5 [152] / gw6 [152] / gw7 [152]
scheduling tests via LoadScheduling
tests/test_hash_model.py::test_recursive_query_resolution
tests/test_hash_model.py::test_numeric_queries
tests/test_hash_model.py::test_validation_passes
tests/test_hash_model.py::test_raises_error_with_dicts
tests/test_hash_model.py::test_delete
tests/test_hash_model.py::test_access_result_by_index_not_cached
tests/test_hash_model.py::test_delete_many
tests/test_hash_model.py::test_exact_match_queries
[gw3] [ 0%] PASSED tests/test_hash_model.py::test_validation_passes
[gw5] [ 1%] PASSED tests/test_hash_model.py::test_raises_error_with_dicts
tests/test_hash_model.py::test_retrieve_first
tests/test_hash_model.py::test_raises_error_with_sets
[gw4] [ 1%] PASSED tests/test_hash_model.py::test_delete
tests/test_hash_model.py::test_expire
[gw1] [ 2%] PASSED tests/test_hash_model.py::test_recursive_query_resolution
[gw6] [ 3%] PASSED tests/test_hash_model.py::test_delete_many
tests/test_hash_model.py::test_updates_a_model
tests/test_hash_model.py::test_tag_queries_boolean_logic
[gw5] [ 3%] PASSED tests/test_hash_model.py::test_raises_error_with_sets
[gw7] [ 4%] PASSED tests/test_hash_model.py::test_access_result_by_index_not_cached
tests/test_hash_model.py::test_raises_error_with_lists
tests/test_hash_model.py::test_schema
[gw2] [ 5%] PASSED tests/test_hash_model.py::test_numeric_queries
tests/test_hash_model.py::test_sorting
[gw0] [ 5%] PASSED tests/test_hash_model.py::test_exact_match_queries
tests/test_hash_model.py::test_delete_non_exist
[gw3] [ 6%] PASSED tests/test_hash_model.py::test_retrieve_first
tests/test_hash_model.py::test_saves_model_and_creates_pk
[gw4] [ 7%] PASSED tests/test_hash_model.py::test_expire
tests/test_hash_model.py::test_raises_error_with_embedded_models
[gw1] [ 7%] PASSED tests/test_hash_model.py::test_tag_queries_boolean_logic
tests/test_hash_model.py::test_tag_queries_punctuation
[gw5] [ 8%] PASSED tests/test_hash_model.py::test_raises_error_with_lists
[gw7] [ 9%] PASSED tests/test_hash_model.py::test_schema
tests/test_hash_model.py::test_saves_many
tests/test_hash_model.py::test_primary_key_model_error
[gw6] [ 9%] PASSED tests/test_hash_model.py::test_updates_a_model
tests/test_hash_model.py::test_paginate_query
[gw3] [ 10%] PASSED tests/test_hash_model.py::test_saves_model_and_creates_pk
tests/test_hash_model.py::test_all_pks
[gw4] [ 11%] PASSED tests/test_hash_model.py::test_raises_error_with_embedded_models
tests/test_hash_model.py::test_raises_error_with_dataclasses
[gw2] [ 11%] PASSED tests/test_hash_model.py::test_sorting
tests/test_hash_model.py::test_validates_required_fields
[gw0] [ 12%] PASSED tests/test_hash_model.py::test_delete_non_exist
[gw5] [ 13%] PASSED tests/test_hash_model.py::test_saves_many
tests/test_hash_model.py::test_count
tests/test_hash_model.py::test_full_text_search_queries
[gw1] [ 13%] PASSED tests/test_hash_model.py::test_tag_queries_punctuation
tests/test_hash_model.py::test_tag_queries_negation
[gw6] [ 14%] PASSED tests/test_hash_model.py::test_paginate_query
[gw2] [ 15%] PASSED tests/test_hash_model.py::test_validates_required_fields
tests/test_hash_model.py::test_access_result_by_index_cached
tests/test_hash_model.py::test_validates_field
[gw4] [ 15%] PASSED tests/test_hash_model.py::test_raises_error_with_dataclasses
tests/test_json_model.py::test_updates_a_model
[gw7] [ 16%] PASSED tests/test_hash_model.py::test_primary_key_model_error
tests/test_hash_model.py::test_primary_pk_exists
[gw3] [ 17%] PASSED tests/test_hash_model.py::test_all_pks
[gw0] [ 17%] PASSED tests/test_hash_model.py::test_full_text_search_queries
tests/test_json_model.py::test_all_pks
tests/test_hash_model.py::test_pagination_queries
[gw5] [ 18%] PASSED tests/test_hash_model.py::test_count
tests/test_json_model.py::test_validates_required_fields
[gw2] [ 19%] PASSED tests/test_hash_model.py::test_validates_field
tests/test_json_model.py::test_list_field_limitations
[gw1] [ 19%] PASSED tests/test_hash_model.py::test_tag_queries_negation
tests/test_json_model.py::test_in_query
[gw6] [ 20%] PASSED tests/test_hash_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_tag_queries_negation
[gw0] [ 21%] PASSED tests/test_hash_model.py::test_pagination_queries
tests/test_json_model.py::test_allows_and_serializes_lists
[gw5] [ 21%] PASSED tests/test_json_model.py::test_validates_required_fields
tests/test_json_model.py::test_validates_field
[gw4] [ 22%] PASSED tests/test_json_model.py::test_updates_a_model
tests/test_json_model.py::test_paginate_query
[gw1] [ 23%] PASSED tests/test_json_model.py::test_in_query
tests/test_json_model.py::test_update_query
[gw7] [ 23%] PASSED tests/test_hash_model.py::test_primary_pk_exists
tests/test_json_model.py::test_recursive_query_field_resolution
[gw5] [ 24%] PASSED tests/test_json_model.py::test_validates_field
[gw6] [ 25%] PASSED tests/test_json_model.py::test_tag_queries_negation
tests/test_json_model.py::test_validation_passes
tests/test_json_model.py::test_numeric_queries
[gw2] [ 25%] PASSED tests/test_json_model.py::test_list_field_limitations
tests/test_json_model.py::test_allows_dataclasses
[gw3] [ 26%] PASSED tests/test_json_model.py::test_all_pks
tests/test_json_model.py::test_delete
[gw0] [ 26%] PASSED tests/test_json_model.py::test_allows_and_serializes_lists
tests/test_json_model.py::test_schema
[gw4] [ 27%] PASSED tests/test_json_model.py::test_paginate_query
[gw5] [ 28%] PASSED tests/test_json_model.py::test_validation_passes
tests/test_json_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_saves_model_and_creates_pk
[gw1] [ 28%] PASSED tests/test_json_model.py::test_update_query
tests/test_json_model.py::test_exact_match_queries
[gw3] [ 29%] PASSED tests/test_json_model.py::test_delete
tests/test_json_model.py::test_saves_many_implicit_pipeline
[gw2] [ 30%] PASSED tests/test_json_model.py::test_allows_dataclasses
tests/test_json_model.py::test_allows_and_serializes_dicts
[gw0] [ 30%] PASSED tests/test_json_model.py::test_schema
tests/test_json_model.py::test_count
[gw6] [ 31%] PASSED tests/test_json_model.py::test_numeric_queries
tests/test_json_model.py::test_sorting
[gw3] [ 32%] PASSED tests/test_json_model.py::test_saves_many_implicit_pipeline
tests/test_json_model.py::test_saves_many_explicit_transaction
[gw4] [ 32%] PASSED tests/test_json_model.py::test_access_result_by_index_cached
[gw7] [ 33%] PASSED tests/test_json_model.py::test_recursive_query_field_resolution
tests/test_json_model.py::test_access_result_by_index_not_cached
[gw5] [ 34%] PASSED tests/test_json_model.py::test_saves_model_and_creates_pk
tests/test_json_model.py::test_full_text_search
tests/test_oss_redis_features.py::test_not_found
[gw1] [ 34%] PASSED tests/test_json_model.py::test_exact_match_queries
tests/test_json_model.py::test_recursive_query_expression_resolution
[gw2] [ 35%] PASSED tests/test_json_model.py::test_allows_and_serializes_dicts
[gw6] [ 36%] PASSED tests/test_json_model.py::test_sorting
tests/test_json_model.py::test_allows_and_serializes_sets
tests/test_json_model.py::test_not_found
[gw0] [ 36%] PASSED tests/test_json_model.py::test_count
tests/test_oss_redis_features.py::test_all_keys
[gw3] [ 37%] PASSED tests/test_json_model.py::test_saves_many_explicit_transaction
tests/test_json_model.py::test_delete_many_implicit_pipeline
[gw5] [ 38%] PASSED tests/test_oss_redis_features.py::test_not_found
tests/test_oss_redis_features.py::test_validates_required_fields
[gw6] [ 38%] PASSED tests/test_json_model.py::test_not_found
tests_sync/test_hash_model.py::test_recursive_query_resolution
[gw1] [ 39%] PASSED tests/test_json_model.py::test_recursive_query_expression_resolution
[gw4] [ 40%] PASSED tests/test_json_model.py::test_access_result_by_index_not_cached
tests/test_pydantic_integrations.py::test_email_str
tests/test_oss_redis_features.py::test_saves_model_and_creates_pk
[gw2] [ 40%] PASSED tests/test_json_model.py::test_allows_and_serializes_sets
tests_sync/test_hash_model.py::test_delete_non_exist
[gw7] [ 41%] PASSED tests/test_json_model.py::test_full_text_search
tests/test_json_model.py::test_tag_queries_boolean_logic
[gw3] [ 42%] PASSED tests/test_json_model.py::test_delete_many_implicit_pipeline
tests_sync/test_hash_model.py::test_validates_required_fields
[gw5] [ 42%] PASSED tests/test_oss_redis_features.py::test_validates_required_fields
tests/test_oss_redis_features.py::test_validates_field
[gw6] [ 43%] PASSED tests_sync/test_hash_model.py::test_recursive_query_resolution
tests_sync/test_hash_model.py::test_tag_queries_boolean_logic
[gw4] [ 44%] PASSED tests/test_oss_redis_features.py::test_saves_model_and_creates_pk
tests/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw3] [ 44%] PASSED tests_sync/test_hash_model.py::test_validates_required_fields
tests_sync/test_hash_model.py::test_validates_field
[gw2] [ 45%] PASSED tests_sync/test_hash_model.py::test_delete_non_exist
tests_sync/test_hash_model.py::test_full_text_search_queries
[gw1] [ 46%] PASSED tests/test_pydantic_integrations.py::test_email_str
tests/test_redis_type.py::test_redis_type
[gw1] [ 46%] PASSED tests/test_redis_type.py::test_redis_type
tests_sync/test_hash_model.py::test_exact_match_queries
[gw0] [ 47%] PASSED tests/test_oss_redis_features.py::test_all_keys
[gw6] [ 48%] PASSED tests_sync/test_hash_model.py::test_tag_queries_boolean_logic
tests_sync/test_hash_model.py::test_tag_queries_negation
tests_sync/test_hash_model.py::test_tag_queries_punctuation
[gw5] [ 48%] PASSED tests/test_oss_redis_features.py::test_validates_field
tests/test_oss_redis_features.py::test_validation_passes
[gw7] [ 49%] PASSED tests/test_json_model.py::test_tag_queries_boolean_logic
tests/test_json_model.py::test_tag_queries_punctuation
[gw3] [ 50%] PASSED tests_sync/test_hash_model.py::test_validates_field
tests_sync/test_hash_model.py::test_validation_passes
[gw2] [ 50%] PASSED tests_sync/test_hash_model.py::test_full_text_search_queries
tests_sync/test_hash_model.py::test_pagination_queries
[gw4] [ 51%] PASSED tests/test_oss_redis_features.py::test_raises_error_with_embedded_models
tests/test_oss_redis_features.py::test_saves_many
[gw3] [ 51%] PASSED tests_sync/test_hash_model.py::test_validation_passes
tests_sync/test_hash_model.py::test_raises_error_with_sets
[gw6] [ 52%] PASSED tests_sync/test_hash_model.py::test_tag_queries_punctuation
tests_sync/test_hash_model.py::test_all_pks
[gw5] [ 53%] PASSED tests/test_oss_redis_features.py::test_validation_passes
tests_sync/test_hash_model.py::test_expire
[gw0] [ 53%] PASSED tests_sync/test_hash_model.py::test_tag_queries_negation
tests_sync/test_hash_model.py::test_numeric_queries
[gw1] [ 54%] PASSED tests_sync/test_hash_model.py::test_exact_match_queries
tests_sync/test_hash_model.py::test_retrieve_first
[gw3] [ 55%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_sets
tests_sync/test_hash_model.py::test_raises_error_with_lists
[gw7] [ 55%] PASSED tests/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_hash_model.py::test_raises_error_with_dataclasses
[gw5] [ 56%] PASSED tests_sync/test_hash_model.py::test_expire
tests_sync/test_hash_model.py::test_raises_error_with_embedded_models
[gw3] [ 57%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_lists
tests_sync/test_hash_model.py::test_updates_a_model
[gw2] [ 57%] FAILED tests_sync/test_hash_model.py::test_pagination_queries
tests_sync/test_hash_model.py::test_saves_many
[gw1] [ 58%] PASSED tests_sync/test_hash_model.py::test_retrieve_first
tests_sync/test_hash_model.py::test_saves_model_and_creates_pk
[gw4] [ 59%] PASSED tests/test_oss_redis_features.py::test_saves_many
tests/test_oss_redis_features.py::test_updates_a_model
[gw0] [ 59%] PASSED tests_sync/test_hash_model.py::test_numeric_queries
tests_sync/test_hash_model.py::test_sorting
[gw5] [ 60%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_embedded_models
tests_sync/test_hash_model.py::test_access_result_by_index_cached
[gw7] [ 61%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_dataclasses
tests_sync/test_hash_model.py::test_raises_error_with_dicts
[gw2] [ 61%] PASSED tests_sync/test_hash_model.py::test_saves_many
tests_sync/test_hash_model.py::test_delete_many
[gw3] [ 62%] PASSED tests_sync/test_hash_model.py::test_updates_a_model
[gw1] [ 63%] PASSED tests_sync/test_hash_model.py::test_saves_model_and_creates_pk
tests_sync/test_hash_model.py::test_paginate_query
tests_sync/test_hash_model.py::test_schema
[gw6] [ 63%] PASSED tests_sync/test_hash_model.py::test_all_pks
tests_sync/test_hash_model.py::test_delete
[gw7] [ 64%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_dicts
tests_sync/test_hash_model.py::test_count
[gw0] [ 65%] PASSED tests_sync/test_hash_model.py::test_sorting
tests_sync/test_hash_model.py::test_primary_pk_exists
[gw2] [ 65%] PASSED tests_sync/test_hash_model.py::test_delete_many
[gw1] [ 66%] PASSED tests_sync/test_hash_model.py::test_schema
tests_sync/test_json_model.py::test_validates_required_fields
tests_sync/test_json_model.py::test_validation_passes
[gw5] [ 67%] PASSED tests_sync/test_hash_model.py::test_access_result_by_index_cached
tests_sync/test_hash_model.py::test_access_result_by_index_not_cached
[gw6] [ 67%] PASSED tests_sync/test_hash_model.py::test_delete
[gw4] [ 68%] PASSED tests/test_oss_redis_features.py::test_updates_a_model
tests_sync/test_json_model.py::test_saves_model_and_creates_pk
tests_sync/test_hash_model.py::test_primary_key_model_error
[gw3] [ 69%] PASSED tests_sync/test_hash_model.py::test_paginate_query
tests_sync/test_json_model.py::test_validates_field
[gw7] [ 69%] PASSED tests_sync/test_hash_model.py::test_count
tests_sync/test_json_model.py::test_all_pks
[gw2] [ 70%] PASSED tests_sync/test_json_model.py::test_validates_required_fields
[gw1] [ 71%] PASSED tests_sync/test_json_model.py::test_validation_passes
tests_sync/test_json_model.py::test_saves_many_implicit_pipeline
tests_sync/test_json_model.py::test_saves_many_explicit_transaction
[gw0] [ 71%] PASSED tests_sync/test_hash_model.py::test_primary_pk_exists
tests_sync/test_json_model.py::test_delete
[gw3] [ 72%] PASSED tests_sync/test_json_model.py::test_validates_field
[gw5] [ 73%] PASSED tests_sync/test_hash_model.py::test_access_result_by_index_not_cached
tests_sync/test_json_model.py::test_access_result_by_index_cached
tests_sync/test_json_model.py::test_delete_many_implicit_pipeline
[gw6] [ 73%] PASSED tests_sync/test_json_model.py::test_saves_model_and_creates_pk
tests_sync/test_json_model.py::test_updates_a_model
[gw4] [ 74%] PASSED tests_sync/test_hash_model.py::test_primary_key_model_error
tests_sync/test_json_model.py::test_paginate_query
[gw2] [ 75%] PASSED tests_sync/test_json_model.py::test_saves_many_implicit_pipeline
tests_sync/test_json_model.py::test_in_query
[gw1] [ 75%] PASSED tests_sync/test_json_model.py::test_saves_many_explicit_transaction
tests_sync/test_json_model.py::test_update_query
[gw3] [ 76%] PASSED tests_sync/test_json_model.py::test_access_result_by_index_cached
[gw0] [ 76%] PASSED tests_sync/test_json_model.py::test_delete
tests_sync/test_json_model.py::test_recursive_query_expression_resolution
tests_sync/test_json_model.py::test_exact_match_queries
[gw5] [ 77%] PASSED tests_sync/test_json_model.py::test_delete_many_implicit_pipeline
tests_sync/test_json_model.py::test_recursive_query_field_resolution
[gw2] [ 78%] PASSED tests_sync/test_json_model.py::test_in_query
tests_sync/test_json_model.py::test_tag_queries_punctuation
[gw4] [ 78%] PASSED tests_sync/test_json_model.py::test_paginate_query
tests_sync/test_json_model.py::test_tag_queries_boolean_logic
[gw6] [ 79%] PASSED tests_sync/test_json_model.py::test_updates_a_model
tests_sync/test_json_model.py::test_full_text_search
[gw1] [ 80%] PASSED tests_sync/test_json_model.py::test_update_query
[gw3] [ 80%] PASSED tests_sync/test_json_model.py::test_recursive_query_expression_resolution
tests_sync/test_json_model.py::test_tag_queries_negation
tests_sync/test_json_model.py::test_numeric_queries
[gw7] [ 81%] PASSED tests_sync/test_json_model.py::test_all_pks
tests_sync/test_json_model.py::test_access_result_by_index_not_cached
[gw2] [ 82%] PASSED tests_sync/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_json_model.py::test_list_field_limitations
[gw5] [ 82%] PASSED tests_sync/test_json_model.py::test_recursive_query_field_resolution
tests_sync/test_json_model.py::test_not_found
[gw0] [ 83%] PASSED tests_sync/test_json_model.py::test_exact_match_queries
tests_sync/test_json_model.py::test_sorting
[gw6] [ 84%] PASSED tests_sync/test_json_model.py::test_full_text_search
[gw4] [ 84%] PASSED tests_sync/test_json_model.py::test_tag_queries_boolean_logic
tests_sync/test_json_model.py::test_allows_and_serializes_dicts
tests_sync/test_json_model.py::test_allows_dataclasses
[gw1] [ 85%] PASSED tests_sync/test_json_model.py::test_tag_queries_negation
tests_sync/test_json_model.py::test_allows_and_serializes_sets
[gw3] [ 86%] PASSED tests_sync/test_json_model.py::test_numeric_queries
tests_sync/test_json_model.py::test_allows_and_serializes_lists
[gw5] [ 86%] PASSED tests_sync/test_json_model.py::test_not_found
tests_sync/test_oss_redis_features.py::test_all_keys
[gw7] [ 87%] PASSED tests_sync/test_json_model.py::test_access_result_by_index_not_cached
tests_sync/test_json_model.py::test_schema
[gw6] [ 88%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_dicts
tests_sync/test_oss_redis_features.py::test_validates_required_fields
[gw0] [ 88%] PASSED tests_sync/test_json_model.py::test_sorting
tests_sync/test_oss_redis_features.py::test_not_found
[gw4] [ 89%] PASSED tests_sync/test_json_model.py::test_allows_dataclasses
[gw2] [ 90%] PASSED tests_sync/test_json_model.py::test_list_field_limitations
tests_sync/test_oss_redis_features.py::test_validates_field
tests_sync/test_json_model.py::test_count
[gw1] [ 90%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_sets
[gw3] [ 91%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_lists
tests_sync/test_oss_redis_features.py::test_validation_passes
tests_sync/test_oss_redis_features.py::test_saves_model_and_creates_pk
[gw7] [ 92%] PASSED tests_sync/test_json_model.py::test_schema
tests_sync/test_oss_redis_features.py::test_saves_many
[gw6] [ 92%] PASSED tests_sync/test_oss_redis_features.py::test_validates_required_fields
[gw0] [ 93%] PASSED tests_sync/test_oss_redis_features.py::test_not_found
tests_sync/test_oss_redis_features.py::test_updates_a_model
[gw2] [ 94%] PASSED tests_sync/test_json_model.py::test_count
tests_sync/test_pydantic_integrations.py::test_email_str
[gw4] [ 94%] PASSED tests_sync/test_oss_redis_features.py::test_validates_field
tests_sync/test_redis_type.py::test_redis_type
[gw4] [ 95%] PASSED tests_sync/test_redis_type.py::test_redis_type
[gw3] [ 96%] PASSED tests_sync/test_oss_redis_features.py::test_saves_model_and_creates_pk
[gw1] [ 96%] PASSED tests_sync/test_oss_redis_features.py::test_validation_passes
[gw7] [ 97%] PASSED tests_sync/test_oss_redis_features.py::test_saves_many
[gw6] [ 98%] PASSED tests_sync/test_oss_redis_features.py::test_updates_a_model
[gw5] [ 98%] PASSED tests_sync/test_oss_redis_features.py::test_all_keys
tests_sync/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw0] [ 99%] PASSED tests_sync/test_pydantic_integrations.py::test_email_str
[gw5] [100%] PASSED tests_sync/test_oss_redis_features.py::test_raises_error_with_embedded_models
===================================================================================== FAILURES =====================================================================================
_____________________________________________________________________________ test_pagination_queries ______________________________________________________________________________
[gw2] darwin -- Python 3.10.8 /Users/marian/Library/Caches/pypoetry/virtualenvs/redis-om-DEJACET3-py3.10/bin/python
members = (Member(id=0, first_name='Andrew', last_name='Brookins', email='[email protected]', join_date=datetime.date(2023, 2, 11), ...com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.'))
m = Models(BaseHashModel=<class 'tests_sync.test_hash_model.m.<locals>.BaseHashModel'>, Order=<class 'tests_sync.test_hash_model.m.<locals>.Order'>, Member=<class 'tests_sync.test_hash_model.m.<locals>.Member'>)
@py_test_mark_sync
def test_pagination_queries(members, m):
member1, member2, member3 = members
actual = m.Member.find(m.Member.last_name == "Brookins").page()
assert actual == [member1, member2]
actual = m.Member.find().page(1, 1)
> assert actual == [member2]
E AssertionError: assert [Member(id=2, first_name='Andrew', last_name='Smith', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.')] == [Member(id=1, first_name='Kim', last_name='Brookins', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')]
E At index 0 diff: Member(id=2, first_name='Andrew', last_name='Smith', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.') != Member(id=1, first_name='Kim', last_name='Brookins', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')
E Full diff:
E - [Member(id=1, first_name='Kim', last_name='Brookins', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')]
E + [Member(id=2, first_name='Andrew', last_name='Smith', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.')]
tests_sync/test_hash_model.py:187: AssertionError
---------- coverage: platform darwin, python 3.10.8-final-0 ----------
Name Stmts Miss Cover Missing
----------------------------------------------------------------------
aredis_om/__init__.py 5 0 100%
aredis_om/async_redis.py 1 0 100%
aredis_om/checks.py 21 12 43% 9-10, 15-18, 23-28
aredis_om/connections.py 10 1 90% 20
aredis_om/model/__init__.py 2 0 100%
aredis_om/model/cli/__init__.py 0 0 100%
aredis_om/model/cli/migrate.py 13 13 0% 1-18
aredis_om/model/encoders.py 72 35 51% 68, 70, 73-86, 94, 96, 98, 132-147, 150-155, 159-173
aredis_om/model/migrations/__init__.py 0 0 100%
aredis_om/model/migrations/migrator.py 87 15 83% 24-35, 45, 56, 83-84, 89-90, 101, 112-114
aredis_om/model/model.py 888 115 87% 100, 111, 128, 136, 145-152, 166, 185, 193, 199, 203, 207, 211-214, 218, 241, 245, 297, 305, 352, 394, 401, 419, 446, 474, 499, 502-508, 527, 529, 533, 561-571, 592-595, 606, 653, 667-672, 685, 699, 701, 703, 705, 768, 787, 823-828, 844-854, 904, 927-928, 1072, 1135, 1157, 1161, 1166, 1190, 1221-1224, 1232, 1308, 1314, 1374-1382, 1396, 1436-1445, 1449, 1464-1472, 1483-1493, 1506, 1606-1607, 1634-1637, 1721, 1725-1729
aredis_om/model/query_resolver.py 23 23 0% 1-103
aredis_om/model/render_tree.py 33 31 6% 24-75
aredis_om/model/token_escaper.py 13 1 92% 16
aredis_om/sync_redis.py 1 1 0% 1
aredis_om/util.py 6 1 83% 7
----------------------------------------------------------------------
TOTAL 1175 248 79%
============================================================================= short test summary info ==============================================================================
FAILED tests_sync/test_hash_model.py::test_pagination_queries - AssertionError: assert [Member(id=2, first_name='Andrew', last_name='Smith', email='[email protected]', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who i...
========================================================================== 1 failed, 151 passed in 1.54s ===========================================================================
make: *** [test] Error 1
~/Repositories/redis-om-python fix-model-typings* 8s
redis-om-DEJACET3-py3.10 ❯ make test
/opt/homebrew/bin/poetry run python make_sync.py
docker-compose up -d
[+] Running 2/2
⠿ Container redis-om-python-oss_redis-1 Started 0.5s
⠿ Container redis-om-python-redis-1 Running 0.0s
REDIS_OM_URL=""redis://localhost:6380?decode_responses=True"" /opt/homebrew/bin/poetry run pytest -n auto -vv ./tests/ ./tests_sync/ --cov-report term-missing --cov aredis_om redis_om
=============================================================================== test session starts ================================================================================
platform darwin -- Python 3.10.8, pytest-7.2.1, pluggy-1.0.0 -- /Users/marian/Library/Caches/pypoetry/virtualenvs/redis-om-DEJACET3-py3.10/bin/python
cachedir: .pytest_cache
rootdir: /Users/marian/Repositories/redis-om-python, configfile: pytest.ini
plugins: xdist-3.2.0, asyncio-0.20.3, cov-4.0.0
asyncio: mode=strict
[gw0] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw1] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw2] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw3] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw4] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw5] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw6] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw7] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw0] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw1] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw2] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw3] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw4] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw5] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw6] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw7] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
gw0 [152] / gw1 [152] / gw2 [152] / gw3 [152] / gw4 [152] / gw5 [152] / gw6 [152] / gw7 [152]
scheduling tests via LoadScheduling
tests/test_hash_model.py::test_exact_match_queries
tests/test_hash_model.py::test_numeric_queries
tests/test_hash_model.py::test_delete
tests/test_hash_model.py::test_validation_passes
tests/test_hash_model.py::test_access_result_by_index_not_cached
tests/test_hash_model.py::test_recursive_query_resolution
tests/test_hash_model.py::test_delete_many
tests/test_hash_model.py::test_raises_error_with_dicts
[gw5] [ 0%] PASSED tests/test_hash_model.py::test_raises_error_with_dicts
[gw3] [ 1%] PASSED tests/test_hash_model.py::test_validation_passes
tests/test_hash_model.py::test_retrieve_first
tests/test_hash_model.py::test_raises_error_with_sets
[gw4] [ 1%] PASSED tests/test_hash_model.py::test_delete
tests/test_hash_model.py::test_expire
[gw6] [ 2%] PASSED tests/test_hash_model.py::test_delete_many
tests/test_hash_model.py::test_updates_a_model
[gw1] [ 3%] PASSED tests/test_hash_model.py::test_recursive_query_resolution
tests/test_hash_model.py::test_tag_queries_boolean_logic
[gw0] [ 3%] PASSED tests/test_hash_model.py::test_exact_match_queries
tests/test_hash_model.py::test_delete_non_exist
[gw5] [ 4%] PASSED tests/test_hash_model.py::test_raises_error_with_sets
tests/test_hash_model.py::test_raises_error_with_lists
[gw2] [ 5%] PASSED tests/test_hash_model.py::test_numeric_queries
tests/test_hash_model.py::test_sorting
[gw4] [ 5%] PASSED tests/test_hash_model.py::test_expire
[gw7] [ 6%] PASSED tests/test_hash_model.py::test_access_result_by_index_not_cached
tests/test_hash_model.py::test_raises_error_with_embedded_models
tests/test_hash_model.py::test_schema
[gw3] [ 7%] PASSED tests/test_hash_model.py::test_retrieve_first
tests/test_hash_model.py::test_saves_model_and_creates_pk
[gw6] [ 7%] PASSED tests/test_hash_model.py::test_updates_a_model
tests/test_hash_model.py::test_paginate_query
[gw5] [ 8%] PASSED tests/test_hash_model.py::test_raises_error_with_lists
tests/test_hash_model.py::test_saves_many
[gw1] [ 9%] PASSED tests/test_hash_model.py::test_tag_queries_boolean_logic
tests/test_hash_model.py::test_tag_queries_punctuation
[gw4] [ 9%] PASSED tests/test_hash_model.py::test_raises_error_with_embedded_models
tests/test_hash_model.py::test_raises_error_with_dataclasses
[gw7] [ 10%] PASSED tests/test_hash_model.py::test_schema
tests/test_hash_model.py::test_primary_key_model_error
[gw3] [ 11%] PASSED tests/test_hash_model.py::test_saves_model_and_creates_pk
tests/test_hash_model.py::test_all_pks
[gw0] [ 11%] PASSED tests/test_hash_model.py::test_delete_non_exist
[gw2] [ 12%] PASSED tests/test_hash_model.py::test_sorting
tests/test_hash_model.py::test_validates_required_fields
tests/test_hash_model.py::test_full_text_search_queries
[gw5] [ 13%] PASSED tests/test_hash_model.py::test_saves_many
tests/test_hash_model.py::test_count
[gw2] [ 13%] PASSED tests/test_hash_model.py::test_validates_required_fields
tests/test_hash_model.py::test_validates_field
[gw6] [ 14%] PASSED tests/test_hash_model.py::test_paginate_query
[gw4] [ 15%] PASSED tests/test_hash_model.py::test_raises_error_with_dataclasses
tests/test_hash_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_all_pks
[gw1] [ 15%] PASSED tests/test_hash_model.py::test_tag_queries_punctuation
tests/test_hash_model.py::test_tag_queries_negation
[gw0] [ 16%] PASSED tests/test_hash_model.py::test_full_text_search_queries
tests/test_hash_model.py::test_pagination_queries
[gw2] [ 17%] PASSED tests/test_hash_model.py::test_validates_field
[gw3] [ 17%] PASSED tests/test_hash_model.py::test_all_pks
tests/test_json_model.py::test_list_field_limitations
tests/test_json_model.py::test_updates_a_model
[gw5] [ 18%] PASSED tests/test_hash_model.py::test_count
[gw7] [ 19%] PASSED tests/test_hash_model.py::test_primary_key_model_error
tests/test_json_model.py::test_validates_required_fields
tests/test_hash_model.py::test_primary_pk_exists
[gw6] [ 19%] PASSED tests/test_hash_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_in_query
[gw1] [ 20%] PASSED tests/test_hash_model.py::test_tag_queries_negation
tests/test_json_model.py::test_recursive_query_field_resolution
[gw0] [ 21%] PASSED tests/test_hash_model.py::test_pagination_queries
tests/test_json_model.py::test_allows_and_serializes_lists
[gw4] [ 21%] PASSED tests/test_json_model.py::test_all_pks
tests/test_json_model.py::test_delete
[gw5] [ 22%] PASSED tests/test_json_model.py::test_validates_required_fields
tests/test_json_model.py::test_validates_field
[gw6] [ 23%] PASSED tests/test_json_model.py::test_in_query
tests/test_json_model.py::test_update_query
[gw3] [ 23%] PASSED tests/test_json_model.py::test_updates_a_model
tests/test_json_model.py::test_paginate_query
[gw5] [ 24%] PASSED tests/test_json_model.py::test_validates_field
tests/test_json_model.py::test_validation_passes
[gw2] [ 25%] PASSED tests/test_json_model.py::test_list_field_limitations
tests/test_json_model.py::test_allows_dataclasses
[gw4] [ 25%] PASSED tests/test_json_model.py::test_delete
tests/test_json_model.py::test_saves_many_implicit_pipeline
[gw0] [ 26%] PASSED tests/test_json_model.py::test_allows_and_serializes_lists
[gw1] [ 26%] PASSED tests/test_json_model.py::test_recursive_query_field_resolution
tests/test_json_model.py::test_schema
tests/test_json_model.py::test_full_text_search
[gw7] [ 27%] PASSED tests/test_hash_model.py::test_primary_pk_exists
tests/test_json_model.py::test_tag_queries_negation
[gw5] [ 28%] PASSED tests/test_json_model.py::test_validation_passes
tests/test_json_model.py::test_saves_model_and_creates_pk
[gw6] [ 28%] PASSED tests/test_json_model.py::test_update_query
tests/test_json_model.py::test_exact_match_queries
[gw3] [ 29%] PASSED tests/test_json_model.py::test_paginate_query
[gw2] [ 30%] PASSED tests/test_json_model.py::test_allows_dataclasses
tests/test_json_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_allows_and_serializes_dicts
[gw0] [ 30%] PASSED tests/test_json_model.py::test_schema
tests/test_json_model.py::test_count
[gw4] [ 31%] PASSED tests/test_json_model.py::test_saves_many_implicit_pipeline
tests/test_json_model.py::test_saves_many_explicit_transaction
[gw1] [ 32%] PASSED tests/test_json_model.py::test_full_text_search
tests/test_json_model.py::test_tag_queries_boolean_logic
[gw5] [ 32%] PASSED tests/test_json_model.py::test_saves_model_and_creates_pk
tests/test_oss_redis_features.py::test_not_found
[gw3] [ 33%] PASSED tests/test_json_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_access_result_by_index_not_cached
[gw6] [ 34%] PASSED tests/test_json_model.py::test_exact_match_queries
tests/test_json_model.py::test_recursive_query_expression_resolution
[gw2] [ 34%] PASSED tests/test_json_model.py::test_allows_and_serializes_dicts
tests/test_json_model.py::test_allows_and_serializes_sets
[gw0] [ 35%] PASSED tests/test_json_model.py::test_count
tests/test_oss_redis_features.py::test_all_keys
[gw7] [ 36%] PASSED tests/test_json_model.py::test_tag_queries_negation
tests/test_json_model.py::test_numeric_queries
[gw4] [ 36%] PASSED tests/test_json_model.py::test_saves_many_explicit_transaction
tests/test_json_model.py::test_delete_many_implicit_pipeline
[gw1] [ 37%] PASSED tests/test_json_model.py::test_tag_queries_boolean_logic
tests/test_json_model.py::test_tag_queries_punctuation
[gw5] [ 38%] PASSED tests/test_oss_redis_features.py::test_not_found
[gw6] [ 38%] PASSED tests/test_json_model.py::test_recursive_query_expression_resolution
tests/test_oss_redis_features.py::test_validates_required_fields
tests/test_pydantic_integrations.py::test_email_str
[gw3] [ 39%] PASSED tests/test_json_model.py::test_access_result_by_index_not_cached
tests/test_oss_redis_features.py::test_saves_model_and_creates_pk
[gw4] [ 40%] PASSED tests/test_json_model.py::test_delete_many_implicit_pipeline
tests_sync/test_hash_model.py::test_tag_queries_negation
[gw1] [ 40%] PASSED tests/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_hash_model.py::test_validates_required_fields
[gw2] [ 41%] PASSED tests/test_json_model.py::test_allows_and_serializes_sets
tests_sync/test_hash_model.py::test_delete_non_exist
[gw5] [ 42%] PASSED tests/test_oss_redis_features.py::test_validates_required_fields
tests/test_oss_redis_features.py::test_validates_field
[gw7] [ 42%] PASSED tests/test_json_model.py::test_numeric_queries
tests/test_json_model.py::test_sorting
[gw1] [ 43%] PASSED tests_sync/test_hash_model.py::test_validates_required_fields
tests_sync/test_hash_model.py::test_validates_field
[gw3] [ 44%] PASSED tests/test_oss_redis_features.py::test_saves_model_and_creates_pk
tests/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw6] [ 44%] PASSED tests/test_pydantic_integrations.py::test_email_str
tests/test_redis_type.py::test_redis_type
[gw6] [ 45%] PASSED tests/test_redis_type.py::test_redis_type
tests_sync/test_hash_model.py::test_exact_match_queries
[gw4] [ 46%] PASSED tests_sync/test_hash_model.py::test_tag_queries_negation
tests_sync/test_hash_model.py::test_numeric_queries
[gw5] [ 46%] PASSED tests/test_oss_redis_features.py::test_validates_field
[gw2] [ 47%] PASSED tests_sync/test_hash_model.py::test_delete_non_exist
tests_sync/test_hash_model.py::test_full_text_search_queries
[gw1] [ 48%] PASSED tests_sync/test_hash_model.py::test_validates_field
tests/test_oss_redis_features.py::test_validation_passes
tests_sync/test_hash_model.py::test_validation_passes
[gw0] [ 48%] PASSED tests/test_oss_redis_features.py::test_all_keys
tests_sync/test_hash_model.py::test_recursive_query_resolution
[gw1] [ 49%] PASSED tests_sync/test_hash_model.py::test_validation_passes
tests_sync/test_hash_model.py::test_expire
[gw3] [ 50%] PASSED tests/test_oss_redis_features.py::test_raises_error_with_embedded_models
tests/test_oss_redis_features.py::test_saves_many
[gw7] [ 50%] PASSED tests/test_json_model.py::test_sorting
[gw2] [ 51%] PASSED tests_sync/test_hash_model.py::test_full_text_search_queries
tests/test_json_model.py::test_not_found
tests_sync/test_hash_model.py::test_pagination_queries
[gw4] [ 51%] PASSED tests_sync/test_hash_model.py::test_numeric_queries
tests_sync/test_hash_model.py::test_sorting
[gw6] [ 52%] PASSED tests_sync/test_hash_model.py::test_exact_match_queries
[gw5] [ 53%] PASSED tests/test_oss_redis_features.py::test_validation_passes
tests_sync/test_hash_model.py::test_retrieve_first
tests_sync/test_hash_model.py::test_all_pks
[gw1] [ 53%] PASSED tests_sync/test_hash_model.py::test_expire
[gw0] [ 54%] PASSED tests_sync/test_hash_model.py::test_recursive_query_resolution
tests_sync/test_hash_model.py::test_raises_error_with_embedded_models
tests_sync/test_hash_model.py::test_tag_queries_boolean_logic
[gw2] [ 55%] PASSED tests_sync/test_hash_model.py::test_pagination_queries
tests_sync/test_hash_model.py::test_raises_error_with_sets
[gw6] [ 55%] PASSED tests_sync/test_hash_model.py::test_retrieve_first
[gw1] [ 56%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_embedded_models
tests_sync/test_hash_model.py::test_updates_a_model
tests_sync/test_hash_model.py::test_saves_model_and_creates_pk
[gw4] [ 57%] PASSED tests_sync/test_hash_model.py::test_sorting
[gw7] [ 57%] PASSED tests/test_json_model.py::test_not_found
tests_sync/test_hash_model.py::test_saves_many
tests_sync/test_hash_model.py::test_raises_error_with_dataclasses
[gw3] [ 58%] PASSED tests/test_oss_redis_features.py::test_saves_many
tests/test_oss_redis_features.py::test_updates_a_model
[gw0] [ 59%] PASSED tests_sync/test_hash_model.py::test_tag_queries_boolean_logic
tests_sync/test_hash_model.py::test_tag_queries_punctuation
[gw2] [ 59%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_sets
tests_sync/test_hash_model.py::test_raises_error_with_lists
[gw6] [ 60%] PASSED tests_sync/test_hash_model.py::test_saves_model_and_creates_pk
tests_sync/test_hash_model.py::test_access_result_by_index_cached
[gw4] [ 61%] PASSED tests_sync/test_hash_model.py::test_saves_many
tests_sync/test_hash_model.py::test_delete_many
[gw7] [ 61%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_dataclasses
tests_sync/test_hash_model.py::test_raises_error_with_dicts
[gw2] [ 62%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_lists
[gw1] [ 63%] PASSED tests_sync/test_hash_model.py::test_updates_a_model
tests_sync/test_hash_model.py::test_paginate_query
tests_sync/test_hash_model.py::test_primary_pk_exists
[gw0] [ 63%] PASSED tests_sync/test_hash_model.py::test_tag_queries_punctuation
tests_sync/test_hash_model.py::test_primary_key_model_error
[gw6] [ 64%] PASSED tests_sync/test_hash_model.py::test_access_result_by_index_cached
tests_sync/test_hash_model.py::test_access_result_by_index_not_cached
[gw4] [ 65%] PASSED tests_sync/test_hash_model.py::test_delete_many
[gw7] [ 65%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_dicts
tests_sync/test_hash_model.py::test_count
tests_sync/test_json_model.py::test_validates_required_fields
[gw3] [ 66%] PASSED tests/test_oss_redis_features.py::test_updates_a_model
tests_sync/test_hash_model.py::test_schema
[gw5] [ 67%] PASSED tests_sync/test_hash_model.py::test_all_pks
tests_sync/test_hash_model.py::test_delete
[gw0] [ 67%] PASSED tests_sync/test_hash_model.py::test_primary_key_model_error
tests_sync/test_json_model.py::test_saves_model_and_creates_pk
[gw7] [ 68%] PASSED tests_sync/test_json_model.py::test_validates_required_fields
tests_sync/test_json_model.py::test_saves_many_implicit_pipeline
[gw1] [ 69%] PASSED tests_sync/test_hash_model.py::test_paginate_query
tests_sync/test_json_model.py::test_validates_field
[gw2] [ 69%] PASSED tests_sync/test_hash_model.py::test_primary_pk_exists
tests_sync/test_json_model.py::test_validation_passes
[gw3] [ 70%] PASSED tests_sync/test_hash_model.py::test_schema
[gw4] [ 71%] PASSED tests_sync/test_hash_model.py::test_count
tests_sync/test_json_model.py::test_saves_many_explicit_transaction
tests_sync/test_json_model.py::test_delete
[gw6] [ 71%] PASSED tests_sync/test_hash_model.py::test_access_result_by_index_not_cached
tests_sync/test_json_model.py::test_all_pks
[gw5] [ 72%] PASSED tests_sync/test_hash_model.py::test_delete
tests_sync/test_json_model.py::test_delete_many_implicit_pipeline
[gw7] [ 73%] PASSED tests_sync/test_json_model.py::test_saves_many_implicit_pipeline
tests_sync/test_json_model.py::test_paginate_query
[gw1] [ 73%] PASSED tests_sync/test_json_model.py::test_validates_field
tests_sync/test_json_model.py::test_access_result_by_index_cached
[gw4] [ 74%] PASSED tests_sync/test_json_model.py::test_delete
tests_sync/test_json_model.py::test_update_query
[gw2] [ 75%] PASSED tests_sync/test_json_model.py::test_validation_passes
tests_sync/test_json_model.py::test_access_result_by_index_not_cached
[gw0] [ 75%] PASSED tests_sync/test_json_model.py::test_saves_model_and_creates_pk
tests_sync/test_json_model.py::test_updates_a_model
[gw5] [ 76%] PASSED tests_sync/test_json_model.py::test_delete_many_implicit_pipeline
[gw3] [ 76%] PASSED tests_sync/test_json_model.py::test_saves_many_explicit_transaction
tests_sync/test_json_model.py::test_recursive_query_expression_resolution
tests_sync/test_json_model.py::test_in_query
[gw7] [ 77%] PASSED tests_sync/test_json_model.py::test_paginate_query
[gw1] [ 78%] PASSED tests_sync/test_json_model.py::test_access_result_by_index_cached
tests_sync/test_json_model.py::test_recursive_query_field_resolution
tests_sync/test_json_model.py::test_full_text_search
[gw4] [ 78%] PASSED tests_sync/test_json_model.py::test_update_query
tests_sync/test_json_model.py::test_tag_queries_boolean_logic
[gw5] [ 79%] PASSED tests_sync/test_json_model.py::test_recursive_query_expression_resolution
tests_sync/test_json_model.py::test_numeric_queries
[gw2] [ 80%] PASSED tests_sync/test_json_model.py::test_access_result_by_index_not_cached
[gw3] [ 80%] PASSED tests_sync/test_json_model.py::test_in_query
tests_sync/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_json_model.py::test_sorting
[gw0] [ 81%] PASSED tests_sync/test_json_model.py::test_updates_a_model
tests_sync/test_json_model.py::test_tag_queries_negation
[gw7] [ 82%] PASSED tests_sync/test_json_model.py::test_recursive_query_field_resolution
[gw4] [ 82%] PASSED tests_sync/test_json_model.py::test_tag_queries_boolean_logic
tests_sync/test_json_model.py::test_not_found
tests_sync/test_json_model.py::test_allows_dataclasses
[gw1] [ 83%] PASSED tests_sync/test_json_model.py::test_full_text_search
tests_sync/test_json_model.py::test_list_field_limitations
[gw6] [ 84%] PASSED tests_sync/test_json_model.py::test_all_pks
tests_sync/test_json_model.py::test_exact_match_queries
[gw3] [ 84%] PASSED tests_sync/test_json_model.py::test_sorting
tests_sync/test_json_model.py::test_allows_and_serializes_lists
[gw5] [ 85%] PASSED tests_sync/test_json_model.py::test_numeric_queries
tests_sync/test_json_model.py::test_allows_and_serializes_dicts
[gw7] [ 86%] PASSED tests_sync/test_json_model.py::test_not_found
tests_sync/test_json_model.py::test_count
[gw2] [ 86%] PASSED tests_sync/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_json_model.py::test_allows_and_serializes_sets
[gw4] [ 87%] PASSED tests_sync/test_json_model.py::test_allows_dataclasses
[gw0] [ 88%] PASSED tests_sync/test_json_model.py::test_tag_queries_negation
tests_sync/test_oss_redis_features.py::test_all_keys
tests_sync/test_json_model.py::test_schema
[gw3] [ 88%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_lists
[gw5] [ 89%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_dicts
tests_sync/test_oss_redis_features.py::test_validates_field
tests_sync/test_oss_redis_features.py::test_validation_passes
[gw7] [ 90%] PASSED tests_sync/test_json_model.py::test_count
[gw0] [ 90%] PASSED tests_sync/test_json_model.py::test_schema
tests_sync/test_oss_redis_features.py::test_saves_model_and_creates_pk
tests_sync/test_oss_redis_features.py::test_updates_a_model
[gw1] [ 91%] PASSED tests_sync/test_json_model.py::test_list_field_limitations
[gw6] [ 92%] PASSED tests_sync/test_json_model.py::test_exact_match_queries
tests_sync/test_oss_redis_features.py::test_not_found
tests_sync/test_oss_redis_features.py::test_validates_required_fields
[gw2] [ 92%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_sets
tests_sync/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw5] [ 93%] PASSED tests_sync/test_oss_redis_features.py::test_validation_passes
tests_sync/test_redis_type.py::test_redis_type
[gw5] [ 94%] PASSED tests_sync/test_redis_type.py::test_redis_type
[gw3] [ 94%] PASSED tests_sync/test_oss_redis_features.py::test_validates_field
[gw7] [ 95%] PASSED tests_sync/test_oss_redis_features.py::test_saves_model_and_creates_pk
tests_sync/test_pydantic_integrations.py::test_email_str
[gw6] [ 96%] PASSED tests_sync/test_oss_redis_features.py::test_validates_required_fields
[gw0] [ 96%] PASSED tests_sync/test_oss_redis_features.py::test_updates_a_model
[gw1] [ 97%] PASSED tests_sync/test_oss_redis_features.py::test_not_found
[gw2] [ 98%] PASSED tests_sync/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw4] [ 98%] PASSED tests_sync/test_oss_redis_features.py::test_all_keys
tests_sync/test_oss_redis_features.py::test_saves_many
[gw3] [ 99%] PASSED tests_sync/test_pydantic_integrations.py::test_email_str
[gw4] [100%] PASSED tests_sync/test_oss_redis_features.py::test_saves_many
---------- coverage: platform darwin, python 3.10.8-final-0 ----------
Name Stmts Miss Cover Missing
----------------------------------------------------------------------
aredis_om/__init__.py 5 0 100%
aredis_om/async_redis.py 1 0 100%
aredis_om/checks.py 21 12 43% 9-10, 15-18, 23-28
aredis_om/connections.py 10 1 90% 20
aredis_om/model/__init__.py 2 0 100%
aredis_om/model/cli/__init__.py 0 0 100%
aredis_om/model/cli/migrate.py 13 13 0% 1-18
aredis_om/model/encoders.py 72 35 51% 68, 70, 73-86, 94, 96, 98, 132-147, 150-155, 159-173
aredis_om/model/migrations/__init__.py 0 0 100%
aredis_om/model/migrations/migrator.py 87 15 83% 24-35, 45, 56, 83-84, 89-90, 101, 112-114
aredis_om/model/model.py 888 115 87% 100, 111, 128, 136, 145-152, 166, 185, 193, 199, 203, 207, 211-214, 218, 241, 245, 297, 305, 352, 394, 401, 419, 446, 474, 499, 502-508, 527, 529, 533, 561-571, 592-595, 606, 653, 667-672, 685, 699, 701, 703, 705, 768, 787, 823-828, 844-854, 904, 927-928, 1072, 1135, 1157, 1161, 1166, 1190, 1221-1224, 1232, 1308, 1314, 1374-1382, 1396, 1436-1445, 1449, 1464-1472, 1483-1493, 1506, 1606-1607, 1634-1637, 1721, 1725-1729
aredis_om/model/query_resolver.py 23 23 0% 1-103
aredis_om/model/render_tree.py 33 31 6% 24-75
aredis_om/model/token_escaper.py 13 1 92% 16
aredis_om/sync_redis.py 1 1 0% 1
aredis_om/util.py 6 1 83% 7
----------------------------------------------------------------------
TOTAL 1175 248 79%
=============================================================================== 152 passed in 1.45s ================================================================================
docker-compose down
[+] Running 3/3
⠿ Container redis-om-python-oss_redis-1 Removed 0.2s
⠿ Container redis-om-python-redis-1 Removed 0.1s
⠿ Network redis-om-python_default Removed 0.0s
~/Repositories/redis-om-python fix-model-typings*
redis-om-DEJACET3-py3.10 ❯
```
</details> | 1medium
|
Title: Loading donut transformers model getting error
Body: self.model = self.model.to(self.device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: device-side assert triggered
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Code:
class Donut:
    def __init__(self):
        try:
            self.processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
            self.model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
            self.device = "cuda" if torch.cuda.is_available() else "cpu"
            if torch.cuda.is_available():
                try:
                    self.device = torch.device("cuda")
                    self.model = self.model.to(self.device)
                    torch.cuda.empty_cache()
                except RuntimeError as e:
                    console_logger.warning(f"{str(e)}")
        except Exception as e:
            console_logger.error(f"Failed to initialize Donut: {str(e)}")
            raise
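As a side note, a hedged debugging sketch: device-side asserts are raised asynchronously, so the failing `.to(self.device)` call is often not the real culprit. Forcing synchronous kernel launches usually surfaces the kernel that actually failed:
```python
# Debugging sketch (assumption: the assert comes from an earlier CUDA kernel,
# not from .to() itself). Set the env var before torch initializes CUDA,
# then re-run the repro to get a precise stack trace.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # import only after setting the env var
```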
Versions:-
torch==2.0.1+cu118
torchaudio==2.0.2+cu118
torchvision==0.15.2+cu118
transformers==4.24.0
Nvidia Driver :- Driver Version: 550.144.03 CUDA Version: 12.4
Docker image :- nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
| 1medium
|
Title: AttributeError: 'ProgressBarsConsole' object has no attribute 'set_error'
Body: Hi.
The progress bar may not be displayed.
```
$ pip list
tqdm 4.36.1
pandas 0.25.3
pandarallel 1.4.1
```
```
0.00% | 0 / 6164 |
0.00% | 0 / 6164 |
0.00% | 0 / 6163 |
0.00% | 0 / 6163 |
File "/home/ubuntu/test.py", line 60, in _run
df['result'] = df.parallel_apply(_func, axis=1)
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/pandarallel/pandarallel.py", line 384, in closure
map_result,
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/pandarallel/pandarallel.py", line 327, in get_workers_result
progress_bars.set_error(worker_index)
AttributeError: 'ProgressBarsConsole' object has no attribute 'set_error'
```
Because `set_error` seems to be defined only on `ProgressBarsNotebookLab`.
https://github.com/nalepae/pandarallel/blob/master/pandarallel/utils/progress_bars.py#L91
But it seems to be called regardless of whether the instance is a `ProgressBarsNotebookLab` or a `ProgressBarsConsole`.
https://github.com/nalepae/pandarallel/blob/master/pandarallel/pandarallel.py#L322
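Until this is fixed, a hedged monkey-patch sketch (module and class names taken from the links above) that gives the console class a no-op `set_error`:
```
from pandarallel.utils import progress_bars

# No-op stand-in so get_workers_result() can call set_error() on console bars too.
if not hasattr(progress_bars.ProgressBarsConsole, "set_error"):
    progress_bars.ProgressBarsConsole.set_error = lambda self, index: None
```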
It seems this was introduced during a refactoring: https://github.com/nalepae/pandarallel/commit/f297b7547766edba9e9fbdcdac62b88d9b33f4fa | 1medium
|
Title: leafmap.add_raster only recognizes some colormap names?
Body:
### Environment Information
- leafmap version: 0.42.6
- Python version: 3.12.3
- Operating System: Ubuntu 24.04.1
### Description
leafmap seems to recognize only a (very) limited range of matplotlib colormap names.
### What I Did
```
import leafmap
m = leafmap.Map()
m.add_raster('output/freq.tif', vmin=0, vmax=100, colormap = 'rainbow')
m
```
This works fine (freq.tif is a single band GTiff float32 file with values between 0 and 100).
```
m.add_raster('output/freq.tif', vmin=0, vmax=100, colormap = 'Blues')
m
```
throws:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[6], line 5
1 import leafmap
3 m = leafmap.Map()
----> 5 m.add_raster('output/freq.tif', vmin=0, vmax=100, colormap = 'blues')
7 m
File /opt/conda/lib/python3.11/site-packages/leafmap/leafmap.py:2384, in Map.add_raster(self, source, indexes, colormap, vmin, vmax, nodata, attribution, layer_name, layer_index, zoom_to_layer, visible, opacity, array_args, client_args, **kwargs)
2381 if isinstance(source, np.ndarray) or isinstance(source, xr.DataArray):
2382 source = common.array_to_image(source, **array_args)
-> 2384 tile_layer, tile_client = common.get_local_tile_layer(
2385 source,
2386 indexes=indexes,
2387 colormap=colormap,
2388 vmin=vmin,
2389 vmax=vmax,
2390 nodata=nodata,
2391 opacity=opacity,
2392 attribution=attribution,
2393 layer_name=layer_name,
2394 client_args=client_args,
2395 return_client=True,
2396 **kwargs,
2397 )
2398 tile_layer.visible = visible
2400 self.add(tile_layer, index=layer_index)
File /opt/conda/lib/python3.11/site-packages/leafmap/common.py:3002, in get_local_tile_layer(source, port, debug, indexes, colormap, vmin, vmax, nodata, attribution, tile_format, layer_name, client_args, return_client, quiet, **kwargs)
3000 else:
3001 if tile_format == "ipyleaflet":
-> 3002 tile_layer = get_leaflet_tile_layer(
3003 tile_client,
3004 port=port,
3005 debug=debug,
3006 indexes=indexes,
3007 colormap=colormap,
3008 vmin=vmin,
3009 vmax=vmax,
3010 nodata=nodata,
3011 attribution=attribution,
3012 name=layer_name,
3013 **kwargs,
3014 )
3015 else:
3016 tile_layer = get_folium_tile_layer(
3017 tile_client,
3018 port=port,
(...)
3028 **kwargs,
3029 )
File /opt/conda/lib/python3.11/site-packages/localtileserver/widgets.py:105, in get_leaflet_tile_layer(source, port, debug, indexes, colormap, vmin, vmax, nodata, attribution, **kwargs)
98 bounds = Union((Tuple(),), default_value=None, allow_none=True).tag(sync=True, o=True)
100 source, created = get_or_create_tile_client(
101 source,
102 port=port,
103 debug=debug,
104 )
--> 105 url = source.get_tile_url(
106 indexes=indexes,
107 colormap=colormap,
108 vmin=vmin,
109 vmax=vmax,
110 nodata=nodata,
111 client=True,
112 )
113 if attribution is None:
114 attribution = DEFAULT_ATTRIBUTION
File /opt/conda/lib/python3.11/site-packages/localtileserver/client.py:461, in TileServerMixin.get_tile_url(self, indexes, colormap, vmin, vmax, nodata, client)
458 colormap = json.dumps(colormap)
459 else:
460 # make sure palette is valid
--> 461 palette_valid_or_raise(colormap)
463 params["colormap"] = colormap
464 if vmin is not None:
File /opt/conda/lib/python3.11/site-packages/localtileserver/tiler/palettes.py:31, in palette_valid_or_raise(name)
29 def palette_valid_or_raise(name: str):
30 if not is_mpl_cmap(name):
---> 31 raise ValueError(f"Please use a valid matplotlib colormap name. Invalid: {name}")
ValueError: Please use a valid matplotlib colormap name. Invalid: blues
```
'blues' (or 'Blues') **is** a valid matplotlib colormap name. Most other names do not work either.
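A quick registry check (assuming matplotlib >= 3.6, where the `matplotlib.colormaps` registry is available) confirms the capitalized name is registered:
```
import matplotlib

print("Blues" in matplotlib.colormaps)    # True
print("rainbow" in matplotlib.colormaps)  # True
```
 | 1medium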
|
Title: Sometimes I get Dataset Errors when using the lightning module in a distributed manner
Body: ### Bug description
I use a Lightning DataModule. In this module I initialize (according to your tutorials) a torch dataset:
```
class CustomImageDataset(Dataset):
    # Torch dataset to handle basic file operations
    ...
```
```
class DataModule(L.LightningDataModule):
    # Lightning DataModule to handle dataloaders and train/test split
    dset = CustomImageDataset()
```
In most cases it works perfectly fine, but sometimes I get an error when initializing my training, which forces me to restart it until the error no longer appears. This only happens in distributed training.
It happens when I read in my dataset in CustomImageDataset() using a csv reader. The error is:
```
train.py 74 <module>
mydata.setup(stage="fit")
dataset.py 206 setup
self.train_set = self.create_dataset("train")
dataset.py 190 create_dataset
dset = CustomImageDataset(self.data_dir,
dataset.py 50 __init__
self.data_paths, self.targets = self._load_data()
dataset.py 59 _load_data
paths, targets = get_paths(self.data_dir, "train", self.seed)
dataset.py 22 get_paths
r = list(reader)
_csv.Error:
line contains NUL
```
Since the list conversion seems to trigger the bug, I am a bit lost on how to solve it, but maybe you have already stumbled upon it.
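For what it's worth, a hedged workaround sketch (the CSV filename and row layout here are assumptions that mirror the names in the traceback): stripping NUL bytes before the lines reach `csv.reader` avoids the hard failure, though it does not explain why one rank appears to see a corrupted file.
```
import csv
import os

def get_paths(data_dir, split, seed):
    # Hypothetical file layout; the NUL-stripping generator is the actual point.
    csv_path = os.path.join(data_dir, f"{split}.csv")
    with open(csv_path, newline="") as f:
        reader = csv.reader(line.replace("\0", "") for line in f)
        rows = list(reader)
    paths = [r[0] for r in rows]
    targets = [r[1] for r in rows]
    return paths, targets
```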
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 1.5.0):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_ | 2hard
|
Title: Python 3.11 cannot install gensim; if possible, I wish there were a gensim release compatible with Python 3.11 too
Body: <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
What are you trying to achieve? What is the expected result? What are you seeing instead?
#### Steps/code/corpus to reproduce
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
If your problem is with a specific Gensim model (word2vec, lsimodel, doc2vec, fasttext, ldamodel etc), include the following:
```python
print(my_model.lifecycle_events)
```
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
| 2hard
|
Title: Cannot use custom Icon for oauth login
Body: As described in the docs (https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-oauth), we currently support font-awesome classes as provider icons for the OAuth login form. Many companies use custom OAuth providers instead of the public ones. It would be great if one could use an external resource as the login icon.
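A hypothetical sketch of what the configuration could look like; the `icon` key exists today and takes a font-awesome class, while the URL-based key shown here is the proposed, not-yet-existing part:
```python
OAUTH_PROVIDERS = [
    {
        "name": "corp_sso",       # assumed custom provider name
        "icon": "fa-lock",        # supported today: font-awesome class only
        "icon_url": "https://example.com/static/sso-logo.svg",  # proposed, hypothetical
        "token_key": "access_token",
        "remote_app": {},         # provider details omitted for brevity
    }
]
```
 | 1medium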
|
Title: copy code button
Body: ### Is your feature request related to a problem? Please describe.
no
### Describe the solution you'd like
add a copy icon in the top right corner of generated scripts so they are easy to copy
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: ImportError: cannot import name 'get_device' from 'basicsr.utils.misc'
Body: ImportError: cannot import name 'get_device' from 'basicsr.utils.misc'

| 1medium
|
Title: Expose material property 'shininess' for objects
Body: I have a use-case where I would need some meshes to appear shiny whereas others should not. Setting plot.lighting = 0 achieves the latter effect, but then all objects in the scene are affected. I believe this could be controlled individually for each object if the material shininess property, e.g. for a Surface object, https://github.com/K3D-tools/K3D-jupyter/blob/46bec8581e213351aa5df621c1825b95c732486b/js/src/providers/threejs/objects/Surface.js#L36
would be exposed and not hard-coded.
I believe exposing this would also to some degree address the comment raised previously in #51. | 1medium
|
Title: ImportError: DLL load failed while importing onnx_cpp2py_export: The dynamic link library (DLL) initialization routine failed. (original error text: 动态链接库(DLL)初始化例程失败。)
Body: # Bug Report
### Is the issue related to model conversion?
1.16.2

<img width="952" alt="onnx_bug" src="https://github.com/user-attachments/assets/4f0d6581-a62e-4fbb-931b-65eb844a7aae">
| 2hard
|
Title: How to plot confusion matrix in yolov5-cls
Body: ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello! I am trying to find out how to plot a confusion matrix for the YOLOv5 classification task. I want to compare the confusion matrix together with other metrics of models trained on a custom dataset, for both YOLOv5 and YOLOv8. Any ideas, suggestions, or help would be appreciated! A generic sketch of what I mean is below.
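(A framework-agnostic sketch, not a YOLOv5 API; it assumes per-image class indices from a validation run are already available:)
```python
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix
import matplotlib.pyplot as plt

y_true = [0, 1, 1, 2]  # assumed ground-truth class indices from a val split
y_pred = [0, 1, 2, 2]  # assumed model predictions for the same images
ConfusionMatrixDisplay(confusion_matrix(y_true, y_pred)).plot()
plt.show()
```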
### Additional
_No response_ | 3misc
|
Title: Not able to log in after a fresh installation of tracecat
Body: After installing tracecat, I get the error below.
Unhandled Runtime Error
Error: Error creating new user

Below are the log files from Docker.
○ Compiling /_error ...
✓ Compiled /_error in 1661ms
Start new user flow
Auth is disabled, using test token.
⨯ Error: Error creating new user
at newUserFlow (/app/.next/server/chunks/[root of the server]__62c00d._.js:736:23)
Start new user flow
Auth is disabled, using test token.
⨯ Error: Error creating new user
at newUserFlow (/app/.next/server/chunks/[root of the server]__62c00d._.js:736:23)
Call Stack
newUserFlow
/app/.next/server/chunks/[root of the server]__62c00d._.js (736:23)
process.processTicksAndRejections
node:internal/process/task_queues (95:5)
async
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/compiled/next-server/app-page.runtime.dev.js (39:406)
async t2
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/compiled/next-server/app-page.runtime.dev.js (38:6412)
async rS
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/compiled/next-server/app-page.runtime.dev.js (41:1369)
async doRender
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js (1395:30)
async cacheEntry.responseCache.get.routeKind
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js (1544:40)
async DevServer.renderToResponseWithComponentsImpl
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js (1464:28)
async DevServer.renderPageComponent
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js (1861:24)
async DevServer.renderToResponseImpl
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js (1899:32)
async DevServer.pipeImpl
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js (912:25)
async NextNodeServer.handleCatchallRenderRequest
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/next-server.js (269:17)
async DevServer.handleRequestImpl
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js (808:17)
async
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/dev/next-dev-server.js (331:20)
async Span.traceAsyncFn
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/trace/trace.js (151:20)
async DevServer.handleRequest
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/dev/next-dev-server.js (328:24)
async invokeRender
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/lib/router-server.js (136:21)
async handleRequest
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/lib/router-server.js (315:24)
async requestHandlerImpl
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/lib/router-server.js (339:13)
async Server.requestListener
/app/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/next/dist/server/lib/start-server.js (140:13) | 1medium
|
Title: Updating model for Coreference Resolution
Body: I noticed a new SoTA on Ontonotes 5.0 Coreference task on [paperswithcode](https://paperswithcode.com/paper/word-level-coreference-resolution#code)
The author provides the model (.pt) file in [their git repo](https://github.com/vdobrovolskii/wl-coref#preparation) and claims it is faster (since it uses RoBERTa) while improving on SpanBERT's avg. F1 score.
What would be the steps to use this checkpoint in the AllenNLP Predictor?
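For context, the current SpanBERT model loads from an AllenNLP archive, whereas the wl-coref checkpoint is a plain PyTorch state dict; a sketch of the existing usage for comparison (model URL as published by AllenNLP):
```python
from allennlp.predictors.predictor import Predictor

# Loads an AllenNLP .tar.gz archive; the wl-coref .pt file is not one of these,
# so from_path cannot consume it directly.
predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/coref-spanbert-large-2021.03.10.tar.gz"
)
print(predictor.predict(document="Eva told Maria she would visit her.")["clusters"])
```
 | 2hard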
|
Title: Update docs to include more community projects
Body: | 0easy
|
Title: Apple M1: python -m pip install tensorflow-macos installs 2.9.2
Body: Be advised that when following the M1 setup instructions, the TensorFlow installation instructions at https://developer.apple.com/metal/tensorflow-plugin/ will now install 2.9.2, which will throw an error when you run `python faceswap.py gui`.
Until faceswap supports TensorFlow 2.9, the following change worked for me:
use:
```
conda install -c apple tensorflow-deps==2.8.0
python -m pip install tensorflow-macos==2.8.0
```
instead of:
```
conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
``` | 1medium
|
Title: Add install section to README
Body: Because it's a standard good practice to tell people how to install something. ;) | 0easy
|
Title: More bitstamp l2 data?
Body: It looks like a different websocket address has more L2 book data than the one currently in the code, at `diff_order_book_v2.html`. It's the 'live full order book' instead of just the 'live order book'. Should we change it, or add an option to use the full order book? https://www.bitstamp.net/websocket/v2/ | 1medium
|
Title: Fixtures inside a test class are not displayed in the Allure report when a parametrized fixture is used
Body:
#### I'm submitting a ...
- [ ] bug report
#### What is the current behavior?
Pytest fixtures written inside a test class are not displayed in the Allure report if there is a parametrized pytest fixture before them.
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
I have a parametrized pytest fixture:
```
@pytest.fixture(scope="session", params=auth_params)
def public_api_client(request):
    pass
```
and a fixture inside a test class:
```
class TestPublic:
@pytest.fixture(scope="class", autouse=True)
def delete_ws(self, request):
pass
```
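For reference, a self-contained repro sketch (the `auth_params` values are assumptions; everything else mirrors the snippets above):
```
import pytest

auth_params = ["user", "admin"]  # assumed values, not from the real suite

@pytest.fixture(scope="session", params=auth_params)
def public_api_client(request):
    yield request.param

class TestPublic:
    @pytest.fixture(scope="class", autouse=True)
    def delete_ws(self, request):
        yield  # this fixture is the one missing from the report

    def test_dummy(self, public_api_client):
        assert public_api_client
```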
After running the tests, I see that **delete_ws** is not displayed in the Allure report.

When I delete **params** from public_api_client, I find the **delete_ws** fixture in the set up \ tear down section.

#### What is the expected behavior?
All pytest fixtures are displayed in the Allure report in the set up \ tear down section.
#### Please tell us about your environment:
- Allure version: 2.16.1
- Test framework: pytest 6.2.5
- Allure adaptor: [email protected]
#### Other information
[//]: # (
. e.g. detailed explanation, stacktraces, related issues, suggestions
. how to fix, links for us to have more context, eg. Stackoverflow, Gitter etc
)
| 1medium
|
Title: The word transitions to the wrong prototype
Body: 
| 1medium
|
Title: What does the visualization look like after TNet matmul input?
Body: Thanks for the excellent work! The authors say, "We predict an affine transformation matrix by a mini-network and directly apply this transformation to the coordinates of input points."
I'm curious what the point cloud looks like after the transformation. Is it a three-dimensional representation that a person can still understand? | 3misc
|
Title: some extraction duplicated in xml
Body: hi,
I was setting up a test site and playing with trafilatura and found a weird bug.
site URL:
`https://milkfriends.s1-tastewp.com/2024/06/27/ok-this/`
As this test site is only available for 2 days, I have also attached the simple Gutenberg block code below so you can replicate the issue.
Command:
```
html = trafilatura.fetch_url(url, no_ssl=True,)
ts = trafilatura.extract(html, output_format='xml', include_comments=False)
```
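(Side note: `extract` also accepts a `deduplicate` flag in recent trafilatura versions; I have not verified whether it affects this case, but for completeness, this is the variant I would also try:)
```
ts = trafilatura.extract(html, output_format='xml', include_comments=False,
                         deduplicate=True)
```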
The WordPress Gutenberg HTML is below:
```
<!-- wp:paragraph -->
<p>this is sample intro</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">intro 2</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>table below</p>
<!-- /wp:paragraph -->
<!-- wp:table -->
<figure class="wp-block-table"><table><tbody><tr><td>a</td><td>b</td><td></td></tr><tr><td>f</td><td>s</td><td>s</td></tr><tr><td>g</td><td></td><td>b</td></tr></tbody></table></figure>
<!-- /wp:table -->
<!-- wp:paragraph -->
<p>header table below</p>
<!-- /wp:paragraph -->
<!-- wp:table -->
<figure class="wp-block-table"><table><thead><tr><th>b</th><th>s</th><th>h</th></tr></thead><tbody><tr><td>a</td><td>b</td><td></td></tr><tr><td>f</td><td>s</td><td>s</td></tr><tr><td>g</td><td></td><td>b</td></tr></tbody></table></figure>
<!-- /wp:table -->
<!-- wp:paragraph -->
<p>list below</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li>this is 1</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>this is 2</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>this is 3</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p>numbered list below</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li>this is 1</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>this is 2</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>this is 3</li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
```
It is a very simple extraction, but I find some elements are extracted twice.
The elements below "this is sample intro" appear twice, yet not all of them do: some of the list elements only show up once.
See the extraction below:
```
<doc sitename="milkfriends.s1-tastewp.com" title="ok this" author="Admin" date="2024-06-27" url="https://milkfriends.s1-tastewp.com/2024/06/27/ok-this/" hostname="s1-tastewp.com" fingerprint="f69d7033beefe32d">
<main>
<p>this is sample intro</p>
<head rend="h3">intro 2</head>
<p>table below</p>
<table>
<row span="3">
<cell>a</cell>
<cell>b</cell>
</row>
<row span="3">
<cell>f</cell>
<cell>s</cell>
<cell>s</cell>
</row>
<row>
<cell>g</cell>
<cell>b</cell>
</row>
</table>
<p>header table below</p>
<table>
<row span="3">
<cell role="head">b</cell>
<cell role="head">s</cell>
<cell role="head">h</cell>
</row>
<row span="3">
<cell>a</cell>
<cell>b</cell>
</row>
<row span="3">
<cell>f</cell>
<cell>s</cell>
<cell>s</cell>
</row>
<row>
<cell>g</cell>
<cell>b</cell>
</row>
</table>
<p>list below</p>
<list rend="ul">
<item>this is 1</item>
<item>this is 2</item>
<item>this is 3</item>
</list>
<p>numbered list below</p>
<list rend="ol">
<item>this is 1</item>
<item>this is 2</item>
<item>this is 3</item>
</list>
<p>this is sample intro</p>
<p>table below</p>
<table>
<row span="3">
<cell>a</cell>
<cell>b</cell>
</row>
<row span="3">
<cell>f</cell>
<cell>s</cell>
<cell>s</cell>
</row>
<row>
<cell>g</cell>
<cell>b</cell>
</row>
</table>
<p>header table below</p>
<table>
<row span="3">
<cell role="head">b</cell>
<cell role="head">s</cell>
<cell role="head">h</cell>
</row>
<row span="3">
<cell>a</cell>
<cell>b</cell>
</row>
<row span="3">
<cell>f</cell>
<cell>s</cell>
<cell>s</cell>
</row>
<row>
<cell>g</cell>
<cell>b</cell>
</row>
</table>
<p>list below</p>
<p>numbered list below</p>
</main>
</doc>
```
| 1medium
|
Title: Easy Install Fails to configure nginx
Body: Bone stock install on a brand new Debian 12 vm.
The nginx config doesn't work, and just a blank page is displayed instead of an admin login.
I'll try the manual install now... | 1medium
|
Title: why no Windows win32 (32-bit) wheels on PyPI?
Body: https://pypi.org/project/orjson/#files only contains `win_amd64` wheels for 64-bit Python for Windows, but no `win32` wheels for the 32 bit Windows Python, which AFAIK is still the default download from Python.org.
I'd like to depend on orjson in my python app which supports both win32 and win_amd64 and was wondering if the absence of the win32 wheels is a technical limitation (orjson doesn't support 32 bit) or simply that you decided not to build those for other reasons.
Thanks | 3misc
|
Title: FastCRUD class docs don't match signature
Body: **Describe the bug or question**
According to the [documentation](https://igorbenav.github.io/fastcrud/api/fastcrud/), the `FastCRUD` class takes optional create, update, and delete schemas as arguments, but this doesn't make sense according to the calling signature for `FastCRUD.__init__()` and indeed it doesn't seem to work in practice.
(As a secondary but related matter, the documentation references a bunch of example model/schema classes that are never fully defined anywhere, including the code and tests, which made figuring out how to articulate this fairly tricky. If what the docs say is supposed to work, it's something that probably needs some actual tests written to confirm.)
**To Reproduce**
I had to do a little fudging using the `Item` model/schema classes to get a set that'd work for the `User` model referenced in the docs, but it seems to me like this:
```python
from fastcrud import FastCRUD
from pydantic import BaseModel
from sqlalchemy import Boolean, Column, DateTime, Integer, String
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
class User(Base):
__tablename__ = "user"
id = Column(Integer, primary_key=True)
name = Column(String)
archived = Column(Boolean, default=False)
archived_at = Column(DateTime)
class UserCreateSchema(BaseModel):
name: str
class UserUpdateSchema(BaseModel):
name: str
```
should work for this (Example 6 from the page listed above, in turn taken from the `FastCRUD` class doc string), and yet:
```python
>>> custom_user_crud = FastCRUD(User, UserCreateSchema, UserUpdateSchema, is_deleted_column="archived", deleted_at_column="archived_at")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: FastCRUD.__init__() got multiple values for argument 'is_deleted_column'
```
For that matter, Example 1 _seems_ like it works, but the case of it "working" is actually a typing violation:
```python
>>> user_crud = FastCRUD(User, UserCreateSchema, UserUpdateSchema)
>>> user_crud.is_deleted_column
<class '__main__.UserCreateSchema'>
>>> user_crud.deleted_at_column
<class '__main__.UserUpdateSchema'>
```
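For what it's worth, the traceback suggests `is_deleted_column` occupies an early positional slot in the real signature, so passing only the model positionally and everything else as keywords should avoid the collision. A hedged sketch, inferred from the error rather than from the documented API:
```python
custom_user_crud = FastCRUD(
    User,
    is_deleted_column="archived",
    deleted_at_column="archived_at",
)
```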
| 1medium
|
Title: Douyin can apparently no longer be parsed
Body: As the title says. | 1medium
|
Title: How to recognize negative numbers?
Body: I used this model to recognize negative numbers like '-264.27' with CPU only.
But I get the list ['264.27'] without the negative sign, which is kind of weird.
What's wrong with my code? Any suggestions?
Thanks a lot!
| 1medium
|
Title: New event loop is created across different runs in asyncio
Body: I think this is not intentional - the implication is that setting new event loop policies will possibly lose the current event loop:
https://github.com/agronholm/anyio/blob/14daa2bad967bf6dd0f96d04ecffe8e383985195/anyio/_backends/_asyncio.py#L71-L86
This causes the pytest plugin to have different event loops across fixtures and tests:
```python
import asyncio
import pytest
@pytest.fixture
async def loop():
return asyncio.get_event_loop()
@pytest.mark.anyio
async def test(loop):
assert loop is asyncio.get_event_loop() # fails here
```
In many cases, the fixture should share the same loop as the test, for example:
```python
import asyncpg
import pytest
@pytest.fixture
async def conn():
return await asyncpg.connect("postgresql:///")
@pytest.mark.anyio
async def test(conn):
assert await conn.fetchval("SELECT 123") == 123
```
```
E RuntimeError: Task <Task pending name='Task-5' coro=<run.<locals>.wrapper() running at .../anyio/_backends/_asyncio.py:67> cb=[run_until_complete.<locals>.<lambda>()]> got Future <Future pending cb=[Protocol._on_waiter_completed()]> attached to a different loop
```
One possible solution is to cache all policy instances under different setups, and switch to use the right one. I'll create a PR with a more complete test. | 2hard
|
Title: Could you please share some hyperparameters for Resnet-50 in several datasets?
Body: I am really grateful for your project; I am just starting out on person re-ID.
However, apart from the settings you shared and the defaults, the ResNet-50 models I trained perform far worse than the benchmarks.
Could you please share the hyperparameters for ResNet-50 with xent loss on the Market1501 | CUHK03 | DukeMTMC-reID | MSMT17 datasets?
I would really appreciate a reply.
| 1medium
|
Title: Explain the status and the future of the library (in PR template, README, docs site)
Body: Attempting to file a new feature request shows the text:
> Requests is not accepting feature requests at this time.
Fair enough, there must be a reason for that (e.g. lack of maintainers).
However, maybe explain a bit more - if new features are not accepted, what is the future of the library in general? You could:
- Pin a ticket on the issue tracker
- Add a note to the README
- And/or maybe to the docs site
The current state leaves people (at least me) searching/googling for the status and an explanation, and nothing is found. | 3misc
|
Title: Creating Multiple streams from Client
Body: What I see in the current implementation is when you create a stream from the client-side by calling:
https://github.com/aiortc/aioquic/blob/c99b43f4c7a1903e9ab2431932395bb7e0b29232/src/aioquic/asyncio/protocol.py#L63-L75
You get the stream_id and create reader/writer objects; however, you don't create a new entry in QuicConnection._streams. As a result, every newly created stream gets the same stream_id of 0, because there are no entries in QuicConnection._streams. I believe create_stream should also call:
https://github.com/aiortc/aioquic/blob/c99b43f4c7a1903e9ab2431932395bb7e0b29232/src/aioquic/quic/connection.py#L1157-L1197
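A sketch of the kind of change being suggested; the method names mirror the linked code but should be treated as illustrative rather than verbatim:
```python
# Hypothetical fix inside QuicConnectionProtocol.create_stream():
stream_id = self._quic.get_next_available_stream_id(is_unidirectional=False)
# Register the stream with the connection so that QuicConnection._streams gains
# an entry and the next create_stream() call does not hand out the same id.
self._quic._get_or_create_stream_for_send(stream_id)
```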
| 1medium
|
Title: I wrote a customized state space class but could not figure out the error. Please help.
Body: Statsmodels - 0.14.0.dev535
@ChadFulton Could you please help me with this error? Thanks!
```
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_46580/847275192.py in <module>
----> 1 two_factor_fit = two_factor_mod.fit()
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py in fit(self, start_params, transformed, includes_fixed, cov_type, cov_kwds, method, maxiter, full_output, disp, callback, return_params, optim_score, optim_complex_step, optim_hessian, flags, low_memory, **kwargs)
726 else:
727 func = self.smooth
--> 728 res = func(mlefit.params, transformed=False, includes_fixed=False,
729 cov_type=cov_type, cov_kwds=cov_kwds)
730
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py in smooth(self, params, transformed, includes_fixed, complex_step, cov_type, cov_kwds, return_ssm, results_class, results_wrapper_class, **kwargs)
887
888 # Wrap in a results object
--> 889 return self._wrap_results(params, result, return_ssm, cov_type,
890 cov_kwds, results_class,
891 results_wrapper_class)
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py in _wrap_results(self, params, result, return_raw, cov_type, cov_kwds, results_class, wrapper_class)
786 wrapper_class = self._res_classes['fit'][1]
787
--> 788 res = results_class(self, params, result, **result_kwargs)
789 result = wrapper_class(res)
790 return result
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py in __init__(self, model, params, results, cov_type, cov_kwds, **kwargs)
2316 self.param_names = [
2317 '%s (fixed)' % name if name in self.fixed_params else name
-> 2318 for name in (self.data.param_names or [])]
2319
2320 # Save the state space representation output
~\Anaconda3\envs\new_base\lib\site-packages\statsmodels\base\data.py in param_names(self)
354 def param_names(self):
355 # for handling names of 'extra' parameters in summary, etc.
--> 356 return self._param_names or self.xnames
357
358 @param_names.setter
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
``` | 2hard
|
Title: TempoDetector batch processing hangs on bad input
Body: ### Expected behaviour
Run
tempodetector --mirex batch 105247.mp3
It should finish processing, no matter what.
The attached test file is from [FMA_medium](https://github.com/mdeff/fma).
### Actual behaviour
It prints a message that the file could not be read (which is correct), but then simply hangs.
I'll provide a fix in a PR shortly.
[105247.mp3.zip](https://github.com/CPJKU/madmom/files/3440586/105247.mp3.zip)
| 1medium
|
Title: Cannot make optional mutation argument
Body: According to the docs args are optional by default:
```python
class RechargeSim(graphene.Mutation):
class Arguments:
msisdn = graphene.String(required=True)
network_id = graphene.String(required=True)
product_id = graphene.String()
airtime_amount = graphene.Float()
```
Here is a query:
```python
result = authed_graphql_client.execute(
'''
mutation($msisdn: String!, $networkId: String!, $productId: String!) {
rechargeSim(msisdn: $msisdn,
networkId: $networkId,
productId: $productId) {
rechargeId
message
}
}
''',
variable_values={
'msisdn': sim.msisdn,
'networkId': network_id,
'productId': product_id
}
)
```
I get this error:
`TypeError: mutate() missing 1 required positional argument: 'airtime_amount' `
I have tried the following variations:
```python
class RechargeSim(graphene.Mutation):
    class Arguments:
        msisdn = graphene.String(required=True)
        network_id = graphene.String(required=True)
        product_id = graphene.String()
        airtime_amount = graphene.Float(required=False)
```
```python
class RechargeSim(graphene.Mutation):
    class Arguments:
        msisdn = graphene.String(required=True)
        network_id = graphene.String(required=True)
        product_id = graphene.String()
        airtime_amount = graphene.Float(default_value=None)
```
```python
class RechargeSim(graphene.Mutation):
    class Arguments:
        msisdn = graphene.String(required=True)
        network_id = graphene.String(required=True)
        product_id = graphene.String()
        airtime_amount = graphene.Float(required=False, default_value=None)
```
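For context, graphene passes the declared arguments to `mutate()` as keyword arguments, so optional arguments also need defaults in the resolver signature itself. A minimal sketch (the output fields and resolver body are hypothetical):
```python
class RechargeSim(graphene.Mutation):
    class Arguments:
        msisdn = graphene.String(required=True)
        network_id = graphene.String(required=True)
        product_id = graphene.String()
        airtime_amount = graphene.Float()

    recharge_id = graphene.String()
    message = graphene.String()

    def mutate(self, info, msisdn, network_id, product_id=None, airtime_amount=None):
        # Defaults in the signature make the schema-optional arguments
        # optional at call time as well.
        return RechargeSim(recharge_id="hypothetical-id", message="ok")
```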
In the end I have worked around this by defaulting the value to 0 and then resetting it to None in the resolver if it is 0. | 1medium
|
Title: Huobi digital-currency minute-level BTC data is empty
Body: API: coin_bar
Huobi raises an error; OKEx returns data.
start_date='2020-04-01 00:00:01',
end_date='2020-04-22 19:00:00': Huobi also returns no data for this date range.
ID:376461 | 1medium
|
Title: [BUG] `save_changes()` doesn't throw an error when document doesn't exist
Body: **Describe the bug**
In 1.11.9 the state handling was changed: the saved state no longer defaults to None, but is instead initialized with a dict of the default values. This causes `check_if_state_saved` to never throw an error.
`save_changes` on a newly created document that isn't saved in the database silently does nothing.
**To Reproduce**
```python
user_data = UserEntry(...)
user_data.x = ...
await user_data.save_changes()
```
**Expected behavior**
`StateNotSaved("No state was saved")`
**Additional context**
https://github.com/roman-right/beanie/compare/1.11.8...1.11.9
https://canary.discord.com/channels/822196934973456394/822196935435747332/1042243293662158970
| 1medium
|
Title: The model runs independently, which can increase the speed!!
Body: As we all know, the project is divided into four steps. When I run the model of each step independently, the speed is significantly improved. | 1medium
|
Title: Question about Tensor Input Size Changes in Version 1.0.0
Body: Hello developers.
I appreciate all your efforts to improve this software.
Now, I noticed that the transcription behavior has changed a lot in version 1.0.0.
I found out that the size of the Tensor input to the model is different. In other words, the encode output differs from the previous version, so the result of generate is also different. This may be related to the quality of the transcription.
The following code from openai's Whisper shows that the last dimension of mel_segment is padded to be N_FRAMES.
https://github.com/openai/whisper/blob/ba3f3cd54b0e5b8ce1ab3de13e32122d0d5f98ab/whisper/transcribe.py#L276
Therefore, I wonder if the same process as the function pad_or_trim is needed in this repository?
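For reference, a minimal sketch of what that step could look like here, mirroring openai/whisper's `pad_or_trim` (where N_FRAMES is 3000 mel frames, i.e. 30 seconds):
```python
import torch

N_FRAMES = 3000  # 30 s of mel frames in openai/whisper

def pad_or_trim(mel: torch.Tensor, length: int = N_FRAMES) -> torch.Tensor:
    # Zero-pad or trim the last dimension so it is exactly `length` frames.
    if mel.shape[-1] > length:
        return mel[..., :length]
    if mel.shape[-1] < length:
        return torch.nn.functional.pad(mel, (0, length - mel.shape[-1]))
    return mel
```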
Note:
The environment I checked is as follows.
OS: Windows 10
Python: 3.9.13
| 1medium
|
Title: How can I run the server from source rather than install from PyPi?
Body: **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version: 2.3
- Python version: 3.8.2
- `bert-as-service` version: 1.10
- GPU model and memory: nvidia
- CPU model and memory:
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start -model_dir ~/workdir/bert-models/uncased_L-12_H-768_A-12 -num_worker=4
```
and calling the server via:
```python
bc = BertClient(YOUR_CLIENT_ARGS)
bc.encode()
```
Then this issue shows up:
When I launch the server, I get `AttributeError: module 'tensorflow' has no attribute 'logging' bert as service`. I see that this has been fixed in master, but it is not promoted to PyPi. How can I run bert-as-service server from source code as opposed to installing it from `pip`? thank you
... | 1medium
|
Title: [🕹️] Write a Article Comparing OpenBB and Other Financial Tools
Body: ### What side quest or challenge are you solving?
Write a Article Comparing OpenBB and Other Financial Tools
### Points
300
### Description
15-October-2024 by Neha Prasad » https://nehaprasad27118.medium.com/openbb-vs-proprietary-financial-tools-the-case-for-open-source-in-finance-9563320ff4cd:
### Provide proof that you've completed the task

| 1medium
|
Title: Make monitoring of healthchecks.io easier by instrumenting metrics / traces.
Body: Hi, I would like to instrument healthchecks.io with proper metrics (i.e. tracking HTTP request durations, status codes, etc.) so that monitoring the app in production is a lot easier. Ideally using OpenTelemetry, which also supports traces: https://opentelemetry.io/docs/python/ | 1medium
|
Title: Error setting template paths; Voila fails to render
Body: I'm running Voila to serve a dashboarding notebook to users via Jupyter Hub, using a proxy configuration as follows:
```python
c.ServerProxy.servers = {
    'voila': {
        'command': ['voila', '--debug', '--enable_nbextensions=True',
                    '--MappingKernelManager.cull_interval=60',
                    '--MappingKernelManager.cull_idle_timeout=120',
                    '--MappingKernelManager.cull_busy=True',
                    '--MappingKernelManager.cull_connected=True',
                    '--no-browser', '--port', '{port}',
                    '--base_url', '{base_url}voila/',
                    '--server_url', '/',
                    '/srv/voila/goldmine.ipynb']
    }
}
```
This has worked perfectly for many months.
After a recent pip upgrade, only 1 of my users is now able to successfully render the notebook page using Voila; everybody else receives a `jinja2.exceptions.TemplateNotFound: voila.tpl` error.
In comparing the --debug logs for successful and failed scenarios, it appears that the template paths are initially set correctly (in all cases):
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: [Voila] **nbconvert template paths:
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/voila/templates/default/nbconvert_templates**
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: [Voila] **template paths:
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/voila/templates/default/templates**
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: [Voila] static paths:
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/voila/templates/default/static
Sep 01 03:46:03 ip-10-0-0-126 bash[19856]: /opt/tljh/user/lib/python3.6/site-packages/voila/static
But for sessions/users that fail, the template paths are subsequently prepended with the user's ~/.local path before preprocessing:
**Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: [Voila] Template paths:
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /home/jupyter-dschofield/.local/share/jupyter/nbconvert/templates/html**
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates/html
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/local/share/jupyter/nbconvert/templates/html
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/share/jupyter/nbconvert/templates/html
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates/lab
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates/base
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /home/jupyter-dschofield/.local/share/jupyter
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /home/jupyter-dschofield/.local/share/jupyter/nbconvert/templates
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /home/jupyter-dschofield/.local/share/jupyter/nbconvert/templates/compatibility
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /opt/tljh/user/share/jupyter/nbconvert/templates/compatibility
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/local/share/jupyter
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/local/share/jupyter/nbconvert/templates
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/local/share/jupyter/nbconvert/templates/compatibility
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/share/jupyter
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/share/jupyter/nbconvert/templates
Sep 01 03:46:04 ip-10-0-0-126 bash[19856]: /usr/share/jupyter/nbconvert/templates/compatibility
Of course, the default template does not exist under the first path (/home/jupyter-dschofield/.local/share/jupyter/nbconvert/templates/html) and therefore rendering fails with the Jinja template not found issue.
This does not appear to be a permissions issue and nothing else that I'm aware of has changed system-wide.
async-generator==1.10
ipykernel==5.3.0
ipympl==0.5.6
ipysheet==0.4.4
ipython==7.14.0
ipython-genutils==0.2.0
ipytree==0.1.8
ipywidgets==7.5.1
Jinja2==2.11.2
jupyter-client==6.1.3
jupyter-core==4.6.3
jupyter-server==0.1.1
jupyter-server-proxy==1.2.0
jupyterhub==1.0.0
jupyterhub-idle-culler==1.0
jupyterlab==2.2.4
jupyterlab-pygments==0.1.1
jupyterlab-server==1.2.0
nbclient==0.4.3
nbconvert==5.6.1
nbformat==5.0.6
notebook==6.0.3
Pygments==2.6.1
tornado==6.0.4
traitlets==4.3.3
voila==0.1.22
widgetsnbextension==3.5.1
| 1medium
|
Title: CGI Generic SQL Injection (blind)
Body:
### Environment
Flask-Appbuilder version: `2.1.9`
### Describe the expected results
I am using Superset with Flask-AppBuilder at the version stated above. While running a scan with Nessus, I found the problem below on the `login` page.
### Describe the actual results
```
Using the POST HTTP method, Nessus found that :
+ The following resources may be vulnerable to blind SQL injection :
+ The 'csrf_token' parameter of the /login/ CGI :
/login/ [username=&password=&csrf_token=ImQzYzFjYTZmMWQwMjMxNjcyMzQyOWI1
NGUwYzU1MzYwNTAzZWQ0YjQi.XxqOfw.Rdt9Egs2sOALP63VUCR2zqBKg5Ezz&password=&
csrf_token=ImQzYzFjYTZmMWQwMjMxNjcyMzQyOWI1NGUwYzU1MzYwNTAzZWQ0YjQi.XxqO
fw.Rdt9Egs2sOALP63VUCR2zqBKg5Eyy]
-------- output --------
<title>400 Bad Request</title>
<h1>Bad Request</h1>
<p>The CSRF tokens do not match.</p>
-------- vs --------
<title>400 Bad Request</title>
<h1>Bad Request</h1>
<p>The CSRF token is invalid.</p>
------------------------
```
### Steps to reproduce
| 1medium
|
Title: Divided by zero exception
Body: Error: Attempted to divide by zero. | 1medium
|
Title: imageio: fps no longer supported, use duration instead
Body: https://github.com/polakowo/vectorbt/blob/8c040429ac65d43ea431dc2789bf4787dd103533/vectorbt/utils/image_.py#LL74C1-L74C72
Clearly `vbt.save_animation` is still passing `fps` to `imageio.get_writer`, which no longer supports it.
A quick fix would be to change this line to:
`with imageio.get_writer(fname, duration=fps // 5, **writer_kwargs) as writer:` | 0easy
|
Title: [Question] Opinions on including SplineTransformer as feature preprocessing step
Body: I was wondering if there were any plans to bring the [SplineTransformer() preprocessor](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.SplineTransformer.html) into `auto-sklearn` (it became available in `sklearn` in a newer version than what is currently being used). I have been testing it recently as a custom component and it has been achieving great results for me, although I am aware it is the type of preprocessor that could result in models with poor generalization capacity due to its nature.
Have you worked with this preprocessor before? What are your opinions about including it in an automated ML workflow such as `auto-sklearn`?
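For readers who haven't used it, a minimal sketch of how it typically appears in a pipeline (a regularized linear model on a B-spline basis expansion):
```python
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

# Expand each feature into a B-spline basis, then fit a linear model on top.
model = make_pipeline(
    SplineTransformer(n_knots=5, degree=3),
    Ridge(alpha=1.0),
)
```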
| 1medium
|
Title: Dash.run return url with 404 when calling JupyterDash.infer_jupyter_proxy_config()
Body: Hello,
I'm migrating several Jupyter Notebooks using dash.
The setup used to rely on https://pypi.org/project/jupyter-plotly-dash/; I upgraded all Python libraries and migrated to a miniforge environment.
The notebooks are distributed through a jupyterhub, behind nginx as reverse proxy. Both nginx and jupyterhub are launched inside docker containers.
**Describe the bug**
I could not get Dash to display, even with the simplest example.
1) When calling jupyter_dash.infer_jupyter_proxy_config() as described on this page: https://dash.plotly.com/dash-in-jupyter, I got a JupyterHub 404 error. The provided URL ([lab/user/fb03416l/proxy/8050/](https://HOST/lab/user/USER_ID/proxy/8050/)) does not seem to be correct.


2) It does not work without calling this function either; whatever jupyter mode argument is given to Dash.run(), the client tries to connect to the local address 127.0.0.1:PORT.

Hoping that someone will be able to help me; thanks a lot. Please see my configuration files below.
**Describe your context**
docker-compose.yaml
```
services:
nginx:
container_name: 'nginx-service'
image: nginx:alpine
volumes:
- /appli/visuconti/visuconti_install/nginx.conf.template:/etc/nginx/templates/nginx.conf.template:ro
- /etc/nginx/ssl:/etc/nginx/ssl:ro
network_mode: host
environment:
- SERVER_NAME='servername'
restart: always
jupyterhub:
container_name: "jupyterhub-service"
build:
context: .
dockerfile: jupyterhub.Dockerfile
volumes:
- /appli/visuconti/visuconti_install/jupyterhub_config.py:/srv/jupyterhub/jupyterhub_config.py:ro
- /home:/home
network_mode: host
restart: always
```
nginx.conf.template
```
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
listen 80;
server_name ${SERVER_NAME};
# Redirect the request to HTTPS
return 302 https://$host$request_uri;
}
# HTTPS server to handle JupyterHub
server {
listen 443 ssl;
server_name ${SERVER_NAME};
ssl_certificate /etc/nginx/ssl/feevisu.crt;
ssl_certificate_key /etc/nginx/ssl/feevisu.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
#ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
# Managing literal requests to the JupyterHub frontend
location /lab {
proxy_pass http://0.0.0.0:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# websocket headers
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Scheme $scheme;
proxy_buffering off;
}
# Managing requests to verify letsencrypt host
location ~ /.well-known {
allow all;
}
}
```
jupyterhub.Dockerfile
```
FROM condaforge/miniforge3
ARG DEBIAN_FRONTEND=noninteractive
COPY environment.yaml ./environment.yaml
RUN mamba env create -f ./environment.yaml
# JUPYTER LAB BUILD
RUN conda run -n visuconti-env /bin/bash -c '\
jupyter lab clean && \
jupyter lab build'
RUN apt-get update && apt-get install -y \
sssd \
krb5-user \
net-tools \
sssd-tools \
sssd-dbus \
krb5-user \
krb5-locales \
libkrb5-26-heimdal
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "visuconti-env", "jupyterhub", "-f","/config/jupyterhub_config.py"]
```
environment.yaml
```
name: visuconti-env
dependencies:
- python=3.7
- jupyterlab=3.6.6
- jupyterhub=4.0.2
- dash=2.14.0
- numpy=1.21.6
- netCDF4=1.6.0
- pandas=1.3.5
- scipy=1.7.3
```
jupyterhub_config.py
```
import os
import grp
os.umask(0o002)
c.JupyterHub.admin_access = True
c.JupyterHub.authenticator_class = 'dummy' #'jupyterhub.auth.PAMAuthenticator'
c.JupyterHub.bind_url = 'http://:8000/lab'
c.JupyterHub.cleanup_proxy = True
c.JupyterHub.cleanup_servers = True
c.JupyterHub.hub_connect_ip = '127.0.0.1'
c.JupyterHub.hub_connect_url = 'http://127.0.0.1:12424'
c.JupyterHub.hub_ip = '127.0.0.1'
c.JupyterHub.hub_port = 12424
c.JupyterHub.reset_db = True
c.JupyterHub.spawner_class = 'jupyterhub.spawner.LocalProcessSpawner'
c.Spawner.debug = True
c.Spawner.default_url = '/lab'
c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL']
c.Authenticator.allowed_users = {'fb03416l'}
c.Authenticator.delete_invalid_users = True
c.Authenticator.enable_auth_state = False
c.LocalAuthenticator.create_system_users = True
``` | 2hard
|
Title: [Usage]: how to cache the lora adapter in memory
Body: ### Your current environment
I want to build a multi-LoRA service, but the following code seems to reload the LoRA adapter on every request.
```python
# Imports assumed for this snippet; the FastAPI `app` and AsyncLLMEngine
# `engine` setup are omitted in the report.
from uuid import uuid4

from fastapi import FastAPI
from pydantic import BaseModel

from vllm import SamplingParams
from vllm.lora.request import LoRARequest

class LoraModule(BaseModel):
name: str
path: str
class UserRequest(BaseModel):
lora_module: list[LoraModule]
question: str
@app.post("/")
async def multi_loras(req: UserRequest):
params = SamplingParams(max_tokens=512)
tokenizer = await engine.get_tokenizer()
messages = tokenizer.apply_chat_template(
[{"role": "user", "content": req.question}],
tokenize=False,
add_generation_prompt=True,
)
output = []
for i, lora in enumerate(req.lora_module):
generator = engine.generate(
messages,
sampling_params=params,
lora_request=LoRARequest(
lora_name=lora.name,
lora_path=lora.path,
lora_int_id=i,
),
request_id=str(uuid4().hex),
)
final_output = None
async for res in generator:
final_output = res
output.append(final_output)
print(output)
```
### How would you like to use vllm
I noticed in the documentation that the service started via the CLI seems to cache the LoRA adapter in memory, but I couldn't find the code that implements this. Can you tell me where it is implemented?
```shell
vllm serve meta-llama/Llama-2-7b-hf \
--enable-lora \
--lora-modules sql-lora=$HOME/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/snapshots/0dfa347e8877a4d4ed19ee56c140fa518470028c/
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 1medium
|
Title: Use new utility method `select_streaming_callback` in all ChatGenerators
Body: As we have added `run_async` methods to our ChatGenerators we brought over a useful utility method https://github.com/deepset-ai/haystack/blob/209e6d5ff0f30f0be1774045de2491272bd2bdc2/haystack/dataclasses/streaming_chunk.py#L32-L34
which checks the compatibility of the streaming callback with the async or non-async run method.
We should make sure to use this to all of our ChatGenerators. It's currently only been added to HuggingFaceAPIChatGenerator (both run and run_async methods) and the OpenAIChatGenerator (only the run_async method) | 1medium
|
Title: nms=true for exporting to onnx
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
i get this error
```
(yolo) root@workstation-016:/mnt/4T/Tohidi/object_detector_service# yolo export model=yolo11
x.pt nms=true format=engine device=3
Ultralytics 8.3.71 🚀 Python-3.10.0 torch-2.5.1+cu124 CUDA:3 (NVIDIA H100 PCIe, 80995MiB)
YOLO11x summary (fused): 464 layers, 56,919,424 parameters, 0 gradients, 194.9 GFLOPs
Traceback (most recent call last):
File "/opt/anaconda3/envs/yolo/bin/yolo", line 8, in <module>
sys.exit(entrypoint())
File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/cfg/__init__.py",
line 986, in entrypoint
getattr(model, mode)(**overrides) # default args from model
File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/model.py",
line 740, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/exporter.py
", line 354, in __call__
y = NMSModel(model, self.args)(im) if self.args.nms and not coreml else model(im)
File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", l
ine 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", l
ine 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/exporter.py
", line 1559, in forward
extra_shape = pred.shape[-1] - (4 + self.model.nc) # extras from Segment, OBB, Pose
File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", l
ine 1931, in __getattr__
raise AttributeError(
AttributeError: 'DetectionModel' object has no attribute 'nc'
```
### Environment
```
Ultralytics 8.3.71 🚀 Python-3.10.0 torch-2.5.1+cu124 CUDA:0 (NVIDIA H100 80GB HBM3, 80995MiB)
Setup complete ✅ (255 CPUs, 1007.7 GB RAM, 1807.6/1831.2 GB disk)
OS Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.10.0
Install pip
RAM 1007.65 GB
Disk 1807.6/1831.2 GB
CPU AMD EPYC 7773X 64-Core Processor
CPU count 255
GPU NVIDIA H100 80GB HBM3, 80995MiB
GPU count 6
CUDA 12.4
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.0.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
```
yolo export model=yolo11x.pt format=engine device=3 nms=true
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | 1medium
|
Title: [BUG] Process mail doesn't work as non-superuser
Body: ### Description
I tried the new Process Mail button added in v2.14.
https://github.com/paperless-ngx/paperless-ngx/pull/8466
- It works when the button is pressed by a superuser.
- It doesn't work when it's pressed by a non-superuser (despite having admin and all mail permissions).
It shows a 403 Forbidden error:
```json
{
"headers": {
"normalizedNames": {},
"lazyUpdate": null
},
"status": 403,
"statusText": "Forbidden",
"url": "http://192.168.2.194:8010/api/mail_accounts/1/process/",
"ok": false,
"name": "HttpErrorResponse",
"message": "Http failure response for http://192.168.2.194:8010/api/mail_accounts/1/process/: 403 Forbidden",
"error": {
"detail": "You do not have permission to perform this action."
}
}
```

### Steps to reproduce
1. Login as non-superuser
2. Mail -> Process Mail
3. Error 403 Forbidden
### Webserver logs
```bash
webserver-1 | [2025-01-19 16:20:10,986] [WARNING] [django.request] Forbidden: /api/mail_accounts/1/process/
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.4
### Host OS
Ubuntu 24.04.1/docker compose
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.14.4",
"server_os": "Linux-6.8.0-51-generic-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 23002126852096,
"available": 15696873656320
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "mfa.0003_authenticator_type_uniq",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2025-01-19T16:16:06.481368+01:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2025-01-19T15:05:27.104617Z",
"classifier_error": null
}
}
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | 1medium
|
Title: `bind` parameter for preserving keys of the regenerated nodes
Body: The docstring of [bind](https://docs.dask.org/en/stable/graph_manipulation.html#dask.graph_manipulation.bind) mentions, regarding the return value:
> The keys of the regenerated nodes will be different from the original ones, so that they can be used within the same graph.
As mentioned in https://github.com/dask/dask/issues/9333, this may be inconvenient if the input `children` already have `dask_key_name` set. As @crusaderky [wrote](https://github.com/dask/dask/issues/9333#issuecomment-1215758430), this works as intended because it's designed so that you can use the original and the bound keys together. That's perfectly reasonable as a default, but in my use case I need only the bound keys; they effectively replace the original ones. So it would be nice if `bind` took an optional `regenerate_keys` parameter (defaulting to True) so that passing False preserves the original keys.
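To make the behavior concrete, a small sketch (the exact suffix of the regenerated key is illustrative):
```python
import dask
from dask.graph_manipulation import bind

@dask.delayed
def inc(i):
    return i + 1

a = inc(1, dask_key_name="a")   # user-chosen key
gate = inc(0, dask_key_name="gate")
bound = bind(a, gate)           # clone of `a` that also depends on `gate`
print(a.key)                    # 'a'
print(bound.key)                # 'a-<token>': regenerated, the fixed name is lost
```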
In the meantime, what's a manual way to restore the original keys? I tried setting `._key` but apparently that's not enough; the graph still refers to the regenerated names. | 2hard
|
Title: The extracted feature is not the same when run validate code twice
Body: Hi, David,
Thanks a lot for sharing this repo. I noticed that when extracting features on LFW, the features are not the same when I run the code twice. Could you please tell me how this happens? Should I crop the images to a certain size? | 1medium
|
Title: Feature Request: Inversion of Control features should be supported in notebooks
Body: We're working on adopting ploomber as our pipeline management technology. In early experimentation, I've found that many of the best inversion of control features of ploomber don't seem to be supported for notebooks. I find this odd because of the amount of attention and ink spent on integrating jupyter notebooks.
Examples (in order of importance):
- Serializer and deserializer don't appear to be supported for notebook upstreams and products. They seem to always be paths that the notebook author must handle.
- Clients don't appear to be supported for notebooks. It's only possible to manually instantiate them in the notebook.
- Injection substitutes absolute paths. This results in multiple editors accidentally fighting over the upstream and product cells in source control even if no real edits are made to the notebook.
The extensions you've added to make a jupyter[lab] server work well with plain 'ol .py files are very useful but I was disappointed in the small subset of features available to notebook tasks. This breaks the most powerful features of ploomber when using notebooks. Pure python tasks can use clients and serializers to improve testability and make large changes possible with tiny reliable changes to the pipeline spec. You can develop using human-readable formats and the local filesystem and then use binary formats and cloud storage in production with a couple of lines of yaml when using pure python but this is not possible with notebooks. Further, ploomber teaches certain concepts and expectations around upstreams and products when using python tasks that are not valid when using notebooks.
Suggestion: abstract upstream and product into python objects you import instead of injecting dictionaries of strings into notebooks.
```python
# %% tags=["parameters"]
# add default values for parameters here
# %% tags=["injected-parameters"]
# Parameters
upstream = {
"input": "\some\wild\absolute-path\input.csv"
}
product = {
"data": "\some\wild\absolute-path\data.csv",
"nb": "\some\wild\absolute-path\nb.csv",
}
# %%
import pandas

df = pandas.read_csv(upstream['input'])
result = do_some_stuff(df)
result.to_csv(product['data'])
```
could become:
```python
# %% tags=["parameters"]
# add default values for parameters here
upstream, product = {}, {}
# %% tags=["injected-parameters"]
# Parameters
upstream, product = ploomber.nb.get_context() # knows the current state of the pipeline and uses it to populate upstream and product
# %%
df = upstream['input'] # deserializer and client populate the object instead of the path
result = do_some_stuff(df)
product['data'] = result # serializer and client encode and store the result instead of the notebook doing it using a path
``` | 1medium
|
Title: Raise classical Pydantic ValidationError like FastApi
Body: Hello,
I'm working with this library and I found the option to raise errors (`FLASK_PYDANTIC_VALIDATION_ERROR_RAISE = True`).
I was expecting the same kind of error as in FastAPI/Pydantic combination:
```json
{
"errors":[
{
"loc":[
"query",
"request-mode"
],
"msg":"field required",
"type":"value_error.missing"
},
{
"loc":[
"body",
"birth_date"
],
"msg":"field required",
"type":"value_error.missing"
}
]
}
```
In Pydantic, all errors are in the `errors` array and the location (header, body...) is specified directly in "loc".
In Flask-Pydantic, errors are in separate groups according to the location:
```json
{
"body":[
{
"loc":[
"birth_date"
],
"msg":"field required",
"type":"value_error.missing"
}
],
"query":[
{
"loc":[
"request-mode"
],
"msg":"field required",
"type":"value_error.missing"
}
]
}
```
The `ValidationError(BaseFlaskPydanticException)` exception `e` is raised, and you can look up each error group by location:
- `e.body_params`
- `e.form_params`
- `e.path_params`
- `e.query_params`
What I would like is, for instance, to add the `e.errors` category which contains all the errors, formatted as in the Pydantic library used by FastAPI.
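A minimal sketch of how such an aggregated view could be built from the existing attributes (assuming only the four per-location lists described above; the `loc` prefixes are illustrative):
```python
def fastapi_style_errors(e):
    """Flatten Flask-Pydantic's per-location error groups into one list."""
    groups = {
        "body": e.body_params,
        "form": e.form_params,
        "path": e.path_params,
        "query": e.query_params,
    }
    errors = []
    for location, errs in groups.items():
        for err in errs or []:
            errors.append({**err, "loc": [location, *err.get("loc", [])]})
    return {"errors": errors}
```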
Thank you! | 1medium
|
Title: Reduce memory consumption in broadcast bmm
Body: ### 🚀 The feature, motivation and pitch
Here is a minimal example that consumes about 66 GiB of CUDA memory (I guess it may expand `b` to [8192,32,1024,128] before the calculation). Is it possible to reduce the memory consumption without expanding?
```python
a = torch.rand((8192, 32, 1, 1024), dtype=torch.bfloat16, device='cuda:0')
b = torch.rand((1, 32, 1024, 128), dtype=torch.bfloat16, device='cuda:0')
c = torch.matmul(a, b)
```
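Since the third dimension of `a` is 1, the same product can be computed as a plain batched matmul over the 32 shared batch entries, which avoids materializing the broadcast copy of `b`. A sketch; the result matches `torch.matmul(a, b)` up to numerical precision:
```python
# a: [8192, 32, 1, 1024], b: [1, 32, 1024, 128]
c = torch.bmm(
    a.squeeze(2).transpose(0, 1),   # [32, 8192, 1024]
    b.squeeze(0),                   # [32, 1024, 128]
).transpose(0, 1).unsqueeze(2)      # [8192, 32, 1, 128]
```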
Versions:
torch: 2.6.0+cu126
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | 2hard
|
Title: SearchGraph error while following the example
Body: ```
from search_graph import SearchGraph
# Define the prompt and configuration
prompt = "What is Chioggia famous for?"
config = {
"llm": {"model": "gpt-3.5-turbo"}
}
# Create the search graph
search_graph = SearchGraph(prompt, config)
# Run the search graph
result = search_graph.run()
print(result)
```
This gives the error below even though I didn't modify the code. Any idea how to fix it?
```
Exception has occurred: OutputParserException
Invalid json output: {
"answer": {
"Chioggia is famous for offering a more authentic Italian experience compared to Venice, having a significant fishing industry and a rich history tied to the Venetian Republic, featuring a must-visit fish market with a wide variety of fresh seafood, and providing an excellent dining experience at Baia dei Porci with a focus on local seafood dishes."
}
}
json.decoder.JSONDecodeError: Expecting ':' delimiter: line 4 column 5 (char 382)
The above exception was the direct cause of the following exception:
File "/home/dongwook/Project/auto_crawl/toy.py", line 18, in <module>
raw_result = search_graph.run()
langchain_core.exceptions.OutputParserException: Invalid json output: {
"answer": {
"Chioggia is famous for offering a more authentic Italian experience compared to Venice, having a significant fishing industry and a rich history tied to the Venetian Republic, featuring a must-visit fish market with a wide variety of fresh seafood, and providing an excellent dining experience at Baia dei Porci with a focus on local seafood dishes."
}
}
``` | 1medium
|
Title: Providing a `django.mute_signals` like mechanism for SQLAlchemy
Body: #### The problem
SQLAlchemy provides a listener mechanism similar to Django signals for executing code on event handlers like pre_save or post_save. Models can be plugged into this mechanism, with code executed `before`/`after` `insert/update/delete` events. Sometimes those pieces of code are not relevant within the factory scope, and we want a way to unplug them on the fly.
#### Proposed solution
In the flavour of `django.mute_signals`, provide a context manager/decorator able to mute SQLAlchemy listeners.
Based on this kind of declaration:
```python
class ModelWithListener(Model):
id = ...
@listen_for(ModelWithListener, 'after_insert')
def unwanted_function(...):
...
```
```python
with mute_listeners([ModelWithListener, 'after_insert', unwanted_function]):
ModelWithListenerFactory()
```
or
```python
@mute_listeners([(ModelWithListener, 'after_insert', unwanted_function)])
def test():
ModelWithListenerFactory()
```
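A possible implementation sketch using SQLAlchemy's public `event.remove()` / `event.listen()` API, treating each target as a `(model, identifier, fn)` tuple as in the decorator example above:
```python
from contextlib import contextmanager
from sqlalchemy import event

@contextmanager
def mute_listeners(targets):
    """Temporarily detach the given SQLAlchemy listeners, then re-attach them."""
    for target, identifier, fn in targets:
        event.remove(target, identifier, fn)
    try:
        yield
    finally:
        for target, identifier, fn in targets:
            event.listen(target, identifier, fn)
```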
We could easily imagine an option attribute already declared into the Meta of the factory
```python
class ModelWithListenerFactory(Factory):
class Meta:
model = ModelWithListener
sqlalchemy_mute_listeners = [
('after_insert', unwanted_function)
]
``` | 1medium
|
Title: Caching when no search results found - 2.1.1
Body: I am using Python 3.6 and Oscar 2.1.1 (Django 2.2) with Haystack + Solr 8; the cache is memcached 1.6.9. I didn't fork the search app. Per-site caching is not turned on. I tried switching to the default cache (disabling memcached), which didn't help. Settings:
```
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '172.11.22.33:11211',
}
}
```
modified facet:
```
OSCAR_SEARCH_FACETS = {
'fields': OrderedDict([
# ('product_class', {'name': _('Type'), 'field': 'product_class'}),
('rating', {'name': _('Rating'), 'field': 'rating'}),
]),
'queries': OrderedDict([
('price_range',
{
'name': _('Price range'),
'field': 'price',
'queries': [
# This is a list of (name, query) tuples where the name will
# be displayed on the front-end.
(_('0 to 20'), u'[0 TO 20]'),
(_('20 to 40'), u'[20 TO 40]'),
(_('40 to 60'), u'[40 TO 60]'),
(_('60+'), u'[60 TO *]'),
]
}),
]),
}
```
options:
```
'loaders': [
('django.template.loaders.cached.Loader', [
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
]),
],
```
### Steps to Reproduce
1. I have a lot of 'cbd' products. When I search for 'cbd', they are correctly found.
2. When I search for 'cbw' (/?s=cbw), a page with 0 results is rendered.
3. After that, when I try to search for 'cbd', the same template as for 'cbw' is rendered again: the new search text is correctly replaced in the URL (/?s=cbd), but the results part of the template stays the same ('Produits correspondant à "cbw": 0 résultat trouvé') and seems to be cached. Even when I delete (?s=cbd) from the URL, the /search/ page is still cached with 'Produits correspondant à "cbw": 0 résultat trouvé'. | 1medium
|
Title: Add `--global` option to `--install` to save filter config to `global .gitconfig`
Body: Presently, `nbstripout --install` modifies the repo `.git/config`. This is less portable than saving the path to `nbstripout` and the filters in the user's global `.gitconfig`.
It would be nice to have a command such as:
`nbstripout --install --global --attributes .gitattributes` that created a `.gitattributes` file in the current repo but saved the filters and path globally. Then, every repo with the `.gitattributes` file would be stripped without needing to install nbstripout in every cloned repository.
See conversation #7 | 1medium
|
Title: How to do more than one cleanup task in cancelled state?
Body:
* **asyncpg version**: 0.23.0
* **PostgreSQL version**: 12.6
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: no/ yes
* **Python version**:3.9.5
* **Platform**: Fedora Linux
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: yes
* **If you built asyncpg locally, which version of Cython did you use?**: n/a
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: have only tested w/ asyncio directly
We have a user illustrating the case that a task being cancelled allows us to reach a finally: block where we can run at most one awaitable cleanup task on asyncpg in order to close out the connection. If we have more than one thing to await, such as emitting a ROLLBACK or anything else, we don't get the chance to close() the connection. It then seems to go somewhere where we no longer have any reference to this connection, yet asyncpg still leaves it open; our own GC handlers that are supposed to take care of this are never called.
One way to illustrate it, in a form that also shows the problem I'm trying to solve, is to have two separate asyncpg connections that we are doing some kind of work on in the same awaitable. If cancel() is called, I can reach the finally: block, but I can then close at most one of the connections, not both. In the real case we are using only one connection, but we are trying to emit a ROLLBACK and do other awaitable things before we get to the .close().
What I don't understand is why GC isn't collecting these connections, or why they aren't getting closed.
```python
import asyncio
from asyncio import current_task
import asyncpg
async def get_and_cancel():
c1 = await asyncpg.connect(
user="scott", password="tiger", host="localhost", database="test"
)
c2 = await asyncpg.connect(
user="scott", password="tiger", host="localhost", database="test"
)
try:
r1 = await c1.fetch("SELECT 1")
r2 = await c2.fetch("SELECT 1")
current_task().cancel()
finally:
# we get here...
# this seems to affect the asyncpg connection, the await is
# honored....
await c1.close()
# but we never get here. connection leaks. canonical way to
# solve this issue?
await c2.close()
async def main():
while True:
try:
await get_and_cancel()
except asyncio.exceptions.CancelledError:
pass
asyncio.run(main())
```
The stack trace shows that we've run out of connections:
```
Traceback (most recent call last):
File "/home/classic/dev/sqlalchemy/test4.py", line 39, in <module>
asyncio.run(main())
File "/usr/lib64/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib64/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/home/classic/dev/sqlalchemy/test4.py", line 34, in main
await get_and_cancel()
File "/home/classic/dev/sqlalchemy/test4.py", line 11, in get_and_cancel
c2 = await asyncpg.connect(
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/connection.py", line 1981, in connect
return await connect_utils._connect(
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/connect_utils.py", line 732, in _connect
con = await _connect_addr(
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/connect_utils.py", line 632, in _connect_addr
return await __connect_addr(params, timeout, True, *args)
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/connect_utils.py", line 682, in __connect_addr
await compat.wait_for(connected, timeout=timeout)
File "/home/classic/.venv3/lib64/python3.9/site-packages/asyncpg/compat.py", line 103, in wait_for
return await asyncio.wait_for(fut, timeout)
File "/usr/lib64/python3.9/asyncio/tasks.py", line 481, in wait_for
return fut.result()
asyncpg.exceptions.TooManyConnectionsError: remaining connection slots are reserved for non-replication superuser connections
```
| 1medium
|
Title: Chat Interface Throws Error When Model Provider is Ollama but Works in Notebook
Body: ## Description
I'm working on a blogpost on Jupyter AI and I had completed the draft.
Article Draft : https://docs.google.com/document/d/1N59WnVCDOzFX2UdfPW_G-eRet5AkCpNcJGbZ5uXw6HI/edit?usp=sharing
Everything was working seamlessly as can be seen from the screenshots in the article. The Ollama integration in Jupyter AI worked as expected in both notebooks and the chat interface. However, now the chat interface throws an error, while the notebook-based interactions still function correctly.
## Environment Details
- **OS**: macOS 14
- **Python Version**: 3.13.1
- **JupyterLab Version**: 4.3.6
- **Jupyter AI Version**: 2.30.0
## Steps to Reproduce
<img width="1144" alt="Image" src="https://github.com/user-attachments/assets/e12e0caa-9c25-49aa-b450-d91237bff391" />
<img width="657" alt="Image" src="https://github.com/user-attachments/assets/e3a2ed90-ff03-4d2e-8c56-35d4dfe41df0" />
## Error Message
Traceback (most recent call last):
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/jupyter_ai/chat_handlers/base.py", line 229, in on_message
await self.process_message(message)
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/jupyter_ai/chat_handlers/default.py", line 72, in process_message
await self.stream_reply(inputs, message)
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/jupyter_ai/chat_handlers/base.py", line 567, in stream_reply
async for chunk in chunk_generator:
...<32 lines>...
break
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 5548, in astream
async for item in self.bound.astream(
...<4 lines>...
yield item
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 5548, in astream
async for item in self.bound.astream(
...<4 lines>...
yield item
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3439, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3422, in atransform
async for chunk in self._atransform_stream_with_config(
...<5 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 2308, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
)
^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3392, in _atransform
async for output in final_pipeline:
yield output
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 5584, in atransform
async for item in self.bound.atransform(
...<4 lines>...
yield item
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 4954, in atransform
async for output in self._atransform_stream_with_config(
...<5 lines>...
yield output
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 2308, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
)
^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 4935, in _atransform
async for chunk in output.astream(
...<7 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 5548, in astream
async for item in self.bound.astream(
...<4 lines>...
yield item
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3439, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3422, in atransform
async for chunk in self._atransform_stream_with_config(
...<5 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 2308, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
)
^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3392, in _atransform
async for output in final_pipeline:
yield output
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/output_parsers/transform.py", line 85, in atransform
async for chunk in self._atransform_stream_with_config(
...<2 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 2266, in _atransform_stream_with_config
final_input: Optional[Input] = await py_anext(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/utils/aiter.py", line 74, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/utils/aiter.py", line 123, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 1473, in atransform
async for output in self.astream(final, config, **kwargs):
yield output
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 512, in astream
async for chunk in self._astream(
...<14 lines>...
generation += chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py", line 755, in _astream
async for stream_resp in self._acreate_chat_stream(messages, stop, **kwargs):
...<23 lines>...
yield chunk
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/langchain_ollama/chat_models.py", line 575, in _acreate_chat_stream
async for part in await self._async_client.chat(**chat_params):
yield part
File "/Users/parul/Desktop/venv/lib/python3.13/site-packages/ollama/_client.py", line 672, in inner
raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: model is required (status code: 400)
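For what it's worth, the 400 suggests the chat request reaches Ollama without a model name. Explicitly passing `model` when constructing the chat model avoids the error in my local test; a minimal sketch, where `llama3` and the base URL are placeholders for whatever you run locally:
```python
from langchain_ollama import ChatOllama

# assumption: a local Ollama server with the named model pulled;
# omitting model= is what produces "model is required (status code: 400)"
llm = ChatOllama(model="llama3", base_url="http://localhost:11434")

async def main() -> None:
    async for chunk in llm.astream("Hello"):
        print(chunk.content, end="", flush=True)
```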
Would appreciate any insights from the maintainers. Thanks! | 1medium
|
Title: Is resizing of the input image done in both training and predicting?
Body: In the hyperparameters, the standard settings for input image resizing are as below:
`IMAGE_RESIZE_MODE = "square"`
`IMAGE_MIN_DIM = 800`
`IMAGE_MAX_DIM = 1024`
I have input images that are 6080x3420, so to my understanding, these are resized to 1024x1024 and padded with zeroes to make a square image. Does this happen both in training and when predicting with the trained model?
I ask because I have a model trained on the 6080x3420 images with the above standard settings, but I have noticed that downscaling the test images before predicting has an influence on prediction accuracy. Effectively, the prediction accuracy is highest when downscaling the test images to 12.5% of the original size before running the model on them.
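For context, my understanding is that the same resize routine runs at both train and inference time; a hedged sketch of what I believe happens to each image (the `utils.resize_image` signature is taken from a recent matterport checkout, so treat it as an assumption):
```python
import numpy as np
from mrcnn import utils  # matterport Mask_RCNN; assumed import path

image = np.zeros((3420, 6080, 3), dtype=np.uint8)  # stand-in for a real photo

# scales so the long side fits max_dim, then zero-pads to a 1024x1024 square
resized, window, scale, padding, crop = utils.resize_image(
    image, min_dim=800, max_dim=1024, mode="square"
)
print(resized.shape, scale)  # (1024, 1024, 3), scale roughly 1024/6080 = 0.168
```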
| 1medium
|
Title: Set PYTHON_EGG_CACHE for flask apps during init
Body: <!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
I discovered that in my Flask deployment the app deploys fine with e.g. `zappa init; zappa deploy dev`; however, upon hitting the generated endpoint a failure is returned.
## Expected Behavior
<!--- Tell us what should happen -->
You should be able to get your expected response from whatever endpoint is hit.
## Actual Behavior
<!--- Tell us what happens instead -->
You get this response:
```
"{u'message': u'An uncaught exception happened while servicing this request. You can investigate this with the `zappa tail` command.', u'traceback': ['Traceback (most recent call last):\\n', ' File \"/var/task/handler.py\", line 452, in handler\\n response = Response.from_app(self.wsgi_app, environ)\\n', ' File \"/tmp/pip-build-LktYrc/Werkzeug/werkzeug/wrappers.py\", line 903, in from_app\\n', ' File \"/tmp/pip-build-LktYrc/Werkzeug/werkzeug/wrappers.py\", line 57, in _run_wsgi_app\\n', ' File \"/tmp/pip-build-LktYrc/Werkzeug/werkzeug/test.py\", line 884, in run_wsgi_app\\n', \"TypeError: 'NoneType' object is not callable\\n\"]}"
```
`zappa tail dev` yields the following:
```
[1519342540529] Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 30] Read-only file system: '/home/sbx_user1060'
The Python egg cache directory is currently set to:
/home/sbx_user1060/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
```
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
Seems that PYTHON_EGG_CACHE needs to be set to '/tmp' as an environment variable. I solved it by including the following in my zappa_settings.json:
```json
"environment_variables": {
"PYTHON_EGG_CACHE": "/tmp"
}
```
Unsure if this is Flask specific, or if I stuffed up somewhere, or if this is actually expected behaviour...
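If the settings-file route is undesirable, setting the variable from Python before anything triggers egg extraction may also work; a minimal sketch, assuming it sits at the very top of the module Zappa loads:
```python
import os

# must run before any import that extracts eggs, hence the top of the file
os.environ.setdefault("PYTHON_EGG_CACHE", "/tmp")

from flask import Flask  # noqa: E402  (deliberately after the env tweak)

app = Flask(__name__)
```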
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Make a flask app
2. `zappa init`
3. `zappa deploy dev`
4. poke API endpoint
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.45.1
* Operating System and Python version: 4.13.0-32-generic #35~16.04.1-Ubuntu | Python 2.7.12
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
```json
{
"dev": {
"app_function": "*****.api.API",
"aws_region": "ap-southeast-2",
"profile_name": "*****",
"project_name": "api",
"runtime": "python2.7",
"s3_bucket": "*****",
"environment_variables": {
"*****": "*****",
"PYTHON_EGG_CACHE": "/tmp"
},
"domain": "*****.*****",
"cors": true,
"certificate_arn": "arn:aws:acm:us-east-1:*******"
    }
}
``` | 1medium
|
Title: Multiple-Input Batched Data
Body: I'm trying to feed an RNN+LSTM network using the `KerasModel` layer; in my case, I need to use 2 inputs and get a `Dense(1)` output.
```python
train, test, val = dataset.get('train'), dataset.get('test'), dataset.get('val')
train_ds = BatchData(train, batch_size, use_list=True)
test_ds = BatchData(test, batch_size, use_list=True)
M = KerasModel(create_model,
inputs_desc=[
InputDesc(tf.float32, [
None, timesteps_val, len(features_per_timestep)], 'input_a'),
InputDesc(tf.float32, [
None, timesteps_val, 96, 96, 1], 'input_b'),
],
targets_desc=[InputDesc(tf.float32, [None, 1], 'labels')],
input=QueueInput(train_ds))
```
If I remove `use_list=True`, I get an error saying that batched data only works with NumPy arrays. If I keep it the way it is above, or remove batching, I get:
```
[1024 15:03:14 @input_source.py:168] ERR Exception in EnqueueThread QueueInput/input_queue:
Traceback (most recent call last):
File "/Users/brunoalano/.local/share/virtualenvs/research-laYaeRqi/lib/python3.6/site-packages/tensorpack/input_source/input_source.py", line 159, in run
feed = _make_feeds(self.placehdrs, dp)
File "/Users/brunoalano/.local/share/virtualenvs/research-laYaeRqi/lib/python3.6/site-packages/tensorpack/input_source/input_source.py", line 41, in _make_feeds
len(datapoint), len(placeholders))
AssertionError: Size of datapoint and placeholders are different: 2 != 3
```
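My reading of the assertion is that each datapoint must carry one component per declared placeholder, i.e. two inputs plus the label, three in total. A hedged sketch of a dataflow shape that should satisfy it (the names, shapes, and the `DataFromGenerator` helper are my assumptions, not taken from my real pipeline):
```python
import numpy as np
from tensorpack.dataflow import BatchData, DataFromGenerator

def gen():
    while True:
        input_a = np.zeros((10, 4), dtype=np.float32)         # (timesteps, features)
        input_b = np.zeros((10, 96, 96, 1), dtype=np.float32)
        label = np.zeros((1,), dtype=np.float32)
        yield [input_a, input_b, label]  # 3 components == 3 placeholders

ds = BatchData(DataFromGenerator(gen), batch_size=8, use_list=True)
```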
Details:
- Python Version: Python 3.6.5
- TF Version: v1.11.0-rc2-4-gc19e29306c 1.11.0
- Tensorpack Version (from git): 0.8.9 | 1medium
|
Title: Feature request: Watermark support
Body: Hi,
I was looking in the documentation and couldn't find any clue about how to set a background image in a worksheet. I want to add a watermark to printed pages. Is this feature supported? If not, does any workaround exist?
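One workaround I have seen suggested is placing the watermark as a centered header image, which Excel draws behind the cells on printed pages. A sketch, assuming `set_header` supports the `&G` image placeholder (the file name is a placeholder):
```python
import xlsxwriter

workbook = xlsxwriter.Workbook("watermark.xlsx")
worksheet = workbook.add_worksheet()

# &G marks where the header image goes; printed output shows it behind cells
worksheet.set_header("&C&G", {"image_center": "watermark.png"})

workbook.close()
```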
| 1medium
|
Title: ModuleNotFoundError: No module named 'facelib'
Body: ```
Traceback (most recent call last):
File "inference_codeformer.py", line 10, in <module>
from facelib.utils.face_restoration_helper import FaceRestoreHelper
ModuleNotFoundError: No module named 'facelib'
```
Then I tried to install the library, but no suitable version is available. What should I do?
```
pip install facelib
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))': /simple/facelib/
Could not fetch URL https://pypi.tuna.tsinghua.edu.cn/simple/facelib/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Max retries exceeded with url: /simple/facelib/ (Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))) - skipping ERROR: Could not find a version that satisfies the requirement facelib (from versions: none)
ERROR: No matching distribution found for facelib
```
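Note that `facelib` is a local package inside the CodeFormer repo, not a PyPI distribution, so pip cannot find it. Running the script from the repo root (or putting the root on `sys.path`) is the usual fix; a hedged sketch, assuming the script sits in the checkout root:
```python
import sys
from pathlib import Path

# assumption: this file lives in the CodeFormer checkout root
repo_root = Path(__file__).resolve().parent
sys.path.insert(0, str(repo_root))

from facelib.utils.face_restoration_helper import FaceRestoreHelper  # noqa: E402
``` | 1medium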
|
Title: ImportError: cannot import name 'XPath'
Body: Where did XPath go??? It looks like you removed xpath???

| 1medium
|
Title: [Docker] Please export port 8000 for plesk
Body: Hi there, merry Christmas!
I am trying to install paperless-ng on a server using Plesk. It's working and I got it all running, but Plesk does not detect an exposed port on your image, which prevents me from actually accessing the service. I am using your docker image from https://hub.docker.com/r/jonaswinkler/paperless-ng
I found this article, where they describe in the comments what needs to be done
https://support.plesk.com/hc/en-us/articles/115003142213-Unable-to-add-Docker-Proxy-rules-in-Plesk-no-container-is-displayed
The way I see it, all we'd need would be an `EXPOSE 8000` in the Dockerfile. But I am no expert and just guessing wildly. Would be much appreciated!
Best,
Jens | 1medium
|
Title: Collaboration request
Body: ### Is there an existing similar feature request?
- [x] I have searched the existing feature requests
### Pain point
Hi, I'd like to recommend a collaboration with the open-source project https://github.com/volcengine/ai-app-lab. Would you be interested? Could we get in touch by email? [email protected]
### Proposed solution
Hi, I'd like to recommend a collaboration with the open-source project https://github.com/volcengine/ai-app-lab. Would you be interested? Could we get in touch by email? [email protected]
### Useful resources
_No response_
### Additional information
_No response_ | 3misc
|
Title: Missing Array length
Body: Hey,
I worked in C# with this API and it was very cool, but I missed something.
It would be really nice to have a field which shows the length of the following array.
For example like this:
```
"types_length": 2,
"types": [
{
"slot": 1,
"type": {
"name": "grass",
"url": "https://pokeapi.co/api/v2/type/12/"
}
},
{
"slot": 2,
"type": {
"name": "poison",
"url": "https://pokeapi.co/api/v2/type/4/"
}
}
]
```
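Until something like that lands, computing the count client-side is a one-liner; a quick Python sketch of the same idea (I used C#, but it translates directly):
```python
import requests

data = requests.get("https://pokeapi.co/api/v2/pokemon/bulbasaur").json()
print(len(data["types"]))  # 2: grass + poison
```
Still, having the length in the payload would save a pass over the array for streaming parsers. | 1medium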
|
Title: persistence among sessions
Body: Could it be possible to recover stored vector indexes across sessions (same API key), at least within 90 days? | 1medium
|
Title: Higher CPU load after upgrading to 2025.3.2
Body: ### The problem
Hi
I see a higher CPU load after upgrading from 2025.2.5 to 2025.3.2 today at 8:15. Normally only around 1-2%, but now it is consistently between 3-4%.
I have tried stopping all add-ons, but no difference in CPU load.

### What version of Home Assistant Core has the issue?
2025.3.2
### What was the last working version of Home Assistant Core?
2025.2.5
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
Nothing new
```
### Additional information
_No response_ | 1medium
|
Title: Implement browser session API
Body: ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Browser sessions allow developers to track browser state in Streamlit, so that they can implement features like authentication, persistent drafts, or shopping carts, which require keeping user state after the browser is refreshed or reopened.
### Why?
The current Streamlit session loses its state if users refresh or reopen their browser, and the effort to provide an API for writing cookies has been pending for years. I think providing a dedicated API to track browser sessions would be cleaner and easier to implement.
With this API, developers don't need to know how it works internally: it can be based on cookies, local storage, or anything else. Developers can combine it with a singleton pattern to persist whatever per-browser state they want in Streamlit.
### How?
This feature will introduce several new APIs:
* `st.get_browser_session(gdpr_consent=False)`, which sets a unique session id in the browser if one doesn't exist, and returns it.
If `gdpr_consent` is set to True, a window will pop up to ask for the user's consent before setting the session id.
* `st.clean_browser_session()`, which removes the session id from the browser.
Below is a POC of how `get_browser_session` can be used to implement a simple authentication solution:
```python
from streamlit.web.server.websocket_headers import _get_websocket_headers
from streamlit.components.v1 import html
import streamlit as st
from http.cookies import SimpleCookie
from uuid import uuid4
from time import sleep
def get_cookie():
try:
headers = st.context.headers
except AttributeError:
headers = _get_websocket_headers()
if headers is not None:
cookie_str = headers.get("Cookie")
if cookie_str:
return SimpleCookie(cookie_str)
def get_cookie_value(key):
cookie = get_cookie()
if cookie is not None:
cookie_value = cookie.get(key)
if cookie_value is not None:
return cookie_value.value
return None
def get_browser_session():
"""
use cookie to track browser session
this id is unique to each browser session
it won't change even if the page is refreshed or reopened
"""
if 'st_session_id' not in st.session_state:
session_id = get_cookie_value('ST_SESSION_ID')
if session_id is None:
session_id = uuid4().hex
st.session_state['st_session_id'] = session_id
html(f'<script>document.cookie = "ST_SESSION_ID={session_id}";</script>')
sleep(0.1) # FIXME: work around bug: Tried to use SessionInfo before it was initialized
st.rerun() # FIXME: rerun immediately so that html won't be shown in the final page
st.session_state['st_session_id'] = session_id
return st.session_state['st_session_id']
@st.cache_resource
def get_auth_state():
"""
A singleton to store authentication state
"""
return {}
st.set_page_config(page_title='Browser Session Demo')
session_id = get_browser_session()
auth_state = get_auth_state()
if session_id not in auth_state:
auth_state[session_id] = False
st.write(f'Your browser session ID: {session_id}')
if not auth_state[session_id]:
st.title('Input Password')
token = st.text_input('Token', type='password')
if st.button('Submit'):
if token == 'passw0rd!':
auth_state[session_id] = True
st.rerun()
else:
st.error('Invalid token')
else:
st.success('Authentication success')
if st.button('Logout'):
auth_state[session_id] = False
st.rerun()
st.write('You are free to refresh or reopen this page without re-authentication')
```
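For symmetry, a hedged sketch of what the `clean_browser_session` counterpart could look like, reusing the same `html()` cookie trick as the POC above:
```python
from streamlit.components.v1 import html
import streamlit as st

def clean_browser_session():
    """Remove the session id so the next visit starts a fresh browser session."""
    st.session_state.pop('st_session_id', None)
    # expire the cookie client-side
    html('<script>document.cookie = "ST_SESSION_ID=; Max-Age=0";</script>')
```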
A more complicated example of using this method to work with oauth2 can be tried here: https://ai4ec.ikkem.com/apps/op-elyte-emulator/
### Additional Context
Related issues:
* https://github.com/streamlit/streamlit/issues/861
* https://github.com/streamlit/streamlit/issues/8518 | 2hard
|
Title: The official RL example got an error
Body: I copied and ran this code from the official RL example, but got the errors below. Please help check, thanks.
https://github.com/microsoft/qlib/blob/main/examples/rl/simple_example.ipynb
Training started
/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/env/venvs.py:66: UserWarning: You provided an environment generator that returned an OpenAI Gym environment. We strongly recommend transitioning to Gymnasium environments. Tianshou is automatically wrapping your environments in a compatibility layer, which could potentially cause issues.
warnings.warn(
/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/utils/data_queue.py:98: RuntimeWarning: After 1 cleanup, the queue is still not empty.
warnings.warn(f"After {repeat} cleanup, the queue is still not empty.", category=RuntimeWarning)
Traceback (most recent call last):
File "/Users/user/Desktop/ruc/paper/quant/Quant/rl/rl_example.py", line 166, in <module>
train(
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/trainer/api.py", line 63, in train
trainer.fit(vessel)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/trainer/trainer.py", line 224, in fit
self.vessel.train(vector_env)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/trainer/vessel.py", line 171, in train
collector = Collector(self.policy, vector_env, VectorReplayBuffer(self.buffer_size, len(vector_env)))
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/data/collector.py", line 80, in __init__
self.reset(False)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/data/collector.py", line 131, in reset
self.reset_env(gym_reset_kwargs)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/data/collector.py", line 147, in reset_env
obs, info = self.env.reset(**gym_reset_kwargs)
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/utils/finite_env.py", line 233, in reset
for i, o in zip(request_id, super().reset(request_id)):
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/tianshou/env/venvs.py", line 280, in reset
assert (
AssertionError: The environment does not adhere to the Gymnasium's API.
Exception ignored in: <function DataQueue.__del__ at 0x12b9f6280>
Traceback (most recent call last):
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/utils/data_queue.py", line 148, in __del__
self.cleanup()
File "/Users/user/Desktop/ruc/paper/quant/Quant/venv/lib/python3.8/site-packages/qlib/rl/utils/data_queue.py", line 101, in cleanup
self._queue.get(block=False)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/queues.py", line 111, in get
res = self._recv_bytes()
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError:
env
python: 3.8
qlib: 0.9.3
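The assertion comes from tianshou expecting Gymnasium's reset signature `(obs, info)`. Pinning an older tianshou/gym pair is one option; another is a small compatibility wrapper around the env, a hedged sketch since I have not verified it against qlib's internals:
```python
import gym

class GymnasiumCompat(gym.Wrapper):
    """Adapt an old-style gym env to the Gymnasium API tianshou asserts on."""

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        return obs, {}  # Gymnasium: reset returns (obs, info)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # Gymnasium: (obs, reward, terminated, truncated, info)
        return obs, reward, done, False, info
```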
| 1medium
|
Title: mypy can't find OPT_SERIALIZE_DATACLASS
Body: For some reason mypy doesn't think that OPT_SERIALIZE_DATACLASS exists in the orjson module. I really don't know why; it's clearly defined and in the .pyi file, so maybe it's an issue with mypy? Figured I would post it here in case you know why.
My code:
```python
from dataclasses import dataclass
import orjson
@dataclass
class Test:
value: str
test = Test("hi")
print(orjson.dumps(test, option=orjson.OPT_SERIALIZE_DATACLASS))
```
mypy's output is: `error: Module has no attribute "OPT_SERIALIZE_DATACLASS"`
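As a stopgap, silencing just that one attribute lookup keeps the rest of the module type-checked; a sketch of the workaround (note that orjson 3.x serializes dataclasses natively, so the flag may be unnecessary there anyway):
```python
from dataclasses import dataclass

import orjson

@dataclass
class Test:
    value: str

# narrow ignore: only this lookup goes unchecked
option = orjson.OPT_SERIALIZE_DATACLASS  # type: ignore[attr-defined]
print(orjson.dumps(Test("hi"), option=option))
```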
| 1medium
|