text (string, lengths 20 to 57.3k) | labels (class label, 4 classes)
---|---
Title: Checkpointing during MCMC
Body: Hi devs, thanks for your contributions to this tool!
Is it possible to save the MCMC chains while the chains are running? I'm using numpyro on multiple GPUs in an HPC environment and would like to checkpoint my jobs in case of preemption. | 1medium
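Not part of the original question, just for context: a minimal sketch of one pattern often used for this kind of checkpointing, i.e. running the chains in short chunks and pickling `mcmc.last_state` between chunks so a preempted job can resume without redoing warmup. The toy model, chunk count, and file names below are placeholder assumptions, not taken from the issue.
```python
import pickle

from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS


def model():
    # Placeholder model; the real model would go here.
    numpyro.sample("x", dist.Normal(0.0, 1.0))


mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=100)
mcmc.run(random.PRNGKey(0))  # first chunk, includes warmup

for chunk in range(10):
    # Persist the current chain state plus the samples drawn in this chunk.
    with open(f"checkpoint_{chunk}.pkl", "wb") as f:
        pickle.dump({"state": mcmc.last_state, "samples": mcmc.get_samples()}, f)
    # Continue sampling from the saved state without re-running warmup.
    mcmc.post_warmup_state = mcmc.last_state
    mcmc.run(mcmc.post_warmup_state.rng_key)
```
On restart after preemption, the pickled state can be loaded, assigned to `mcmc.post_warmup_state`, and `run` called again to continue the chain.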
|
Title: Investigate allowing `predict_likelihood_parameters` for auto-regression with quantile likelihoods
Body: Check whether we could allow `predict_likelihood_parameters=True` for auto-regression with quantile likelihoods.
It would be interesting to see how the quantiles compare if we just feed the model with the last predicted quantiles, compared to feeding it with the samples.
If the results are similar, this could speed things up quite a bit, especially for torch models, since we wouldn't have to call the forward pass with all samples. | 1medium
|
Title: Reopen Shelly Ble Problem
Body: ### The problem
It is the same as
https://github.com/home-assistant/core/issues/140889
### What version of Home Assistant Core has the issue?
2025.3.x
### What was the last working version of Home Assistant Core?
2024.x
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
15.0
### Link to integration documentation on our website
_No response_
### Diagnostics information
Logger: habluetooth.base_scanner
Source: runner.py:154
First occurred: 22 March 2025 at 10:02:14 (4 occurrences)
Last logged: 22 March 2025 at 18:31:45
shellyplus1pm-keller (08:3A:F2:02:2D:A0): Bluetooth scanner has gone quiet for 99.69573211669922s, check logs on the scanner device for more information
shellyplus1pm-keller (08:3A:F2:02:2D:A0): Bluetooth scanner has gone quiet for 100.61075592041016s, check logs on the scanner device for more information
shellyplus1pm-keller (08:3A:F2:02:2D:A0): Bluetooth scanner has gone quiet for 119.21070861816406s, check logs on the scanner device for more information
shellyplus1pm-keller (08:3A:F2:02:2D:A0): Bluetooth scanner has gone quiet for 94.63068389892578s, check logs on the scanner device for more information
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | 1medium
|
Title: 'BERTopic' object has no attribute 'reduce_outliers'
Body: I am not able to use `reduce_outliers` as it is showing an error, as shown below.

Can you guide me on how I can solve this? | 1medium
|
Title: [Support Us]: LogicalCube
Body: Thank you for letting us use your organization's name on the repository README page and letting other customers know that you support the project! If you would like us to also display your organization's logo, please raise a pull request to provide an image file for the logo.
Please add any files to *docs/source/_static/*
Organization Name: LogicalCube (https://www.logicalcube.com)
Your Name: Bryan Kelly
Your Position: Manager
I have included a logo: n
*By raising a Support Us issue (and related pull request), you are granting AWS permission to use your company’s name (and logo) for the limited purpose described here and you are confirming that you have authority to grant such permission.*
| 0easy
|
Title: Can't save models via the ModelCheckpoint() when using custom optimizer
Body: ### Bug description
Dear all,
I want to use a [Hessian-Free LM optimizer](https://github.com/ltatzel/PyTorchHessianFree) to replace the PyTorch L-BFGS optimizer. However, the model can't be saved normally when I use ModelCheckpoint(), while torch.save() and Trainer.save_checkpoint() still work. You can find my test Python file below. Could you give me some suggestions for handling this problem?
Thanks!
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
import numpy as np
import pandas as pd
import time
import torch
from torch import nn
from torch.utils.data import DataLoader,TensorDataset
import matplotlib.pyplot as plt
import lightning as L
from lightning.pytorch import LightningModule
from lightning.pytorch.loggers import CSVLogger
from lightning.pytorch.callbacks.model_checkpoint import ModelCheckpoint
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks.early_stopping import EarlyStopping
from hessianfree.optimizer import HessianFree
class LitModel(LightningModule):
    def __init__(self, loss):
        super().__init__()
        self.tanh_linear = nn.Sequential(
            nn.Linear(1, 20),
            nn.Tanh(),
            nn.Linear(20, 20),
            nn.Tanh(),
            nn.Linear(20, 1),
        )
        self.loss_fn = nn.MSELoss()
        self.automatic_optimization = False
        return

    def forward(self, x):
        out = self.tanh_linear(x)
        return out

    def configure_optimizers(self):
        optimizer = HessianFree(
            self.parameters(),
            cg_tol=1e-6,
            cg_max_iter=1000,
            lr=1e0,
            LS_max_iter=1000,
            LS_c=1e-3
        )
        return optimizer

    def training_step(self, batch, batch_idx):
        x, y = batch
        opt = self.optimizers()

        def forward_fn():
            y_pred = self(x)
            loss = self.loss_fn(y_pred, y)
            return loss, y_pred

        opt.optimizer.step(forward=forward_fn)
        loss, y_pred = forward_fn()
        self.log("train_loss", loss, on_epoch=True, on_step=False)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        val_loss = self.loss_fn(y_hat, y)
        # passing to early_stopping
        self.log("val_loss", val_loss, on_epoch=True, on_step=False)
        return val_loss

    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = self.loss_fn(y_hat, y)
        return loss


def main():
    input_size = 20000
    train_size = int(input_size * 0.9)
    test_size = input_size - train_size
    batch_size = 1000
    x_total = np.linspace(-1.0, 1.0, input_size, dtype=np.float32)
    x_total = np.random.choice(x_total, size=input_size, replace=False)  # random sampling
    x_train = x_total[0:train_size]
    x_train = x_train.reshape((train_size, 1))
    x_test = x_total[train_size:input_size]
    x_test = x_test.reshape((test_size, 1))
    x_train = torch.from_numpy(x_train)
    x_test = torch.from_numpy(x_test)
    y_train = torch.from_numpy(np.sinc(10.0 * x_train))
    y_test = torch.from_numpy(np.sinc(10.0 * x_test))
    training_data = TensorDataset(x_train, y_train)
    test_data = TensorDataset(x_test, y_test)

    # Create data loaders.
    train_dataloader = DataLoader(training_data, batch_size=batch_size
                                  # , num_workers=2
                                  )
    test_dataloader = DataLoader(test_data, batch_size=batch_size
                                 # , num_workers=2
                                 )

    for X, y in test_dataloader:
        print("Shape of X: ", X.shape)
        print("Shape of y: ", y.shape, y.dtype)
        break
    for X, y in train_dataloader:
        print("Shape of X: ", X.shape)
        print("Shape of y: ", y.shape, y.dtype)
        break

    loss_fn = nn.MSELoss()
    model = LitModel(loss_fn)

    # prepare trainer
    opt_label = f'lm_HF_t20'
    logger = CSVLogger(f"./{opt_label}", name=f"test-{opt_label}", flush_logs_every_n_steps=1)
    epochs = 1e1
    print(f"test for {opt_label}")
    early_stop_callback = EarlyStopping(
        monitor="val_loss",
        min_delta=1e-9,
        patience=10,
        verbose=False, mode="min",
        stopping_threshold=1e-8,  # stop if reaching accuracy
    )
    modelck = ModelCheckpoint(
        dirpath=f"./{opt_label}",
        monitor="val_loss",
        save_last=True,
        # save_top_k=2,
        # mode='min',
        # every_n_epochs=1,
        # save_on_train_epoch_end=True,
        # save_weights_only=True,
    )
    Train_model = Trainer(
        accelerator="cpu",
        max_epochs=int(epochs),
        enable_progress_bar=True,  # using progress bar
        # callbacks=[modelck, early_stop_callback],  # using earlystopping
        callbacks=[modelck],  # not using earlystopping
        logger=logger,
        # num_processes=16,
    )
    t1 = time.time()
    Train_model.fit(model, train_dataloaders=train_dataloader, val_dataloaders=test_dataloader)
    t2 = time.time()
    print('total time')
    print(t2 - t1)
    # torch.save() and Trainer.save_checkpoint() can save the model, but ModelCheckpoint() can't.
    # torch.save(model.state_dict(), f"model{opt_label}.pth")
    # print(f"Saved PyTorch Model State to model{opt_label}.pth")
    # Train_model.save_checkpoint(f"model{opt_label}.ckpt")
    # print(f"Saved PL Model State to model{opt_label}.ckpt")
    exit()
    return


if __name__ == '__main__':
    main()
```
### Error messages and logs
```
# Error messages and logs here please
```
The program does not report an error, but ModelCheckpoint() doesn't save any model when I use a custom optimizer.
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: 12.1
* Lightning:
- backpack-for-pytorch: 1.6.0
- lightning: 2.2.0
- lightning-utilities: 0.11.3.post0
- pytorch-lightning: 2.2.3
- torch: 2.2.0
- torchaudio: 2.0.1
- torchmetrics: 0.11.4
- torchvision: 0.15.1
* Packages:
- aiohttp: 3.9.1
- aiosignal: 1.3.1
- async-timeout: 4.0.3
- attrs: 23.2.0
- backpack-for-pytorch: 1.6.0
- bottleneck: 1.3.5
- certifi: 2022.12.7
- charset-normalizer: 3.1.0
- cmake: 3.26.0
- colorama: 0.4.6
- contourpy: 1.2.1
- cycler: 0.12.1
- einops: 0.8.0
- filelock: 3.10.0
- fonttools: 4.51.0
- frozenlist: 1.4.1
- fsspec: 2023.3.0
- hessianfree: 0.1
- idna: 3.4
- jinja2: 3.1.2
- kiwisolver: 1.4.5
- lightning: 2.2.0
- lightning-utilities: 0.11.3.post0
- lit: 15.0.7
- markupsafe: 2.1.2
- matplotlib: 3.8.4
- mpmath: 1.3.0
- multidict: 6.0.4
- networkx: 3.0
- numexpr: 2.8.4
- numpy: 1.24.2
- nvidia-cublas-cu11: 11.10.3.66
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu11: 11.7.101
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu11: 11.7.99
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu11: 11.7.99
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu11: 8.5.0.96
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu11: 10.9.0.58
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu11: 10.2.10.91
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu11: 11.4.0.1
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu11: 11.7.4.91
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu11: 2.14.3
- nvidia-nccl-cu12: 2.19.3
- nvidia-nvjitlink-cu12: 12.3.101
- nvidia-nvtx-cu11: 11.7.91
- nvidia-nvtx-cu12: 12.1.105
- packaging: 23.0
- pandas: 1.5.3
- pillow: 9.4.0
- pip: 24.1.1
- pyparsing: 3.1.2
- python-dateutil: 2.8.2
- pytorch-lightning: 2.2.3
- pytz: 2022.7
- pyyaml: 6.0
- requests: 2.28.2
- setuptools: 67.6.0
- six: 1.16.0
- sympy: 1.11.1
- torch: 2.2.0
- torchaudio: 2.0.1
- torchmetrics: 0.11.4
- torchvision: 0.15.1
- tqdm: 4.65.0
- triton: 2.2.0
- typing-extensions: 4.11.0
- unfoldnd: 0.2.1
- urllib3: 1.26.15
- wheel: 0.40.0
- yarl: 1.9.4
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.9
- release: 3.10.0-862.el7.x86_64
- version: #1 SMP Fri Apr 20 16:44:24 UTC 2018
</details>
### More info
_No response_ | 1medium
|
Title: Allow passing parameters to signal receiver
Body: ## Summary
Allow passing parameters to a signal receiver (when self is not available)
I.e.
```
crawler.signals.connect(receiver=cls.engine_stopped, signal=signals.engine_stopped, cb_kwargs={"lazy": True})

@classmethod
def engine_stopped(cls, lazy: bool) -> None:
    ...
```
## Motivation
Passing parameters to the receiver method would allow more context-dependent logic/behavior in it. | 1medium
|
Title: [BUG] ViT models can't load pretrained weights from models with different `cls_token`/`no_embed_class` settings
Body: **Describe the bug**
The title says it all. ViT models currently support changing some hyperparameters when loading pretrained weights (such as `img_size`). This is useful, when the loaded weights are intended to be used for further fine-tuning with different hyperparameters. However, `_load_weights` currently assumes that the default config was used.
**To Reproduce**
```python
timm.create_model("vit_large_patch16_384", pretrained=True, class_token=False, global_pool="avg")
# AttributeError: 'NoneType' object has no attribute 'copy_'
```
```python
timm.create_model("vit_large_patch16_384", pretrained=True, no_embed_class=True)
# RuntimeError: The size of tensor a (576) must match the size of tensor b (577) at non-singleton dimension 1
```
**Expected behavior**
Return ViT models with `class_token=False` and `no_embed_class=True`.
I don't have the time to fill out a proper PR, but the short version is that `_load_weights` should check if `model.cls_token` is `None` before attempting to copy it from the pretrained weights and `resize_pos_embed` should just drop the extra prefix tokens from the embeddings before doing the interpolation. | 2hard
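As an illustration only (this is not timm's actual `_load_weights` code), the guard described above might look roughly like the following; `pretrained_cls_token` is a hypothetical stand-in for the tensor provided by the checkpoint.
```python
import torch


def copy_cls_token(model: torch.nn.Module, pretrained_cls_token: torch.Tensor) -> None:
    # Only copy the class token when the target model was actually built with one;
    # it is None when the model was created with class_token=False.
    if getattr(model, "cls_token", None) is not None:
        with torch.no_grad():
            model.cls_token.copy_(pretrained_cls_token)
```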
|
Title: gradio demo doesn't work in huggingface space
Body: ### Describe the bug
When deploying the demo code from the docs on a Hugging Face Space, it produces the error "gradio.exceptions.error: 'data incompatible with the messages format'". Because the Gradio version when deploying is 5.0.1 and cannot be changed, I cannot fix it by updating the version. After some experimenting, I found that if I change `chatbot=gr.Chatbot(height=300)` to `chatbot=gr.Chatbot(height=300, type="messages")`, the bug is fixed.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import time

import gradio as gr

def slow_echo(message, history):
    for i in range(len(message)):
        time.sleep(0.3)
        yield "You typed: " + message[: i+1]

gr.ChatInterface(
    slow_echo,
    type="messages",
    chatbot=gr.Chatbot(height=300),
    textbox=gr.Textbox(placeholder="Ask me a yes or no question", container=False, scale=7),
    title="Yes Man",
    description="Ask Yes Man any question",
    theme="ocean",
    examples=["Hello", "Am I cool?", "Are tomatoes vegetables?"],
    cache_examples=True,
).launch()
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
gradio==5.0.1
```
### Severity
I can work around it | 1medium
|
Title: UnicodeDecodeError when using subprocess.getstatusoutput
Body: I've made this script for finding corrupt images using Imagemagick. Full code:
```
from pathlib import Path
import time
import subprocess
import concurrent.futures
from tqdm import tqdm
_err_set = set()
def _imgerr(_img):
    global _err_set
    output = subprocess.getstatusoutput("magick identify -regard-warnings \"" + str(_img) + "\"")
    if(int(output[0]) == 1):
        _err_set.add(str(_img))

_root = input("Input directory path: ")
file_set = set(Path(_root).rglob("*.jpg"))
print("Scanning...")
start1 = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as executor:
    list(tqdm(executor.map(_imgerr, file_set),total=int(len(file_set))))
finish1 = time.perf_counter()
with open('bad_img.txt', 'w', encoding="utf-8") as f:
    for item in sorted(_err_set):
        f.write('"' + item + '"' + "\n")
    f.close()
print(f'Total execution time [mt] = {round(finish1 - start1, 3)}s')
print(f'Average time per image = {round((finish1 - start1)/len(file_set), 10)}s')
print(f'Corrupt images = {len(_err_set)}')
```
I'm using tqdm for progress tracking. The problem is with this line:
```
list(tqdm(executor.map(_imgerr, file_set),total=int(len(file_set))))
```
If there are no non-ASCII characters in the image path then everything works fine, but if any Unicode character appears I get
>Exception has occurred: UnicodeDecodeError 'charmap' codec can't decode byte 0x81 in position 37: character maps to <undefined>
If I instead just use
```
executor.map(_imgerr, file_set)
```
everything works just fine, regardless of whether there are Unicode characters present or not. I've been scratching my head for a couple of hours now but still can't figure out what causes the error. Any suggestions are welcome! By the way, maybe it's relevant, but when debugging the error pops up in the function at the following line:
```
output = subprocess.getstatusoutput("magick identify -regard-warnings \"" + str(_img) + "\"")
```
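Not from the original post, but one way to sidestep the decode error (an assumption, offered as a sketch) is to replace `getstatusoutput`, which decodes output with the locale codec, with `subprocess.run` using an explicit encoding and lenient error handling:
```python
import subprocess

def _imgerr_safe(_img):
    result = subprocess.run(
        ["magick", "identify", "-regard-warnings", str(_img)],
        capture_output=True,
        text=True,
        encoding="utf-8",
        errors="replace",  # never raise UnicodeDecodeError on unexpected bytes
    )
    if result.returncode == 1:
        _err_set.add(str(_img))  # _err_set as defined in the script above
```
Passing the command as a list also avoids manually quoting paths that contain spaces or special characters.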
| 1medium
|
Title: DOC grammar issue in the governance page
Body: ### Describe the issue linked to the documentation
In the governance page at line: https://github.com/scikit-learn/scikit-learn/blob/59dd128d4d26fff2ff197b8c1e801647a22e0158/doc/governance.rst?plain=1#L161
there is a reference attached to "Enhancement proposals (SLEPs)."
However, after compiling, it is displayed as "a Enhancement proposals (SLEPs)" which is grammatically incorrect.
Page at: https://scikit-learn.org/stable/governance.html
### Suggest a potential alternative/fix
Fix it by updating the line with
```
an :ref:`slep`
``` | 0easy
|
Title: Image preprocessing parameters
Body: The function `preprocess_input()`, in encoders/_preprocessing.py, takes 'mean' and 'std' as parameters and apply the normalization on the data in the following way:
```
if mean is not None:
    mean = np.array(mean)
    x = x - mean

if std is not None:
    std = np.array(std)
    x = x / std
```
In my opinion, the mean/std here should be the statistics for the training dataset (data-specific). However, according to `get_preprocessing_params()`, in encoders/\_\_init\_\_.py, the mean/std are determined by the pretrained model, which depends on the training data used in the pretrained model.
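Purely as an illustration of the data-specific alternative mentioned above (the array shape and values are assumptions), the training-set statistics could be computed and applied like this:
```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.random((16, 32, 32, 3)).astype(np.float32)  # stand-in for real images

mean = x_train.mean(axis=(0, 1, 2))  # per-channel mean over the training set
std = x_train.std(axis=(0, 1, 2))    # per-channel std over the training set
x_norm = (x_train - mean) / std
```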
Just wondering, is there any reason why we do it based on the pretrained model? | 1medium
|
Title: select_related with ManyToMany through
Body: **Describe the bug**
Trying to query a ManyToMany through relationship and getting this error:
```bash
ormar.exceptions.RelationshipInstanceError: Relationship error - ForeignKey EffortResource is of type <class 'int'> while <class 'weakref.ProxyType'> passed as a parameter.
```
after running `await EffortStep.objects.select_related("users").all()`.
Models:
```python
class User(PublicIdMixin, ormar.Model):
    id = ormar.Integer(primary_key=True)

    class Meta(BaseMeta):
        tablename = "users"


class EffortStepUser(ormar.Model):
    id = ormar.Integer(primary_key=True)

    class Meta(BaseMeta):
        tablename = "effort_step_x_user"


class EffortStep(PublicIdMixin, DateFieldsMixin, TenantAwareModel, ormar.Model):
    id = ormar.Integer(primary_key=True)
    users = ormar.ManyToMany(
        User,
        through=EffortStepUser,
        through_relation_name="step_id",
        through_reverse_relation_name="user_id",
    )

    class Meta(BaseMeta):
        tablename = "effort_step"


class EffortResource(DateFieldsMixin, TenantAwareModel, ormar.Model):
    id = ormar.Integer(primary_key=True)
    step: EffortStep = ormar.ForeignKey(EffortStep, name="step_id", nullable=False)

    class Meta(BaseMeta):
        tablename = "effort_resource"
```
### Full traceback
```bash
Traceback (most recent call last):
File "/.venvs/core/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/fastapi/applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "/venvs/core/lib/python3.11/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/cors.py", line 92, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/cors.py", line 147, in simple_response
await self.app(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/.venvs/core/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/.venvs/core/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/fastapi/routing.py", line 235, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/efforts/router.py", line 95, in add_effort_collaborators
return await service.add_step_zero_collaborators(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/efforts/service.py", line 165, in add_step_zero_collaborators
await EffortStep.objects.select_related("users")
File "/.venvs/core/lib/python3.11/site-packages/ormar/queryset/queryset.py", line 982, in get
processed_rows = self._process_query_result_rows(rows)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/queryset/queryset.py", line 196, in _process_query_result_rows
return self.model.merge_instances_list(result_rows) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/mixins/merge_mixin.py", line 43, in merge_instances_list
model = cls.merge_two_instances(next_model, model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/mixins/merge_mixin.py", line 91, in merge_two_instances
cls.merge_two_instances(
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/mixins/merge_mixin.py", line 91, in merge_two_instances
cls.merge_two_instances(
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/mixins/merge_mixin.py", line 82, in merge_two_instances
setattr(other, field_name, value_to_set)
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/newbasemodel.py", line 175, in __setattr__
object.__setattr__(self, name, value)
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/descriptors/descriptors.py", line 110, in __set__
model = instance.Meta.model_fields[self.name].expand_relationship(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 541, in expand_relationship
model = constructors.get( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 399, in _extract_model_from_sequence
return [
^
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 400, in <listcomp>
self.expand_relationship( # type: ignore
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 541, in expand_relationship
model = constructors.get( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 475, in _construct_model_from_pk
raise RelationshipInstanceError(
ormar.exceptions.RelationshipInstanceError: Relationship error - ForeignKey EffortResource is of type <class 'int'> while <class 'weakref.ProxyType'> passed as a parameter.
```
**Versions (please complete the following information):**
- Database backend used (mysql/sqlite/postgress): **postgres 14.1**
- Python version: 3.11
- `ormar` version: 0.12.0
- if applicable `fastapi` version 0.88
Thanks!
| 2hard
|
Title: Incorrect(?) use of db.session in flask_appbuilder.AppBuilder
Body: ### Environment
Flask-Appbuilder version: 4.1.3
### Describe the expected results
We have an Azure SQL database that we use with flask-appbuilder. This database requires that we request a new token every hour or so (expiry on the token is 3600 seconds). To do this, we use an event listener of the form: `@event.listens_for(engine, "do_connect")`, that requests a new token and sets this in the connection parameters for the engine when creating a new connection. The expected behaviour would be that once the token has expired, and it needs to create a new connection to the database, it runs the event from above and acquires a new token that can be used for connections.
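For context, a minimal sketch of the kind of `do_connect` listener described above; the connection string and the `get_azure_token()` helper are placeholder assumptions, not taken from the actual application.
```python
import struct

from sqlalchemy import create_engine, event

SQL_COPT_SS_ACCESS_TOKEN = 1256  # pyodbc attribute used to pass an AAD access token

engine = create_engine(
    "mssql+pyodbc://@myserver.database.windows.net/mydb"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

def get_azure_token() -> str:
    raise NotImplementedError("placeholder: fetch a fresh AAD token here")

@event.listens_for(engine, "do_connect")
def provide_token(dialect, conn_rec, cargs, cparams):
    # Runs whenever the pool opens a *new* DBAPI connection, so each new
    # connection gets a freshly requested token.
    token_bytes = get_azure_token().encode("utf-16-le")
    token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)
    cparams.setdefault("attrs_before", {})[SQL_COPT_SS_ACCESS_TOKEN] = token_struct
```
Note that `do_connect` only fires when a new DBAPI connection is created, not when an existing pooled connection is reused, which may be relevant to the behavior described below.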
### Describe the actual results
We're facing an issue where after an hour (when the token expires) if you perform a request to the application you'll get an Internal Server Error with an error like this: `sqlalchemy.exc.OperationalError: (pyodbc.OperationalError) ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x20 (32) (SQLExecDirectW)')`.
Subsequent requests after this will be fine, until the token expires again at which point it'll happen again.
My suspicion is that there is a problem with the use of the `db.session` when initializing the appbuilder object: `flask_appbuilder.AppBuilder(app, db.session, ...)` because the `db.session` in my understanding is not meant to be long-lived, it's meant to be a short-lived object that you use for a transaction and then close afterwards: see [here](https://docs.sqlalchemy.org/en/14/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it).
I further don't know if engine events are triggered for sessions at all (and that this may be the cause of the token expiry -> connection failure issue that I'm seeing).
```pytb
app|ERROR|Exception on / [GET]
Traceback (most recent call last):
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
pyodbc.OperationalError: ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x20 (32) (SQLExecDirectW)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.venv/lib/python3.9/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/app/.venv/lib/python3.9/site-packages/flask/app.py", line 1519, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/app/.venv/lib/python3.9/site-packages/flask/app.py", line 1517, in full_dispatch_request
rv = self.dispatch_request()
File "/app/.venv/lib/python3.9/site-packages/flask/app.py", line 1503, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/app/app.py", line 72, in index
return self.render_template(index, appbuilder=self.appbuilder)
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/baseviews.py", line 322, in render_template
return render_template(
File "/app/.venv/lib/python3.9/site-packages/flask/templating.py", line 154, in render_template
return _render(
File "/app/.venv/lib/python3.9/site-packages/flask/templating.py", line 128, in _render
rv = template.render(context)
File "/app/.venv/lib/python3.9/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/app/.venv/lib/python3.9/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/app/webapp/templates/index_not_auth.html", line 1, in top-level template code
{% extends "appbuilder/base.html" %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/base.html", line 1, in top-level template code
{% extends base_template %}
File "/app/webapp/templates/custom_base.html", line 1, in top-level template code
{% extends 'appbuilder/baselayout.html' %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
{% import 'appbuilder/baselib.html' as baselib %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 37, in top-level template code
{% block body %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 8, in block 'body'
{% block navbar %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 10, in block 'navbar'
{% include 'appbuilder/navbar.html' %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/navbar.html", line 29, in top-level template code
{% include 'appbuilder/navbar_menu.html' %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/navbar_menu.html", line 11, in top-level template code
{% if item1 | is_menu_visible %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/filters.py", line 136, in is_menu_visible
return self.security_manager.has_access("menu_access", item.name)
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/security/manager.py", line 1526, in has_access
return self.is_item_public(permission_name, view_name)
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/security/manager.py", line 1406, in is_item_public
permissions = self.get_public_permissions()
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/security/sqla/manager.py", line 322, in get_public_permissions
role = self.get_public_role()
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/security/sqla/manager.py", line 316, in get_public_role
self.get_session.query(self.role_model)
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2845, in one_or_none
return self._iter().one_or_none()
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2903, in _iter
result = self.session.execute(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 1696, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2043, in _handle_dbapi_exception
util.raise_(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (pyodbc.OperationalError) ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x20 (32) (SQLExecDirectW)')
...
(Background on this error at: https://sqlalche.me/e/14/e3q8)
```
### Steps to reproduce
Unclear. | 2hard
|
Title: API not capturing conversion_id property update
Body: ### Describe the problem/bug
When converting an input measurement the 'conversion_id' prop of "device measurement settings" in the API stays null while the db reflects the update
### Versions:
- Mycodo Version: 8.10.1
- Raspberry Pi Version: 3B+
- Raspbian OS Version: Raspberry Pi OS Lite
### Reproducibility
Convert an input measurement via the UI, then
GET api/settings/device_measurements/by_device_id/[unique_id]
Bug: the DB has the conversion_id reference, but the API does not.
Thank you!
| 1medium
|
Title: [Storage] Bump GCSFuse to 2.4.0+
Body: [GCSFuse 2.4.0](https://github.com/GoogleCloudPlatform/gcsfuse/releases/tag/v2.4.0) introduces parallel downloads which helps with loading large files (e.g., model checkpoints).
We should bump GCSFuse version + investigate the tradeoffs of enabling parallel downloads (by setting `file-cache:enable-parallel-downloads:true` in the gcsfuse config file). | 1medium
|
Title: Feature Request: Automatic reloading for existing scripts?
Body: While a Jupyter notebook is a great place for doing the initial development, at some point you want to move your automations out of it. It would be great if (at least) the existing scripts were automatically reloaded on change, e.g., by using inotify listeners on the existing script files and calling `reload()` when a modification is detected (a sketch of the idea follows below).
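As an illustration of the idea only (not pyscript's implementation), a watcher along these lines could be built with the `watchdog` package; the watched path and the `reload_scripts()` callback are placeholders for whatever would actually trigger the reload service.
```python
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


def reload_scripts() -> None:
    print("change detected, would call the reload service here")


class ScriptChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # React only to changes of Python script files, not directories.
        if not event.is_directory and event.src_path.endswith(".py"):
            reload_scripts()


observer = Observer()
observer.schedule(ScriptChangeHandler(), path="./scripts", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```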
My personal development setup looks like this: I'm using notebooks to do the initial development, and as soon as I'm confident that things are mostly working, I move the code over into a new script. For development, I'm using PyCharm, which I have configured to auto-deploy to the production instance whenever a file gets saved. In such cases, the ability to avoid a manual reload service call would simplify the workflow. | 1medium
|
Title: Plotly min.js import missing .js suffix
Body: When running this cell in a JupyterLab instance:
```python
import plotly.io as pio
pio.renderers.default = "notebook_connected"
import plotly.graph_objects as go
go.Figure()
```
The first time Plotly is loaded, the figure is blank and the following errors appear in the dev console:
```
GET https://cdn.plot.ly/plotly-3.0.1.min net::ERR_ABORTED 403 (Forbidden)
Uncaught ReferenceError: Plotly is not defined
at <anonymous>:1:179
at P.attachWidget (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1859455)
at P.insertWidget (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1858919)
at M._insertOutput (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1282454)
at M.onModelChanged (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1278810)
at m (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1832098)
at Object.l [as emit] (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1831774)
at a.emit (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1829611)
at d._onListChanged (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1273587)
at m (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1832098)
```
When running the cell for a second time, the figure appears. If the page is refreshed, the figure disappears again.
It looks like the issue can be traced back to this line, where `.js` is removed from the CDN path:
https://github.com/plotly/plotly.py/blob/ae0fbedce7ba3be6450aba350f12c1fb043e8eb8/plotly/io/_base_renderers.py#L286
This is with Python 3.11.11, Plotly 6.0.1 and Jupyterlab 4.3.6, and occurs in both Chrome and Edge.
```
anyio==4.9.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==3.0.0
async-lru==2.0.5
attrs==25.3.0
babel==2.17.0
beautifulsoup4==4.13.3
bleach==6.2.0
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
comm==0.2.2
debugpy==1.8.13
decorator==5.2.1
defusedxml==0.7.1
executing==2.2.0
fastjsonschema==2.21.1
fqdn==1.5.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
ipykernel==6.29.5
ipython==9.0.2
ipython_pygments_lexers==1.1.1
isoduration==20.11.0
jedi==0.19.2
Jinja2==3.1.6
json5==0.10.0
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
jupyter-events==0.12.0
jupyter-lsp==2.2.5
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyter_server==2.15.0
jupyter_server_terminals==0.5.3
jupyterlab==4.3.6
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.3
MarkupSafe==3.0.2
matplotlib-inline==0.1.7
mistune==3.1.3
narwhals==1.31.0
nbclient==0.10.2
nbconvert==7.16.6
nbformat==5.10.4
nest-asyncio==1.6.0
notebook_shim==0.2.4
overrides==7.7.0
packaging==24.2
pandocfilters==1.5.1
parso==0.8.4
pexpect==4.9.0
platformdirs==4.3.7
plotly==6.0.1
prometheus_client==0.21.1
prompt_toolkit==3.0.50
psutil==7.0.0
ptyprocess==0.7.0
pure_eval==0.2.3
pycparser==2.22
Pygments==2.19.1
python-dateutil==2.9.0.post0
python-json-logger==3.3.0
PyYAML==6.0.2
pyzmq==26.3.0
referencing==0.36.2
requests==2.32.3
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rpds-py==0.23.1
Send2Trash==1.8.3
six==1.17.0
sniffio==1.3.1
soupsieve==2.6
stack-data==0.6.3
terminado==0.18.1
tinycss2==1.4.0
tornado==6.4.2
traitlets==5.14.3
types-python-dateutil==2.9.0.20241206
typing_extensions==4.12.2
uri-template==1.3.0
urllib3==2.3.0
wcwidth==0.2.13
webcolors==24.11.1
webencodings==0.5.1
websocket-client==1.8.0
```
It still doesn't work when downgrading to Plotly 5.24.1, although the console error is slightly different:
```
Uncaught ReferenceError: require is not defined
at <anonymous>:1:17
at P.attachWidget (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1859455)
at P.insertWidget (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1858919)
at M._insertOutput (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1282454)
at M.onModelChanged (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1278810)
at m (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1832098)
at Object.l [as emit] (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1831774)
at a.emit (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1829611)
at d._onListChanged (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1273587)
at m (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1832098)
```
When exporting to html with `nbconvert`, the 403 error still appears but the plot displays. Changing line 7568 of the attached html file to `<script type="module">import "https://cdn.plot.ly/plotly-3.0.1.min.js"</script>` removes the error from the console.
[Plotly min.js issue.zip](https://github.com/user-attachments/files/19428204/Plotly.min.js.issue.zip) | 1medium
|
Title: Fix Frontend Failing Test: paddle - creation.paddle.assign
Body: To-do List: https://github.com/unifyai/ivy/issues/27500 | 1medium
|
Title: The swipe operation is not smooth enough #884
Body: I'm using Airtest to test automatic slide-to-unlock captchas, but many pages use JS to judge whether the slide was performed by a human or a machine, and a stiff swipe gets classified as machine-made.
I'm currently using the image-recognition-based swipe on Windows; the swipe code is as follows:
swipe({image}, vector=[0.1776, 0.0056], duration=0.2, steps=randint(2, 5))
No matter how I tune the parameters, swipe always splits the given distance vector into a few linear segments, so the motion is stiff and easy to detect (a trajectory sketch illustrating the request is shown below).
Could you provide a more human-like swipe solution? | 1medium
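Purely illustrative and not an Airtest API: one way to make a swipe feel less mechanical is to generate many intermediate points along a slightly curved, eased path and feed them to whatever low-level touch-move call is available; the parameters below are arbitrary assumptions.
```python
import math
import random


def human_like_path(start, end, n_points=40):
    """Return (x, y) points easing out along a gently curved path from start to end."""
    (x0, y0), (x1, y1) = start, end
    bow = random.uniform(-0.08, 0.08)  # random perpendicular bow, so the path is not a straight line
    points = []
    for i in range(n_points + 1):
        t = i / n_points
        eased = 1 - (1 - t) ** 3  # ease-out: fast start, slow finish
        x = x0 + (x1 - x0) * eased
        y = y0 + (y1 - y0) * eased
        # Offset perpendicular to the swipe direction, largest mid-swipe.
        x += -(y1 - y0) * bow * math.sin(math.pi * t)
        y += (x1 - x0) * bow * math.sin(math.pi * t)
        points.append((x, y))
    return points


print(human_like_path((100, 500), (600, 500))[:5])
```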
|
Title: Errbot 4.2.2 python 2.7. Error with command errbot.
Body: In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [ ] Reporting a bug
* [ ] Suggesting a new feature
* [x ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 4.2.2
* OS version: ubuntu 16.04 lts
* Python version: 2.7
* Using a virtual environment: yes
### Issue description
Please describe your bug/feature/problem here.
The more information you can provide, the better.
I can only use Python 2.7. I'm trying to install errbot version 4.2.2. I installed it, but I got an error when running the errbot command. Can you help me run errbot without using Python 3?
> 17:42:12 DEBUG errbot.specific_plugin_ma Load the one remaining...
17:42:12 ERROR yapsy Unable to import plugin: /home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/backends/text
Traceback (most recent call last):
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/yapsy/PluginManager.py", line 488, in loadPlugins
candidate_module = imp.load_module(plugin_module_name,plugin_file,candidate_filepath+".py",("py","r",imp.PY_SOURCE))
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/backends/text.py", line 14, in <module>
from errbot.rendering import ansi, text, xhtml, imtext
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/rendering/__init__.py", line 10, in <module>
MD_ESCAPE_RE = re.compile(u'|'.join(re.escape(c) for c in Markdown.ESCAPED_CHARS))
AttributeError: type object 'Markdown' has no attribute 'ESCAPED_CHARS'
17:42:12 ERROR errbot.bootstrap Unable to load or configure the backend.
Traceback (most recent call last):
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/bootstrap.py", line 126, in setup_bot
bot = backendpm.get_plugin_by_name(backend_name)
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/specific_plugin_manager.py", line 87, in get_plugin_by_name
raise Exception(u'Error loading plugin %s:\nError:\n%s\n' % (name, formatted_error))
Exception: Error loading plugin Text:
Error:
<type 'exceptions.AttributeError'>:
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/yapsy/PluginManager.py", line 488, in loadPlugins
candidate_module = imp.load_module(plugin_module_name,plugin_file,candidate_filepath+".py",("py","r",imp.PY_SOURCE))
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/backends/text.py", line 14, in <module>
from errbot.rendering import ansi, text, xhtml, imtext
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/rendering/__init__.py", line 10, in <module>
MD_ESCAPE_RE = re.compile(u'|'.join(re.escape(c) for c in Markdown.ESCAPED_CHARS))
### Steps to reproduce
I created a folder and a virtual environment.
I downloaded errbot 4.2.2 from here: https://pypi.python.org/pypi/errbot/4.2.2
I installed python-telegram-bot.
I created a requirements.txt file.
I installed it with pip install errbot, and I also accepted the installation suggestion in PyCharm.
When I check errbot --version, it reports 4.2.2.
Then I ran errbot and got the error.
In case of a bug, please describe the steps we need to take in order to reproduce your issue.
If you cannot easily reproduce the issue please let us know and provide as much information as you can which might help us pinpoint the problem.
### Additional info
If you have any more information, please specify it here.
| 1medium
|
Title: Background upscale isn't working / Real-ESRGAN ignored?
Body: Hello, Thank you for this great project! 💙
I'm running this on Windows 10 and Anaconda, Installation was very easy and simple to follow thanks to your step-by-step instructions, I appreciate it.
### Problem Description:
I've added the argument --bg_upsampler realesrgan,
but it seems to be ignored and only the face is upscaled, without the background. I get this warning:
```
inference_codeformer.py:22: RuntimeWarning: The unoptimized RealESRGAN is slow on CPU. We do not use it. If you really want to use it, please modify the corresponding codes.
warnings.warn('The unoptimized RealESRGAN is slow on CPU. We do not use it. '
Face detection model: retinaface_resnet50
Background upsampling: False, Face upsampling: False
Processing: 5a.png
detect 1 faces
All results are saved in results/_SOURCE__0.7
```
Since I'm not a programmer I don't know how to fix or mess with code in general,
Can you please tell me how to make it work?
Thanks ahead! | 1medium
|
Title: Cannot send message? Object of type Message is not JSON serializable
Body: Whenever I try to send a message using this function:
```py
async def generate_response(message):
    with open("message.txt", "w") as f:
        for chunk in client.send_message("capybara", message):
            f.write(chunk["text_new"], end="", flush=True)
    with open("message.txt", "r+") as f:
        return f.read()
```
I just get this error:
```
Ignoring exception in on_message
Traceback (most recent call last):
File "/home/runner/Bowkii/venv/lib/python3.9/site-packages/nextcord/client.py", line 512, in _run_event
await coro(*args, **kwargs)
File "main.py", line 370, in on_message
response = await generate_response(message)
File "main.py", line 14, in generate_response
for chunk in client.send_message("capybara", message):
File "/home/runner/Bowkii/venv/lib/python3.9/site-packages/poe.py", line 329, in send_message
message_data = self.send_query("SendMessageMutation", {
File "/home/runner/Bowkii/venv/lib/python3.9/site-packages/poe.py", line 202, in send_query
payload = json.dumps(json_data, separators=(",", ":"))
File "/nix/store/p21fdyxqb3yqflpim7g8s1mymgpnqiv7-python3-3.8.12/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/nix/store/p21fdyxqb3yqflpim7g8s1mymgpnqiv7-python3-3.8.12/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/nix/store/p21fdyxqb3yqflpim7g8s1mymgpnqiv7-python3-3.8.12/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/nix/store/p21fdyxqb3yqflpim7g8s1mymgpnqiv7-python3-3.8.12/lib/python3.8/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Message is not JSON serializable
```
I'm pretty sure I did everything correctly and I have not seen anybody else with this error | 1medium
|
Title: losing DatasetInfo in Dataset.map when num_proc > 1
Body: ### Describe the bug
Hello and thanks for developing this package!
When I process a Dataset with the map function using multiple processes, some set attributes of the DatasetInfo get lost and are None in the resulting Dataset.
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetInfo
def run_map(num_proc):
    dataset = Dataset.from_dict(
        {"col1": [0, 1], "col2": [3, 4]},
        info=DatasetInfo(
            dataset_name="my_dataset",
        ),
    )
    ds = dataset.map(lambda x: x, num_proc=num_proc)
    print(ds.info.dataset_name)

run_map(1)
run_map(2)
```
This puts out:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
None
```
### Expected behavior
I expect the DatasetInfo to be kept as it was and there should be no difference in the output of running map with num_proc=1 and num_proc=2.
Expected output:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
my_dataset
```
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.18
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2 | 1medium
|
Title: DataArray.rolling fails with chunk size of 1 or 2 (reemergence of issue #9862)
Body: ### What happened?
The problem is exactly as written in closed issue #9862, but I'm using:
- xarray: 2025.1.2
- dask: 2025.2.0
Since everything is the same (including traceback and behavior when pasted into console or binder), please refer to original issue for complete description.
I didn't click "new issue" since it's an old issue that was closed, but is not fixed.
### What did you expect to happen?
We would expect the rolling mean to calculate correctly.
### Minimal Complete Verifiable Example
```Python
import dask.array as da
import xarray as xr
import numpy as np
# Dimensions and sizes
nx, ny, nt = 100, 200, 50 # size of x, y, and time dimensions
x = np.linspace(0, 10, nx) # x-coordinates
y = np.linspace(0, 20, ny) # y-coordinates
time = np.linspace(0, 1, nt) # time coordinates
# Generate a random Dask array with lazy computation
data = da.random.random(size=(nx, ny, nt), chunks=(100, 200, 1))
# Create an xarray DataArray with coordinates and attributes
data_array = xr.DataArray(
data,
dims=["x", "y", "time"],
coords={"x": x, "y": y, "time": time},
name="dummy_data",
attrs={"units": "arbitrary", "description": "Dummy 3D dataset"}
)
d_rolling = data_array.rolling(time=5).mean()
d_rolling.compute()
```
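Not part of the original report: continuing the example above, a possible interim workaround (an assumption based on the chunk-size constraint in the error) is to rechunk so that the rolled dimension sits in a single chunk before rolling.
```python
# Same data_array as in the example above.
d_rolling = data_array.chunk({"time": -1}).rolling(time=5).mean()
d_rolling.compute()
```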
### MVCE confirmation
- [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [x] Complete example — the example is self-contained, including all data and the text of any traceback.
- [x] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [x] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
Traceback (most recent call last):
Cell In[6], line 24
d_rolling.compute()
File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/dataarray.py:1206 in compute
return new.load(**kwargs)
File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/dataarray.py:1174 in load
ds = self._to_temp_dataset().load(**kwargs)
File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/dataset.py:900 in load
evaluated_data: tuple[np.ndarray[Any, Any], ...] = chunkmanager.compute(
File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/namedarray/daskmanager.py:85 in compute
return compute(*data, **kwargs) # type: ignore[no-untyped-call, no-any-return]
File /srv/conda/envs/notebook/lib/python3.10/site-packages/dask/base.py:662 in compute
results = schedule(dsk, keys, **kwargs)
File /srv/conda/envs/notebook/lib/python3.10/site-packages/dask/_task_spec.py:740 in __call__
return self.func(*new_argspec, **kwargs)
ValueError: Moving window (=5) must between 1 and 4, inclusive
```
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0]
python-bits: 64
OS: Linux
OS-release: 6.8.0-52-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2025.1.2
pandas: 2.2.3
numpy: 2.1.3
scipy: 1.15.2
netCDF4: 1.7.2
pydap: 3.5.3
h5netcdf: 1.5.0
h5py: 3.13.0
zarr: 2.18.3
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: 3.11.0
bottleneck: 1.4.2
dask: 2025.2.0
distributed: 2025.2.0
matplotlib: 3.10.1
cartopy: 0.24.0
seaborn: 0.13.2
numbagg: 0.9.0
fsspec: 2025.2.0
cupy: None
pint: 0.24.4
sparse: 0.15.5
flox: None
numpy_groupies: None
setuptools: 75.8.0
pip: 25.0
conda: None
pytest: None
mypy: None
IPython: 8.32.0
sphinx: None
</details>
| 1medium
|
Title: Union + __fields__() has misleading error message
Body: While the error is misleading (I should fix that), in GraphQL you can't select fields of a union type directly; you must use fragments to select fields depending on each concrete type you want to handle.
In your case, could you try:
```py
import sgqlc
from sgqlc.operation import Operation
from sgqlc.types import String, Type, Union, Field, non_null
class TypeA(Type):
    i = int

class TypeB(Type):
    s = str

class TypeU(Union):
    __types__ = (TypeA, TypeB)

class Query(sgqlc.types.Type):
    some_query = Field(non_null(TypeU), graphql_name='someQuery')
op = Operation(Query, name="op_name")
q = op.some_query()
q.__fields__() # this line throws 'AttributeError: TypeA has no field name'
# correct behavior would be to:
q.__as__(TypeA).i()
```
_Originally posted by @barbieri in https://github.com/profusion/sgqlc/issues/71#issuecomment-555237354_ | 1medium
|
Title: dask-expr is now a hard dependency
Body: dask-expr is now a hard dependency of dask[dataframe].
We still need to update
- `distributed/continuous_integration/recipes` (see conda build workflow on distributed, currently failing)
- https://github.com/conda-forge/dask-feedstock
- distributed gpuci failing
- other?
CC @phofl @jrbourbeau
XREFs
- dask/dask#10967
- dask/dask#10976
- dask/distributed#8552
| 1medium
|
Title: "Add to Slack" Button throws OAuth error
Body: The "Add to Slack" button here https://api.slack.com/docs/slack-button fails with the error
Oops, Something Went Wrong!
Please try again from here or contact the app owner (reason: invalid_browser: This can occur due to page reload, not beginning the OAuth flow from the valid starting URL, or the /slack/install URL not using https://)
This is repeatable on Firefox & Chrome.
The https://XXX.com/slack/install button works. I can also get the "Add to Slack" button to temporarily work if I first go to the https://XXX.com/slack/install page even if I do not click anything there.
Is there a function that https://XXX.com/slack/install is calling when it loads or a setting I am missing to get this to work?
I am using the URL https://slack.com/oauth/v2/authorize?client_id={{ client_id }}&scope={{ scopes }}&state={{ unique_user_code }} for the button | 1medium
|
Title: Display Addition: Generic 20x4 LCD
Body: Hi Kyle,
Can you add support for generic 20x4 LCD displays?
I was able to set one up and get it to display data using the 16x4 option, but it’s limited to 16 characters wide.

| 1medium
|
Title: the issue with next parameter not accessible anywhere, in the custom adapter
Body: # 🛑 Stop
The issue tracker has been moved to https://codeberg.org/allauth/django-allauth/issues.
Please submit your issue there.
NEXT after google/login/callback is empty or None; there is no proven way to target or access next_url and other URL parameters in the URL before the social login.
Dumb enough, this is a persistent bug on this project, I suppose, yet the long-nosed creature has yet to fix it.
Fix it | 1medium
|
Title: How to open doc2vec trained on an older version of gensim?
Body:
I have a large number of models trained on an older version of gensim. I recently updated my Python libraries, and gensim was bumped to the latest version. The problem is that Doc2Vec.load refuses to load the older versions. Is there a compatibility mode available? Or what's the cleanest way to load the old models and save them in the new format? I am getting the following error:
AttributeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 model = Doc2Vec.load(r'Z:\process\edgar\business_doc2vec\20230731.model')
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\doc2vec.py:815, in Doc2Vec.load(cls, *args, **kwargs)
810 except AttributeError as ae:
811 logger.error(
812 "Model load error. Was model saved using code from an older Gensim version? "
813 "Try loading older model using gensim-3.8.3, then re-saving, to restore "
814 "compatibility with current code.")
--> 815 raise ae
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\doc2vec.py:809, in Doc2Vec.load(cls, *args, **kwargs)
786 """Load a previously saved :class:`~gensim.models.doc2vec.Doc2Vec` model.
787
788 Parameters
(...)
806
807 """
808 try:
--> 809 return super(Doc2Vec, cls).load(*args, rethrow=True, **kwargs)
810 except AttributeError as ae:
811 logger.error(
812 "Model load error. Was model saved using code from an older Gensim version? "
813 "Try loading older model using gensim-3.8.3, then re-saving, to restore "
814 "compatibility with current code.")
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\word2vec.py:1949, in Word2Vec.load(cls, rethrow, *args, **kwargs)
1947 except AttributeError as ae:
1948 if rethrow:
-> 1949 raise ae
1950 logger.error(
1951 "Model load error. Was model saved using code from an older Gensim Version? "
1952 "Try loading older model using gensim-3.8.3, then re-saving, to restore "
1953 "compatibility with current code.")
1954 raise ae
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\word2vec.py:1942, in Word2Vec.load(cls, rethrow, *args, **kwargs)
1923 """Load a previously saved :class:`~gensim.models.word2vec.Word2Vec` model.
1924
1925 See Also
(...)
1939
1940 """
1941 try:
-> 1942 model = super(Word2Vec, cls).load(*args, **kwargs)
1943 if not isinstance(model, Word2Vec):
1944 rethrow = True
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\utils.py:487, in SaveLoad.load(cls, fname, mmap)
484 compress, subname = SaveLoad._adapt_by_suffix(fname)
486 obj = unpickle(fname)
--> 487 obj._load_specials(fname, mmap, compress, subname)
488 obj.add_lifecycle_event("loaded", fname=fname)
489 return obj
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\word2vec.py:1958, in Word2Vec._load_specials(self, *args, **kwargs)
1956 def _load_specials(self, *args, **kwargs):
1957 """Handle special requirements of `.load()` protocol, usually up-converting older versions."""
-> 1958 super(Word2Vec, self)._load_specials(*args, **kwargs)
1959 # for backward compatibility, add/rearrange properties from prior versions
1960 if not hasattr(self, 'ns_exponent'):
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\utils.py:518, in SaveLoad._load_specials(self, fname, mmap, compress, subname)
516 logger.info("loading %s recursively from %s.* with mmap=%s", attrib, cfname, mmap)
517 with ignore_deprecation_warning():
--> 518 getattr(self, attrib)._load_specials(cfname, mmap, compress, subname)
520 for attrib in getattr(self, '__numpys', []):
521 logger.info("loading %s from %s with mmap=%s", attrib, subname(fname, attrib), mmap)
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\utils.py:1522, in deprecated.<locals>.decorator.<locals>.new_func1(*args, **kwargs)
1515 @wraps(func)
1516 def new_func1(*args, **kwargs):
1517 warnings.warn(
1518 fmt.format(name=func.__name__, reason=reason),
1519 category=DeprecationWarning,
1520 stacklevel=2
1521 )
-> 1522 return func(*args, **kwargs)
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\doc2vec.py:328, in Doc2Vec.docvecs(self)
325 @property
326 @deprecated("The `docvecs` property has been renamed `dv`.")
327 def docvecs(self):
--> 328 return self.dv
AttributeError: 'Doc2Vec' object has no attribute 'dv' | 1medium
|
Title: See country total instead of province.
Body: I don't know if I'm just missing it, but I can't seem to find a query parameter that returns the latest cases per country, instead of per province, as is currently the case. Is the only possibility at the moment to go through each province belonging to a country and to add the total cases together?
Thanks | 1medium
|
Title: Sitting with "about:blank" in Chrome Ubuntu 24.10
Body: ### Bug Description
Fails to run with Ubuntu 24.10 on x86_64
Code:
`from langchain_ollama import ChatOllama`
`from browser_use import Agent, Browser, BrowserConfig`
`from pydantic import SecretStr`
`import asyncio`
`from dotenv import load_dotenv`
`load_dotenv()`
`async def main():`
` llm=ChatOllama(model="qwen2.5", num_ctx=32000)`
` agent = Agent(`
` task="Compare deepseek and open ai pricing",`
` llm=llm,`
` )`
` await agent.run()`
`asyncio.run(main())`
The about:blank never gets updated.
Playwright works fine

### Reproduction Steps
Standard install. Just run the code above. It seems to be due to Python not being able to control Playwright when working with the local Ollama model.
### Code Sample
```python
from langchain_ollama import ChatOllama
from browser_use import Agent, Browser, BrowserConfig
from pydantic import SecretStr
import asyncio
from dotenv import load_dotenv
load_dotenv()
async def main():
# Initialize the model
llm=ChatOllama(model="qwen2.5", num_ctx=32000)
agent = Agent(
task="Compare deepseek and open ai pricing",
llm=llm,
)
await agent.run()
asyncio.run(main())
```
### Version
0.1.40
### LLM Model
Local Model (Specify model in description)
### Operating System
Ubuntu 24.10
### Relevant Log Output
```shell
``` | 1medium
|
Title: Enable Ruff format implicit string concatenation
Body: When this is stable, revisit `ISC001` disable lints.
- https://github.com/astral-sh/ruff/issues/9457#issuecomment-2437519130 | 1medium
|
Title: Set Library Search Order not working in child suites when used in __init__ file
Body: There is a bug in Robotframework that when I set the `Set Library Search Order`, it is not honored anymore in the child suites, when I initially set it in a `__init__.robot`
I use this Search Order with the python remote server:
```
Import Library Remote htttp://xxxxx:yyyy WITH NAME RemoteLib
Set Library Search Order RemoteLib
```
Folder structure
```
my_project
├── __init__.robot => "Set Library Search Order " in the `Suite Setup`
├── test.robot => "Library Search Order not set anymore"
├── ...
│
``` | 1medium
|
Title: `test_expected_minimum` failure
Body: One test for the expected minimum function fails somewhere deep in scipy. Any ideas on how to track this down?
```
$ pytest --pdb -x -m 'not slow_test' skopt/tests/test_utils.py (skopt)
============================= test session starts ==============================
platform darwin -- Python 3.5.2, pytest-3.0.7, py-1.4.31, pluggy-0.4.0
rootdir: /Users/thead/git/scikit-optimize, inifile:
collected 3 items
skopt/tests/test_utils.py ..F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@pytest.mark.fast_test
def test_expected_minimum():
res = gp_minimize(bench3,
[(-2.0, 2.0)],
x0=[0.],
noise=0.0,
n_calls=20,
> random_state=1)
skopt/tests/test_utils.py:81:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skopt/optimizer/gp.py:238: in gp_minimize
callback=callback, n_jobs=n_jobs)
skopt/optimizer/base.py:249: in base_minimize
result = optimizer.tell(next_x, next_y, fit=fit_model)
skopt/optimizer/optimizer.py:407: in tell
est.fit(self.space.transform(self.Xi), self.yi)
skopt/learning/gaussian_process/gpr.py:194: in fit
super(GaussianProcessRegressor, self).fit(X, y)
../../anaconda/envs/skopt/lib/python3.5/site-packages/sklearn/gaussian_process/gpr.py:217: in fit
bounds))
../../anaconda/envs/skopt/lib/python3.5/site-packages/sklearn/gaussian_process/gpr.py:424: in _constrained_optimization
fmin_l_bfgs_b(obj_func, initial_theta, bounds=bounds)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/lbfgsb.py:193: in fmin_l_bfgs_b
**opts)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/lbfgsb.py:330: in _minimize_lbfgsb
f, g = func_and_grad(x)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/lbfgsb.py:278: in func_and_grad
f = fun(x, *args)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/optimize.py:289: in function_wrapper
return function(*(wrapper_args + args))
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/optimize.py:63: in __call__
fg = self.fun(x, *args)
../../anaconda/envs/skopt/lib/python3.5/site-packages/sklearn/gaussian_process/gpr.py:194: in obj_func
theta, eval_gradient=True)
../../anaconda/envs/skopt/lib/python3.5/site-packages/sklearn/gaussian_process/gpr.py:388: in log_marginal_likelihood
L = cholesky(K, lower=True) # Line 2
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/linalg/decomp_cholesky.py:81: in cholesky
check_finite=check_finite)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/linalg/decomp_cholesky.py:20: in _cholesky
a1 = asarray_chkfinite(a)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = array([[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan],
[ nan, na...nan, nan, nan],
[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan]])
dtype = None, order = None
def asarray_chkfinite(a, dtype=None, order=None):
"""Convert the input to an array, checking for NaNs or Infs.
Parameters
----------
a : array_like
Input data, in any form that can be converted to an array. This
includes lists, lists of tuples, tuples, tuples of tuples, tuples
of lists and ndarrays. Success requires no NaNs or Infs.
dtype : data-type, optional
By default, the data-type is inferred from the input data.
order : {'C', 'F'}, optional
Whether to use row-major (C-style) or
column-major (Fortran-style) memory representation.
Defaults to 'C'.
Returns
-------
out : ndarray
Array interpretation of `a`. No copy is performed if the input
is already an ndarray. If `a` is a subclass of ndarray, a base
class ndarray is returned.
Raises
------
ValueError
Raises ValueError if `a` contains NaN (Not a Number) or Inf (Infinity).
See Also
--------
asarray : Create and array.
asanyarray : Similar function which passes through subclasses.
ascontiguousarray : Convert input to a contiguous array.
asfarray : Convert input to a floating point ndarray.
asfortranarray : Convert input to an ndarray with column-major
memory order.
fromiter : Create an array from an iterator.
fromfunction : Construct an array by executing a function on grid
positions.
Examples
--------
Convert a list into an array. If all elements are finite
``asarray_chkfinite`` is identical to ``asarray``.
>>> a = [1, 2]
>>> np.asarray_chkfinite(a, dtype=float)
array([1., 2.])
Raises ValueError if array_like contains Nans or Infs.
>>> a = [1, 2, np.inf]
>>> try:
... np.asarray_chkfinite(a)
... except ValueError:
... print('ValueError')
...
ValueError
"""
a = asarray(a, dtype=dtype, order=order)
if a.dtype.char in typecodes['AllFloat'] and not np.isfinite(a).all():
raise ValueError(
> "array must not contain infs or NaNs")
E ValueError: array must not contain infs or NaNs
../../anaconda/envs/skopt/lib/python3.5/site-packages/numpy/lib/function_base.py:1022: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /Users/thead/anaconda/envs/skopt/lib/python3.5/site-packages/numpy/lib/function_base.py(1022)asarray_chkfinite()
-> "array must not contain infs or NaNs")
``` | 1medium
|
Title: wrong histograms
Body: Hi, I am having problems plotting histograms. I think there is a very good chance that it is not because of a bug in tensorboard-pytorch, but I'm not sure what I could be doing wrong, and I'm not sure where to ask, so if someone could help I would appreciate it.
I am trying to plot histograms of the gradients like this:
```
loss.backward()
for n, p in filter(lambda np: np[1].grad is not None, spectral_model.named_parameters()):
print(n, p.grad.data.min(), p.grad.data.max())
summary_writer.add_histogram(n, p.grad.data.cpu().numpy(), global_step=step)
```
The mins and maxes show that the values are all between -.15 and .15 (and in fact most values are much closer to zero than that). But the histograms seem to show that all the values are located at one extremely high value, like 3.01e+18:

| 1medium
|
Title: Value Error
Body: Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
ValueError: "zero-size array to reduction operation maximum which has no identity"
Traceback Error: "
File "UVR.py", line 6059, in process_start
File "separate.py", line 369, in seperate
File "lib_v5\spec_utils.py", line 125, in normalize
File "numpy\core\_methods.py", line 40, in _amax
"
Error Time Stamp [2023-09-06 18:57:06]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Voc FT
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
is_save_align: True
is_mdx_c_seg_def: True
is_invert_spec: False
is_deverb_vocals: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: MP3
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems | 1medium
|
Title: [BUG] DQ Anomaly Metrics should not be displayed when we do count aggregation on a categorical column
Body: ## Describe the bug
If we create a KPI with a categorical metric column and count as the aggregation, the DQ mean and max graphs are displayed empty and only the DQ count graph is displayed.
## Explain the environment
- **Chaos Genius version**: v0.3.0
## Expected behavior
DQ graphs should not be displayed. We can't take the mean/max of categorical values, and the DQ count graph will be the same as the overall KPI graph.
## Screenshots

| 1medium
|
Title: Unhandled Exception (26a2b144c)
Body: Autosploit version: `4.0`
OS information: `Linux-5.2.0-2parrot1-amd64-x86_64-with-Parrot-4.7-stable`
Running context: `autosploit.py`
Error mesage: ``hosts.txt` and `/home/arc/AutoSploit/hosts.txt` are the same file`
Error traceback:
```
Traceback (most recent call):
File "/home/arc/AutoSploit/lib/term/terminal.py", line 644, in terminal_main_display
self.do_load_custom_hosts(choice_data_list[-1])
File "/home/arc/AutoSploit/lib/term/terminal.py", line 456, in do_load_custom_hosts
shutil.copy(file_path, lib.settings.HOST_FILE)
File "/usr/lib/python2.7/shutil.py", line 139, in copy
copyfile(src, dst)
File "/usr/lib/python2.7/shutil.py", line 83, in copyfile
raise Error("`%s` and `%s` are the same file" % (src, dst))
Error: `hosts.txt` and `/home/arc/AutoSploit/hosts.txt` are the same file
```
Metasploit launched: `False`
| 1medium
|
Title: [Bug]: failed to fetch;failed to connect 54323,but 5050 report status ok
Body: ### What happened?
A bug happened!
At http://localhost:3000/login, after I entered my account and password, the web page reported "failed to fetch". I checked [5050](http://localhost:5050/) and it said {"status":"OK"}, but I failed to connect to http://localhost:54323/; it said "Unable to access this website. Localhost refused our connection request."

### Relevant log output
```bash
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
worker | raise mapped_exc(message) from exc
worker | httpx.ConnectError: [Errno 111] Connection refused
backend-core | INFO: 172.26.0.1:45280 - "GET /user HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45282 - "GET /user/identity HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45282 - "GET /user HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45280 - "GET /user/identity HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45280 - "GET /user HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45282 - "GET /user/identity HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45282 - "GET /user HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45280 - "GET /user/identity HTTP/1.1" 403 Forbidden
backend-core | INFO: 127.0.0.1:57488 - "GET /healthz HTTP/1.1" 200 OK
backend-core | INFO: 127.0.0.1:41430 - "GET /healthz HTTP/1.1" 200 OK
```
### Twitter / LinkedIn details
_No response_ | 1medium
|
Title: Wrong hompage in package description
Body: ### What version of GlobaLeaks are you using?
4.13.12
### What browser(s) are you seeing the problem on?
Other
### What operating system(s) are you seeing the problem on?
macOS
### Describe the issue
The shell command 'dpkg -s globaleaks' presents the information of the installed package.
Currently it shows the following on Ubuntu 22.04:
Package: globaleaks
Status: install ok installed
Priority: optional
Section: web
Installed-Size: 87176
Maintainer: Giovanni Pellerano <[email protected]>
Architecture: all
Version: 4.13.12
Depends: python3:any, adduser, apparmor, apparmor-utils, gnupg, iptables, lsb-base, python3-acme, python3-debian, python3-cryptography, python3-h2, python3-nacl, python3-openssl, python3-gnupg, python3-priority, python3-pyotp, python3-sqlalchemy, python3-twisted, python3-txtorcon, tor
Conffiles:
/etc/apparmor.d/usr.bin.globaleaks 42cc8bb81a4ff0706a6e7635b8cd5e56
/etc/default/globaleaks 753092d375c0453441385ff18f364856
/etc/init.d/globaleaks 436f0388680721cfe13dbfd069ce9f41
Description: Free and open-source whistleblowing software
GlobaLeaks is free, open source software enabling anyone to easily set up and
maintain a secure whistleblowing platform
Homepage: **https://www.globleaks.org/**
### Proposed solution
As a minor issue I like to recommend to correct the Homepage from www.globleaks.org to www.globaleaks.org | 0easy
|
Title: [FEATURE] Add Microsoft markdown files
Body: ### Description
Add updated code of conduct and security markdown files
### Expected behavior with the suggested feature
CODE_OF_CONDUCT.md matching: https://github.com/microsoft/repo-templates/blob/main/shared/CODE_OF_CONDUCT.md
SECURITY.md matching https://github.com/microsoft/repo-templates/blob/main/shared/SECURITY.md
### Other Comments
| 1medium
|
Title: Custom words are recognized when added via CustomDictionary, but not when loaded from my own dictionary file
Body: <!--
The checklist and version number are required; issues without them will not be answered. If you would like a quick reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
 - [Home page documentation](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer.
* I understand that the open-source community is a voluntary community built around shared interests and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I put an x inside these brackets to confirm that the items above are done.
## Version
<!-- For release versions, state the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is:
The version I am using is:
<!-- The items above are required; feel free to elaborate below -->
## My question
<!-- Please describe the problem in detail; the more detail you give, the more likely it is to be solved -->
## Reproducing the problem
<!-- What did you do that caused the problem? For example, did you modify the code? Modify a dictionary or model? -->
### Steps
1. First...
2. Then...
3. Next...
### Triggering code
```
public void testIssue1234() throws Exception
{
    CustomDictionary.add("用户词语");
    System.out.println(StandardTokenizer.segment("触发问题的句子"));
}
```
### Expected output
<!-- What correct result do you expect? -->
```
Expected output
```
### Actual output
<!-- What did HanLP actually output? What happened? Where is it wrong? -->
```
Actual output
```
## Other information
<!-- Any potentially useful information, including screenshots, logs, configuration files, related issues, etc. -->
| 1medium
|
Title: Train my own recognition model
Body: I want to use english_g2.pth to train my own recognition model; is there a tutorial for this? The deep-text-recognition-benchmark model looks like it is about 200 MB, which is a little big for me. Thanks. | 1medium
|
Title: Train model with more than 1 image per person
Body: * face_recognition version: 1.2.3
* Python version: 2.7.15
* Operating System: Windows 10
### Description
I would like to train the model with more than one image per person to achieve better recognition results. Is it possible?
One more question is what does [0] mean here:
```
known_face_encoding_user = face_recognition.face_encodings('image.jpg')[0]
```
If I put [1] here I receive "IndexError: list index out of range" error.
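For what it's worth: `face_encodings` expects an image array loaded with `load_image_file`, not a file path string, and the `[0]` simply picks the encoding of the first face detected in that image (using `[1]` fails when only one face was found). A minimal sketch of keeping several encodings per person, with hypothetical file names:

```python
import face_recognition

# Encode every photo of the same person and keep all of the encodings.
alice_photos = ["alice_1.jpg", "alice_2.jpg", "alice_3.jpg"]  # hypothetical paths
known_encodings = []
for path in alice_photos:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)  # one entry per detected face
    if encodings:  # skip photos where no face was detected
        known_encodings.append(encodings[0])

# Compare an unknown photo against all of the known encodings.
unknown = face_recognition.load_image_file("unknown.jpg")
unknown_encodings = face_recognition.face_encodings(unknown)
if unknown_encodings:
    matches = face_recognition.compare_faces(known_encodings, unknown_encodings[0])
    print("Match:", any(matches))
```

Accepting a match if any of the stored encodings is close enough is usually more robust than relying on a single photo per person.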
| 1medium
|
Title: Link to "Edit in Github" still broken on project homepage
Body: This is a continuation of issue #146. The link seems to still be broken.
# Steps to Repro:
- Go to the project homepage http://drivendata.github.io/cookiecutter-data-science/
- In the top right there is a button "Edit on Github" that links to this page: https://github.com/drivendata/cookiecutter-data-science/edit/master/docs/index.md
- Click on that link
# What I got
The link sends me to a 404 "not found" error page on github.
# What I wanted
What I expected was it would send me to some page on GitHub.
# Possible fix
I imagine the docs may be built from the gh-pages branch and not the master branch - if that's the case we would need to edit [this line specifically](https://github.com/drivendata/cookiecutter-data-science/blob/9e01bf8d09c6dd65f435acc50444971b771ebfe4/index.html#L74) on the gh-pages branch.
| 1medium
|
Title: [BUG] NameError: name 'pq' is not defined if pyarrow is not installed
Body: <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
```python
mars/services/lifecycle/api/oscar.py:19: in <module>
from ..supervisor.tracker import LifecycleTrackerActor
mars/services/lifecycle/supervisor/__init__.py:15: in <module>
from .service import LifecycleSupervisorService
mars/services/lifecycle/supervisor/service.py:17: in <module>
from .tracker import LifecycleTrackerActor
mars/services/lifecycle/supervisor/tracker.py:21: in <module>
from ...meta.api import MetaAPI
mars/services/meta/__init__.py:15: in <module>
from .api import AbstractMetaAPI, MetaAPI, MockMetaAPI, WebMetaAPI
mars/services/meta/api/__init__.py:16: in <module>
from .oscar import MetaAPI, MockMetaAPI
mars/services/meta/api/oscar.py:21: in <module>
from ....dataframe.core import (
mars/dataframe/__init__.py:33: in <module>
from .datasource.read_parquet import read_parquet
mars/dataframe/datasource/read_parquet.py:98: in <module>
class ParquetEngine:
mars/dataframe/datasource/read_parquet.py:122: in ParquetEngine
use_arrow_dtype=None,
E NameError: name 'pq' is not defined
```
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version 3.7.7
2. The version of Mars you use latest master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| 1medium
|
Title: fix dfs warnings in get_recommended_primitives
Body: in `_recommend_non_numeric_primitives` we make a call to dfs to generate features for all valid primitives. That list of valid primitives usually includes many numeric primitives that don't get used and cause an UnusedPrimitive warning. | 1medium
|
Title: Can not add graph for dataparallel model
Body: Hi there, I get a `KeyError: '322'` when I try to `add_graph` for a data-parallel model on multiple GPUs. Here is a mini-example which reproduces the error:
So what should I do for this error?
```
import torch
import torchvision.models as models
from tensorboardX import SummaryWriter
device = 'cuda'
net = torch.nn.DataParallel(models.__dict__['resnet50']().to(device))
dump_input = torch.rand((10, 3, 224, 224), device=device)
SummaryWriter('./tmp').add_graph(net, dump_input, verbose=False)
```
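Not an official fix, but a workaround that is often suggested for `add_graph` failures like this is to trace the wrapped module instead of the `DataParallel` container (graph export then runs on a single GPU, which is fine for visualization). A hedged sketch based on the mini-example above:

```python
import torch
import torchvision.models as models
from tensorboardX import SummaryWriter

device = 'cuda'
net = torch.nn.DataParallel(models.__dict__['resnet50']().to(device))
dump_input = torch.rand((10, 3, 224, 224), device=device)

# Trace the underlying model rather than the DataParallel wrapper.
SummaryWriter('./tmp').add_graph(net.module, dump_input, verbose=False)
```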
| 1medium
|
Title: [Bugs] RuntimeError: No CUDA GPUs are available in transformers v4.48.0 or above when running Ray RLHF example
Body: ### System Info
- `transformers` version: 4.48.0
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Yes
- Using GPU in script?: Yes
- GPU type: NVIDIA A800-SXM4-80GB
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi all!
I failed to run the vLLM project RLHF example script. The code is exactly the same as on the vLLM docs page: https://docs.vllm.ai/en/latest/getting_started/examples/rlhf.html
The error messages are:
```
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] Error executing method 'init_device'. This might cause deadlock in distributed execution.
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] Traceback (most recent call last):
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] return run_method(target, method, args, kwargs)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 2220, in run_method
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] return func(*args, **kwargs)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker.py", line 155, in init_device
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] torch.cuda.set_device(self.device)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 478, in set_device
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] torch._C._cuda_setDevice(device)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] torch._C._cuda_init()
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] RuntimeError: No CUDA GPUs are available
(MyLLM pid=70946) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::MyLLM.__init__() (pid=70946, ip=11.163.37.230, actor_id=202b48118215566c51057a0101000000, repr=<test_ray_vllm_rlhf.MyLLM object at 0x7fb7453669b0>)
(MyLLM pid=70946) File "/data/cfs/workspace/test_ray_vllm_rlhf.py", line 96, in __init__
(MyLLM pid=70946) super().__init__(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 1051, in inner
(MyLLM pid=70946) return fn(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 242, in __init__
(MyLLM pid=70946) self.llm_engine = self.engine_class.from_engine_args(
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 484, in from_engine_args
(MyLLM pid=70946) engine = cls(
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 273, in __init__
(MyLLM pid=70946) self.model_executor = executor_class(vllm_config=vllm_config, )
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 262, in __init__
(MyLLM pid=70946) super().__init__(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 51, in __init__
(MyLLM pid=70946) self._init_executor()
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 90, in _init_executor
(MyLLM pid=70946) self._init_workers_ray(placement_group)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 355, in _init_workers_ray
(MyLLM pid=70946) self._run_workers("init_device")
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 476, in _run_workers
(MyLLM pid=70946) self.driver_worker.execute_method(sent_method, *args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 575, in execute_method
(MyLLM pid=70946) raise e
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
(MyLLM pid=70946) return run_method(target, method, args, kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 2220, in run_method
(MyLLM pid=70946) return func(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker.py", line 155, in init_device
(MyLLM pid=70946) torch.cuda.set_device(self.device)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 478, in set_device
(MyLLM pid=70946) torch._C._cuda_setDevice(device)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
(MyLLM pid=70946) torch._C._cuda_init()
(MyLLM pid=70946) RuntimeError: No CUDA GPUs are available
```
I found that with transformers==4.47.1 the script runs normally. However, when I tried transformers==4.48.0, 4.48.1, and 4.49.0, I got the error messages above. I then checked the pip environments with `pip list` and found that only the transformers versions differ.
I've tried changing the vllm version between 0.7.0 and 0.7.2; the behavior is the same.
Related Ray issues:
* https://github.com/vllm-project/vllm/issues/13597
* https://github.com/vllm-project/vllm/issues/13230
### Expected behavior
The script runs normally. | 2hard
|
Title: AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
Body: When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised:
```Python traceback
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
The error traceback:
```Python traceback
~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self)
1391 # `IterableDataset` automatically fills missing columns with None.
1392 # This is done with `_apply_feature_types_on_example`.
-> 1393 example = _apply_feature_types_on_example(
1394 example, self.features, token_per_repo_id=self._token_per_repo_id
1395 )
~/huggingface/datasets/src/datasets/iterable_dataset.py in _apply_feature_types_on_example(example, features, token_per_repo_id)
1080 encoded_example = features.encode_example(example)
1081 # Decode example for Audio feature, e.g.
-> 1082 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
1083 return decoded_example
1084
~/huggingface/datasets/src/datasets/features/features.py in decode_example(self, example, token_per_repo_id)
1974
-> 1975 return {
1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1977 if self._column_requires_decoding[column_name]
~/huggingface/datasets/src/datasets/features/features.py in <dictcomp>(.0)
1974
1975 return {
-> 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1977 if self._column_requires_decoding[column_name]
1978 else value
~/huggingface/datasets/src/datasets/features/features.py in decode_nested_example(schema, obj, token_per_repo_id)
1339 # we pass the token to read and decode files from private repositories in streaming mode
1340 if obj is not None and schema.decode:
-> 1341 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1342 return obj
1343
~/huggingface/datasets/src/datasets/features/image.py in decode_example(self, value, token_per_repo_id)
187 image = PIL.Image.open(BytesIO(bytes_))
188 image.load() # to avoid "Too many open files" errors
--> 189 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None:
190 image = PIL.ImageOps.exif_transpose(image)
191 if self.mode and self.mode != image.mode:
~/huggingface/datasets/venv/lib/python3.9/site-packages/PIL/Image.py in __getattr__(name)
75 )
76 return categories[name]
---> 77 raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
78
79
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
### Environment info
Since datasets 2.19.0 | 1medium
|
Title: unable to like
Body: i get the unable to load media error( for like 4.5 posts ) and after that it gets a media post and
After getting the post when is time to click the like button the app stops
Traceback (most recent call last):
File "C:\Users\ribei\Downloads\Reddit\insta.py", line 32, in
session.like_by_tags(["cats"], amount=10)
File "C:\Users\ribei\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\instapy.py", line 1957, in like_by_tags
inappropriate, user_name, is_video, reason, scope = check_link(
File "C:\Users\ribei\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\like_util.py", line 633, in check_link
media = post_page[0]["shortcode_media"]
KeyError: 0
[Finished in 207.3s] #6230 | 1medium
|
Title: A question about document tokenization
Body: Hi, I found a very interesting result when tokenizing a document. The example code is:
```
import spacy
nlp = spacy.load("en_core_web_sm")
# doc = nlp("Apple is looking at. startup for $1 billion.")
# for token in doc:
# print(token.text, token.pos_, token.dep_)
# Example text
text = '''Panel C: Gene Associations in LUAD and NATs
In LUAD tumors, ZNF71 is associated with JUN, SAMHD1, RNASEL, IFNGR1, IKKB, and EIF2A.
In non-cancerous adjacent tissues (NATs), the associated genes are OAS1, MP3K7, and IFNAR2.'''
# Process the text
doc = nlp(text)
out_sen = []
# Iterate over the sentences
for sent in doc.sents:
if len(sent) != 0:
print(sent.text)
out_sen.append(sent)
```
The resulting out_sen has length 1, and the whole text is treated as a single sentence. Is this a bug or the default behavior? Thanks.
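This is most likely not a bug: the small English pipeline derives sentence boundaries from the parser, and a newline without sentence-final punctuation (the "Panel C: ..." heading line) usually doesn't trigger a break. A minimal, unofficial sketch of forcing sentence starts after newline tokens, assuming the parser respects boundaries that are already set:

```python
import spacy
from spacy.language import Language

@Language.component("newline_sentence_breaks")
def newline_sentence_breaks(doc):
    # Mark the token that follows any newline token as a sentence start.
    for token in doc[:-1]:
        if "\n" in token.text:
            doc[token.i + 1].is_sent_start = True
    return doc

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("newline_sentence_breaks", before="parser")

doc = nlp("Panel C: Gene Associations in LUAD and NATs\nIn LUAD tumors, ZNF71 is associated with JUN.")
print([sent.text for sent in doc.sents])
```

Splitting the text on newlines before calling `nlp` also sidesteps the issue entirely.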
The spacy version is 3.7.6 | 3misc
|
Title: KL Loss.
Body: Hi, I just noticed that the KL Loss in the VAE paper would look like this:
0.5 * torch.sum(torch.exp(logVar) + mean ** 2 - 1. - logVar)
And here, the KL Loss is:
torch.mean(0.5 * torch.sum(torch.exp(logVar) + mean ** 2 - 1. - logVar, 1))
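For context (my own reading, not a statement from the repository): both snippets compute the same per-sample quantity. For a diagonal Gaussian posterior, the KL divergence against a standard normal is

```latex
D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu, \operatorname{diag}(\sigma^2)) \,\|\, \mathcal{N}(0, I)\right)
  = \tfrac{1}{2} \sum_{j} \left( \exp(\log\sigma_j^2) + \mu_j^2 - 1 - \log\sigma_j^2 \right)
```

The first expression sums this over every sample in the batch, while the second sums over the latent dimension only (`dim=1`) and then averages over the batch, so the two differ by a factor of the batch size. Which one is appropriate mainly depends on whether the reconstruction term is summed or batch-averaged, so that the two losses stay on the same scale.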
What's your thought in this? | 1medium
|
Title: UTF-8 – CP1252 encoding issue in exported HTML report
Body: <!--
**NOTE** Please use this template to open issues, bugs, etc., only.
See our [GitHub Discussions Board](https://github.com/datapane/datapane/discussions) to discuss feature requests, general support, ideas, and to chat with the community.
-->
### System Information
<!-- Please fill this out to help us understand the bug/issue -->
- OS: Windows 10
- Python version: 3.8.10
- Python environment: conda
- Using jupyter: true
- Datapane version: 0.11.11
### Bug / Issue
When displaying a pandas dataframe in DataPane as a Table (not DataTable, which does work correctly), euro sign characters (€) display as â¬:

This doesn't happen inside JupyterLab, or when exporting the original dataframe to html using `df.to_html()`. I am calling `report.save()` rather than `upload` as I want to generate local html reports.
In #9 you mention it could be an issue with Windows' default encoding not being UTF-8, are there any steps I should take to fix this?
Thank you! | 1medium
|
Title: Is it possible to implement custom email validation in AccountAdapter instead of overriding RegisterSerializer.validate_email?
Body: Hi,
I'm writing a multitenant app and wanted to use AllAuth.
However, it [does not have an option to replace `EmailAddress`](https://github.com/pennersr/django-allauth/issues/2450)
I also found this issue pennersr/django-allauth/issues/976 that allows implementing custom logic to validate email uniqueness in `AccountAdapter`, merged in pennersr/django-allauth/pull/1407.
The change in PR updates `BaseSignupForm.clean_email` method. If I understand correctly `dj-rest-auth` `RegisterSerializer` is modelled after `BaseSignupForm`.
Would it be possible to do the same in `RegisterSerializer` so I do not have to provide my own and can keep the changes only in AccountAdapter?
| 2hard
|
Title: Package Error: python-dateutil
Body: Would you please support the newest version of python-dateutil?
```
ERROR: zappa 0.48.2 has requirement python-dateutil<2.7.0,>=2.6.1, but you'll have python-dateutil 2.8.0 which is incompatible.
``` | 1medium
|
Title: [BUG] Some parts of the UI are light in dark theme
Body: **Describe the bug**
Some parts of the UI are light in dark theme. Some of those parts are not readable because the text and the background are white.
**To Reproduce**
Some searches



**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- 0.7.3
| 1medium
|
Title: Running locally almost halts the computer
Body: The first time I ran this locally, my computer slowed to a crawl after Python used up 100% of my RAM and CPU. I waited 20 minutes, and ended the task. Is there some setting to ensure it doesn't consume so many resources?
My PC is a year old, and pretty decent:
**Operating System**
Windows 10 Pro 64-bit Version 21H2 (OS Build 19044.1706)
**CPU**
Intel Core i9 10900K @ 3.70GHz, 3696 Mhz, 10 Core(s), 20 Logical Processor(s)
**RAM**
Corsair Vengeance RGB Pro 64 GB (2 x 32 GB) DDR4-3200 CL16
**Motherboard**
Gigabyte Z590 AORUS MASTER (U3E1)
**Graphics**
LG ULTRAWIDE (3840x1600@60Hz)
Intel UHD Graphics 630 (Gigabyte)
2047MB NVIDIA GeForce RTX 3080
| 2hard
|
Title: S3 credentials in links missing for private buckets after upgrade to Django 5.1
Body: Hi @jschneier!
I found an issue with Django 5.1. After the upgrade (using django-storages 1.14.4), all GET-parameter credentials for private S3 buckets are not being added to the links in my templates anymore. This means that I don't have access to the files.
I have no idea and since I have little knowledge on how this amazing package works, I can't really contribute any suggestions. 😔 I just verified that it's really Django 5.1.
Here's my setup, it might help in reconstructing the case.
```python
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage
class PrivateMediaStorage(S3Boto3Storage):
location = settings.AWS_PRIVATE_MEDIA_LOCATION
querystring_expire = 3600 # seconds until the generated link expires
default_acl = "bucket-owner-full-control"
file_overwrite = False
custom_domain = False
```
Thanks so much for looking into this!
Ronny
PS: It seens unrelated to https://github.com/jschneier/django-storages/issues/1437 since there, Django 4.2 was used. | 2hard
|
Title: 400 Error?
Body: Hi, I'm trying to spin up a simple socket connection w/ React + Flask... I'm unfortunately getting a 400 error... any thoughts around why this is? Happy to answer any questions around configs.
<img width="498" alt="Screen Shot 2019-08-27 at 2 44 14 PM" src="https://user-images.githubusercontent.com/11385142/63776473-620d2d80-c8d9-11e9-8c77-3e3f09791028.png">
<img width="448" alt="Screen Shot 2019-08-27 at 2 44 27 PM" src="https://user-images.githubusercontent.com/11385142/63776475-620d2d80-c8d9-11e9-8e7a-8f45cfd7410b.png">
<img width="784" alt="Screen Shot 2019-08-27 at 2 44 32 PM" src="https://user-images.githubusercontent.com/11385142/63776476-62a5c400-c8d9-11e9-8168-e4bb9b4b3b39.png">
<img width="791" alt="Screen Shot 2019-08-27 at 2 44 38 PM" src="https://user-images.githubusercontent.com/11385142/63776477-62a5c400-c8d9-11e9-8adb-63e0b445aaa8.png">
<img width="596" alt="Screen Shot 2019-08-27 at 2 44 45 PM" src="https://user-images.githubusercontent.com/11385142/63776479-62a5c400-c8d9-11e9-8d97-2ab361c1acd9.png">
| 1medium
|
Title: After import tushare, logging no longer prints any logs
Body: Python 3.8.16; the following prints the log correctly:
```
import logging
# import tushare
logging.basicConfig(level=logging.INFO)
logging.info("a info log")
```
As soon as the `import tushare` line is uncommented, logging no longer prints any logs.
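A likely cause (my guess, not confirmed against the tushare source): something imported by tushare configures the root logger or attaches a handler at import time, and `logging.basicConfig()` silently does nothing once the root logger already has handlers. One possible workaround on Python 3.8+ is to force reconfiguration:

```python
import logging
import tushare  # may attach handlers to the root logger during import

# force=True removes any existing root handlers before applying this configuration.
logging.basicConfig(level=logging.INFO, force=True)
logging.info("a info log")
```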
Could someone help me figure out how to solve this? | 1medium
|
Title: How to access Params from Post-Create Hook?
Body: #### Description
I am using this package with SQLAlchemy models. My goal is to use a factory to create a model instance, along with a number of associated model instances. Both specific and generic associations, like these two use cases:
```py
# use two new gyms:
UserFactory(gyms_count=2)
# use specified gym(s)
gym = GymFactory()
UserFactory(gyms=[gym])
```
I am able to use a `post_generation` hook to create the related objects, but I'm having issues accessing one of the params when doing so. Or maybe I'm misunderstanding how the params work.
#### To Reproduce
##### Model / Factory code
Models (many to many association: user has many gyms, and vice versa):
```python
class Gym(db.Model):
__tablename__ = "gyms"
id = db.Column(db.Integer, primary_key=True, index=True)
title = db.Column(db.String, nullable=False) #> "My Gym"
memberships = db.relationship("Membership", back_populates="gym")
class User(db.Model):
__tablename__ = "users"
id = db.Column(db.Integer, primary_key=True, index=True)
email = db.Column(db.String, index=True, nullable=False, unique=True)
memberships = db.relationship("Membership", back_populates="user")
class Membership(db.Model, MyModel):
__tablename__ = "memberships"
id = db.Column(db.Integer, primary_key=True, index=True)
gym_id = db.Column(db.Integer, db.ForeignKey("gyms.id"), nullable=False, index=True)
user_id = db.Column(db.Integer, db.ForeignKey("users.id"), nullable=False, index=True)
gym = db.relationship("Gym", back_populates="memberships", uselist=False)
user = db.relationship("User", back_populates="memberships", uselist=False)
```
Factories:
```python
class BaseFactory(SQLAlchemyModelFactory):
class Meta(object):
sqlalchemy_session = db.session
class UserFactory(BaseFactory):
class Meta:
model = User
id = Sequence(lambda n: n+1)
email = Sequence(lambda n: f"u{n+1}@example.com")
class Params:
gyms_count = 0
@post_generation
def gyms(obj, create, extracted, **kwargs):
if not create:
# Simple build, do nothing.
return
if extracted:
for gym in extracted:
Membership.find_or_create(gym_id=gym.id, user_id=obj.id)
gyms_count = kwargs.get("gyms_count") or 0
#gyms_count = obj.gyms_count
# I tried `kwargs.get("gyms_count")` and `obj.gyms_count` but neither was successful.
# how to get the gyms count param here?
for _ in range(0, gyms_count):
gym = GymFactory()
Membership.find_or_create(gym_id=gym.id, user_id=obj.id)
```
##### The issue
How to access the `gyms_count` param from within the post_generation hook?
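Not an authoritative answer, but as far as I understand, `Params` values are consumed while the declarations are resolved and are not attached to the built object, so they aren't visible inside the hook. What the hook does receive in `**kwargs` is anything passed to the factory with the hook's name as a double-underscore prefix, e.g. `UserFactory(gyms__count=2)`. A hedged sketch reusing the model and factory classes from above:

```python
from factory import Sequence, post_generation

class UserFactory(BaseFactory):
    class Meta:
        model = User

    id = Sequence(lambda n: n + 1)
    email = Sequence(lambda n: f"u{n + 1}@example.com")

    @post_generation
    def gyms(obj, create, extracted, **kwargs):
        if not create:
            return
        if extracted:
            for gym in extracted:
                Membership.find_or_create(gym_id=gym.id, user_id=obj.id)
            return
        # Anything passed as `gyms__<name>` shows up here,
        # e.g. UserFactory(gyms__count=2) -> kwargs == {"count": 2}.
        for _ in range(kwargs.get("count", 0)):
            gym = GymFactory()
            Membership.find_or_create(gym_id=gym.id, user_id=obj.id)

# Usage (hypothetical):
# UserFactory(gyms__count=2)           # two brand-new gyms
# UserFactory(gyms=[GymFactory()])     # one specific, pre-built gym
```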
| 1medium
|
Title: Challenges with LLMs Not Respecting Provided Fields in JSON Outputs
Body: When utilizing Large Language Models to extract data from documents such as invoices and generate structured outputs like JSON files, a common issue arises: the LLM does not always adhere strictly to the provided fields and sometimes invents new ones. This behavior poses significant challenges for applications that require exact data formats for database integration and other automated processes. | 1medium
|
Title: Batch generator hangs with multithread
Body: When using alongside the keras `ImageDataGenerator` with `multithreading=True`, the process hangs on `recv()`. Switching the number of workers to 0 (i.e. no multithread) works as expected.
> Versions:
imgaug: 0.2.5 (from master 13May18)
python: 3.5
keras: 2.1.3
I saw the last merged PR #126 tacking in the same direction, and have confirmed that the installed version has it, but problem persists.
| 1medium
|
Title: Stooq: futures, indices, cash, currency, bond yield tickers don't feed
Body: Hello,
I'm trying to scrape multiple historical quotes from Stooq. Equities and indicies work well, while futures, indices, cash, currency, bond yield don't feed. Shall I type the tickers somehow differently?
Below is the code example with the tickers that don't work for me.
```py
now = datetime.now().date()
stooq_tickers = ['PLN_I', 'DX.F', 'FX.C', 'U4.F', 'USDAUD', '10CNY.B', 'UKOUSD6M']
stooqdf = dr.get_data_stooq(stooq_tickers, start='2016-01-01', end=now)
```
Also is there a way to feed economic data, for example 'PMMNCN.M' or 'IMPRCN.M'?
Thank you in advance for any help! | 1medium
|
Title: ImportError: dynamic module does not define module export function (PyInit__pydantic_core)
Body: Hello, I am trying to import openai in Visual Studio Code and get an "ImportError: dynamic module does not define module export function (PyInit__pydantic_core)" error. I really don't have any idea how to resolve it. My pydantic version is 2.5.3, my pydantic_core version is 2.15.0, and my Python version is 3.11.7. I would appreciate any help.
<img width="532" alt="Capture" src="https://github.com/pydantic/pydantic-core/assets/156265022/2d8c7f12-3897-433f-9cb1-b2892db3e0b9">
| 1medium
|
Title: example rand_exit_choice_nb can not run
Body: Hi,
I find an example can not run:
https://github.com/polakowo/vectorbt/blob/master/vectorbt/signals/factory.py#L316
```python
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\signals\factory.py", line 61, in __init__
IndicatorFactory.__init__(
TypeError: __init__() got an unexpected keyword argument 'in_output_settings'
```
```python
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\signals\factory.py", line 61, in __init__
IndicatorFactory.__init__(
TypeError: __init__() got an unexpected keyword argument 'param_settings'
```
```python
Traceback (most recent call last):
File "D:/test_vectorbt/demo_stop3.py", line 66, in <module>
my_sig.rand_type_readable
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\indicators\factory.py", line 1181, in attr_readable
return getattr(_self, attr_name).applymap(lambda x: '' if x == -1 else enum._fields[x])
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\frame.py", line 6944, in applymap
return self.apply(infer)
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\frame.py", line 6878, in apply
return op.get_result()
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\apply.py", line 186, in get_result
return self.apply_standard()
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\apply.py", line 313, in apply_standard
results, res_index = self.apply_series_generator()
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\apply.py", line 341, in apply_series_generator
results[i] = self.f(v)
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\frame.py", line 6942, in infer
return lib.map_infer(x.astype(object).values, func)
File "pandas\_libs\lib.pyx", line 2329, in pandas._libs.lib.map_infer
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\indicators\factory.py", line 1181, in <lambda>
return getattr(_self, attr_name).applymap(lambda x: '' if x == -1 else enum._fields[x])
TypeError: tuple indices must be integers or slices, not float
```
| 1medium
|
Title: unhandled exception in script -- when converting .py to .exe on Windows
Body: Traceback (most recent call last):
File "labelImg.py", line 18, in <module>
ModuleNotFoundError: No module named 'PyQt5'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "labelImg.py", line 27, in <module>
ModuleNotFoundError: No module named 'sip'
| 1medium
|
Title: Support Narwhals DataFrames including DuckDB relation and Dask.
Body: In https://github.com/panel-extensions/panel-graphic-walker/pull/22 I will add support for more data sources to `panel-graphic-walker`. I add general DataFrame support via [Narwhals](https://github.com/narwhals-dev/narwhals) because it's what we are going to do for param, panel, and the rest of the HoloViz ecosystem, I believe.
With the PR above we will end up supporting

It would be very nice to have:
- DuckDB Relation and Dask support in pygwalker.
- General support for any Narwhals DataFrame type
- The pygwalker database `Connector` being a support Narwhals DataFrame type. [Context](https://github.com/narwhals-dev/narwhals/issues/1289).
| 1medium
|
Title: Docker image - arm64 please?
Body: Hi - longtime user of this project on a raspi. Recently jumped to using docker and am reinstalling everything in containers.
Discovered tonight that while the repository works well on raspbian, some of the dependent libraries have platform specificity. Since the image on docker hub is tagged as amd64, it pulls the wrong dependencies for arm64...
Any chance you could publish another tag for arm64 please? | 1medium
|
Title: Simple instructions for a self referential table
Body: ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class Node(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
text: str
parent_id: Optional[int] = Field(foreign_key="node.id")
# parent: Optional["Node"] not sure what to put here
# children: List[Node] not sure what to put here either :)
```
### Description
I am trying to create a simple self-referential model, using the SQLModel equivalent of the adjacency list pattern described here: https://docs.sqlalchemy.org/en/14/orm/self_referential.html
I am only a little familiar with SQLAlchemy but was unable to translate their example into one that would work with SQLModel.
In your docs you said: "Based on SQLAlchemy [...] SQLModel is designed to satisfy the most common use cases and to be as simple and convenient as possible for those cases, providing the best developer experience". I was assuming that a self referential model would be a fairly common use case but totally appreciate that I could be wrong on this :)
I see that there is an `sa_relationship` param that you can pass 'complicated stuff' to, but I was not sure whether I should be using that (or how I would do so if I was meant to) - sorry, just a bit too new to this.
Crossing my fingers that it is straight forward to complete the commented lines in my example.
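Not an official answer, but the usual way to express the adjacency-list pattern in SQLModel is to pass SQLAlchemy's `remote_side` through `sa_relationship_kwargs`; treat the exact `remote_side` value below as an assumption on my part:

```python
from typing import List, Optional
from sqlmodel import Field, Relationship, SQLModel

class Node(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    text: str
    parent_id: Optional[int] = Field(default=None, foreign_key="node.id")

    # Many nodes can point at one parent; remote_side tells SQLAlchemy
    # which side of the self-join is the "one" side.
    parent: Optional["Node"] = Relationship(
        back_populates="children",
        sa_relationship_kwargs={"remote_side": "Node.id"},
    )
    # One node can have many children.
    children: List["Node"] = Relationship(back_populates="parent")
```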
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.7
### Additional Context
_No response_ | 1medium
|
Title: DocBin.to_bytes fails with a "ValueError: bytes object is too large" Spacy v 3.7.2
Body: <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
I am trying to train a model from scratch (NER) on a corpus which has around 8 million sentences. After adding the data to a DocBin(), I am unable to save it and get the error shown below.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: WIN 10 (64bit)
* Python Version Used: 3.9.0
* spaCy Version Used: 3.7.2
* Environment Information:
I have tried to split the data into multiple DocBin chunks and then merge and save them as a single file, but I get the same issue.
```
import os
import spacy
from spacy.tokens import DocBin
from tqdm import tqdm
from spacy.util import filter_spans
merged_doc_bin = DocBin()
fiels = [
"G:\\success-demo\\product_ner\\test\\train3.spacy",# 3000000 tokens here
"G:\\success-demo\\product_ner\\test\\train1.spacy", # 3000000 tokens here
"G:\\success-demo\\product_ner\\test\\train2.spacy", # 2000000 tokens here
]
for filename in fiels:
doc_bin = DocBin().from_disk(filename)
merged_doc_bin.merge(doc_bin)
merged_doc_bin.to_disk("G:\\success-demo\\product_ner\\test\\final\\murge.spacy")
```


| 2hard
|
Title: Why we record tutorial videos - ApacheCN
Body: http://ailearning.apachecn.org/why-to-record-study-ml-video/
ApacheCN, an open-source organization focused on maintaining excellent projects | 3misc
|
Title: Problem importing AutoencoderKLWan, WanPipeline
Body: `from diffusers import AutoencoderKLWan, WanPipeline` raises: from diffusers import AutoencoderKLWan, WanPipeline
ImportError: cannot import name 'AutoencoderKLWan' from 'diffusers' (E:\PyCharm2023.3.2\python\pythonProject\.venv\Lib\site-packages\diffusers\__init__.py). Did you mean: 'AutoencoderKL'?
I asked on modelscope.cn and was told to install diffusers from source. I installed diffusers from source up to version 0.33.0, but it still says that AutoencoderKLWan and WanPipeline cannot be imported. What could be the problem? @tastelikefeet @wangxingjun778
| 1medium
|
Title: Error during face_encodings - code: 7, reason: A call to cuDNN failed
Body: * face_recognition version: v1.2.2
* Python version: 3.6.7
* Operating System: Ubuntu 18.04 (Jetson Nano)
### Description
Following the Jetson Nano instructions and incorporating the ZED camera feed by replacing cv2.VideoCapture with cam.retrieve_image (https://www.stereolabs.com/docs/opencv-python/#capturing-video), I get a crash during face_encoding.
I've verified that the image is converted to RGB and scaled down to 1/4 original size, yet still get this crash every time. When I try the original example using the ZED camera as a Universal Video Camera (UVC) [https://www.stereolabs.com/docs/opencv-python/#uvc-capture] there are no issues.
### Error:
`Traceback (most recent call last):
File "facedetect.py", line 251, in <module>
main_loop()
File "facedetect.py", line 163, in main_loop
face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
File "/usr/local/lib/python3.6/dist-packages/face_recognition/api.py", line 210, in face_encodings
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
File "/usr/local/lib/python3.6/dist-packages/face_recognition/api.py", line 210, in <listcomp>
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
RuntimeError: Error while calling cudnnConvolutionForward( context(), &alpha, descriptor(data), data.device(), (const cudnnFilterDescriptor_t)filter_handle, filters.device(), (const cudnnConvolutionDescriptor_t)conv_handle, (cudnnConvolutionFwdAlgo_t)forward_algo, forward_workspace, forward_workspace_size_in_bytes, &beta, descriptor(output), output.device()) in file /tmp/pip-build-do6wa1sv/dlib/dlib/cuda/cudnn_dlibapi.cpp:1007. code: 7, reason: A call to cuDNN failed`
### What I Did (main_loop)
```python
def main_loop():
    # conifgure ZED camera
    init = sl.InitParameters()
    cam = sl.Camera()
    if not cam.is_opened():
        print("Opening ZED Camera...")
    status = cam.open(init)
    if status != sl.ERROR_CODE.SUCCESS:
        print(repr(status))
        exit()
    runtime = sl.RuntimeParameters()
    mat = sl.Mat()
    print_camera_information(cam)
    # ZED
    # Get image size
    image_size = cam.get_resolution()
    width = image_size.width
    height = image_size.height
    left_image_rgba = np.zeros((height, width, 4), dtype=np.uint8)
    # Prepare single image containers
    left_image = sl.Mat()
    # Track how long since we last saved a copy of our known faces to disk as a backup.
    number_of_faces_since_save = 0
    while True:
        # Grab a single frame of video (ZED)
        err = cam.grab(runtime)
        if err == sl.ERROR_CODE.SUCCESS:
            cam.retrieve_image(left_image, sl.VIEW.VIEW_LEFT)
            ## TODO: Look at what type the images are here. *******
            # Copy the left image to the left side of SBS image
            left_image_rgba[0:height, 0:width, :] = left_image.get_data()
            # Convert SVO image from RGBA to RGB
            left_image_rgb = cv2.cvtColor(left_image_rgba, cv2.COLOR_RGBA2RGB)
            # Resize frame of video to 1/4 size for faster face recognition processing
            small_frame = cv2.resize(left_image_rgb, (0, 0), fx=0.175, fy=0.175)  # (ZED)
            # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
            rgb_small_frame = small_frame
            # Find all the face locations and face encodings in the current frame of video
            face_locations = face_recognition.face_locations(rgb_small_frame)
            print("Number of faces detected: ", len(face_locations))
            print(face_locations)
            print(rgb_small_frame.shape)
            face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
            # Loop through each detected face and see if it is one we have seen before
            # If so, we'll give it a label that we'll draw on top of the video.
            face_labels = []
            for face_location, face_encoding in zip(face_locations, face_encodings):
                # See if this face is in our list of known faces.
                metadata = lookup_known_face(face_encoding)
                # If we found the face, label the face with some useful information.
                if metadata is not None:
                    time_at_door = datetime.now() - metadata['first_seen_this_interaction']
                    face_label = f"At door {int(time_at_door.total_seconds())}s"
                # If this is a brand new face, add it to our list of known faces
                else:
                    face_label = "New visitor!"
                    # Grab the image of the the face from the current frame of video
                    top, right, bottom, left = face_location
                    face_image = small_frame[top:bottom, left:right]
                    face_image = cv2.resize(face_image, (150, 150))
                    # Add the new face to our known face data
                    register_new_face(face_encoding, face_image)
                face_labels.append(face_label)
            # Draw a box around each face and label each face
            for (top, right, bottom, left), face_label in zip(face_locations, face_labels):
                # Scale back up face locations since the frame we detected in was scaled to 1/4 size
                top *= 4
                right *= 4
                bottom *= 4
                left *= 4
                # Draw a box around the face
                cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
                # Draw a label with a name below the face
                cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
                cv2.putText(frame, face_label, (left + 6, bottom - 6), cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255), 1)
            # Display recent visitor images
            number_of_recent_visitors = 0
            for metadata in known_face_metadata:
                # If we have seen this person in the last minute, draw their image
                if datetime.now() - metadata["last_seen"] < timedelta(seconds=10) and metadata["seen_frames"] > 5:
                    # Draw the known face image
                    x_position = number_of_recent_visitors * 150
                    frame[30:180, x_position:x_position + 150] = metadata["face_image"]
                    number_of_recent_visitors += 1
                    # Label the image with how many times they have visited
                    visits = metadata['seen_count']
                    visit_label = f"{visits} visits"
                    if visits == 1:
                        visit_label = "First visit"
                    cv2.putText(frame, visit_label, (x_position + 10, 170), cv2.FONT_HERSHEY_DUPLEX, 0.6, (255, 255, 255), 1)
            if number_of_recent_visitors > 0:
                cv2.putText(frame, "Visitors at Door", (5, 18), cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255), 1)
            # Display the final frame of video with boxes drawn around each detected fames
            # cv2.imshow('Video', frame)
            cv2.imshow("ZED", rgb_small_frame)
            # Hit 'q' on the keyboard to quit!
            if cv2.waitKey(1) & 0xFF == ord('q'):
                save_known_faces()
                break
            # We need to save our known faces back to disk every so often in case something crashes.
            if len(face_locations) > 0 and number_of_faces_since_save > 100:
                save_known_faces()
                number_of_faces_since_save = 0
            else:
                number_of_faces_since_save += 1
    # Release handle to the webcam
    # video_capture.release()
    cv2.destroyAllWindows()
    # Close (ZED)
    cam.close()
```
| 2hard
|
Title: Can't display an ipydatagrid in mercury
Body: Hello, the below code is not rendering the grid as it should:
import numpy as np
import pandas as pd
from ipydatagrid import DataGrid
df = pd.DataFrame(data=np.random.randn(5, 10))
datagrid = DataGrid(df)
datagrid
Mercury is not displaying the grid with the below output:

| 1medium
|
Title: Add "raw" endpoint to return raw, unparsed request data?
Body: If possible, please consider adding a `/raw` or `/echo` endpoint that returns the raw HTTP request that the server received.
This feature can be used to help diagnose misbehaving applications (e.g. sending duplicate headers with different values or using incorrect line endings) or debug applications using niche HTTP features (e.g. sub-headers for `multipart/form-data` sections). | 1medium
|
Title: Local to cell __name__
Body: ### Documentation is
- [x] Missing
- [ ] Outdated
- [ ] Confusing
- [ ] Not sure?
### Explain in Detail
I've been trying to use the smolagents library in marimo and was investigating why one of its functions, `push_to_hub`, does not work as expected.
There were several reasons it doesn't work, and I tried to monkey-patch them here:
https://github.com/kazemihabib/Huggingface-Agents-Course-Marimo-Edition/blob/marimo/patches/smolagents_patches.py that you could check for further details.
There was a specific undocumented behavior that breaks the library and it's the focus of this issue.
```python
def _test():
    pass

print(_test.__name__)
```
marimo adds a prefix to the cell-local name and prints:
`_cell_AJWG_test`
The `push_to_hub` function relies on this name (see the sketch after this list):
1) It fetches the source code
2) Replaces the function name with 'forward' (this step breaks, as `__name__` returns the prefixed name)
3) Appends the `forward` function to some other code
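A minimal sketch of the behavior and a possible workaround (this assumes marimo only mangles underscore-prefixed, cell-local names, and that `__name__` is writable as usual — both worth double-checking):
```python
# Cell-local (underscore-prefixed) names appear to get mangled by marimo:
def _test():
    pass

print(_test.__name__)   # prints something like "_cell_AJWG_test" inside marimo

# A non-underscored name is a regular global and, as far as I can tell,
# keeps its original __name__:
def test():
    pass

print(test.__name__)    # prints "test"

# If the leading underscore is needed, __name__ is a plain writable attribute,
# so it can be reset before handing the function to code (like smolagents'
# push_to_hub) that inspects it:
_test.__name__ = "_test"
```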
### Your Suggestion for Changes
IMHO this behavior of prefixing cell-local names with `_cell_{cell_id}` could be documented. | 1medium
|
Title: fix: Densenet Documentation
Body: This is code for DenseNet121 in Keras
```
keras.applications.DenseNet121(
include_top=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
name="densenet121",
)
```
And the Keras documentation is not very specific about the `classes` argument.
Earlier Documentation : classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified.
Updated Documentation : classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. **Defaults to 1000** | 0easy
|
Title: Crash on new install launch
Body: Press any button or wait 20 seconds to continue.
2016-11-15 12:11:05,838 [ cli] [INFO] PokemonGO Bot v1.0
2016-11-15 12:11:05,846 [ cli] [INFO] commit: 30dbcc0d
2016-11-15 12:11:05,849 [ cli] [INFO] Configuration initialized
2016-11-15 12:11:05,850 [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
2016-11-15 12:11:05,850 [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
2016-11-15 12:11:05,858 [requests.packages.urllib3.connectionpool] [INFO] Starting new HTTP connection (1): www.google-analytics.com
(23653) wsgi starting up on http://127.0.0.1:4000
[2016-11-15 12:11:05] [SleepSchedule] [INFO] Next sleep at 12:26:41, for a duration of 05:54:05
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] Setting start location.
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] [x] Coordinates found in passed in location, not geocoding.
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] Location found: 37.809295714,-122.410976772 (37.809295714, -122.410976772, 8.0)
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] Now at (37.809295714, -122.410976772, 8.0)
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] Login procedure started.
[2016-11-15 12:11:07] [PokemonGoBot] [INFO] Login successful.
[2016-11-15 12:11:07] [PokemonGoBot] [INFO]
[2016-11-15 12:11:07] [PokemonGoBot] [INFO] [x] Error while opening cached forts: [Errno 2] No such file or directory: u'/Users/moquette/Bot/pokemongo_bot/../data/recent-forts-Evolver1K.json'
[2016-11-15 12:11:08] [PokemonGoBot] [INFO] Level: 3 (Next Level: 2860 XP) (Total: 3140 XP)
[2016-11-15 12:11:08] [PokemonGoBot] [INFO] Pokemon Captured: 5 | Pokestops Visited: 0
[2016-11-15 12:11:08] [PokemonGoBot] [INFO]
[2016-11-15 12:11:08] [PokemonGoBot] [INFO] --- Evolver1K ---
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Pokemon Bag: 5/250
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Items: 74/350
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Stardust: 1100 | Pokecoins: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] PokeBalls: 70 | GreatBalls: 0 | UltraBalls: 0 | MasterBalls: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] RazzBerries: 0 | BlukBerries: 0 | NanabBerries: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] LuckyEgg: 0 | Incubator: 0 | TroyDisk: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Potion: 0 | SuperPotion: 0 | HyperPotion: 0 | MaxPotion: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Incense: 2 | IncenseSpicy: 0 | IncenseCool: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Revive: 0 | MaxRevive: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO]
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Pokemon:
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] #4 Charmander: (CP 12, IV 0.67)
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] #72 Tentacool: (CP 36, IV 0.67) | (CP 11, IV 0.69)
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] #118 Goldeen: (CP 11, IV 0.36)
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] #147 Dratini: (CP 37, IV 0.47)
[2016-11-15 12:11:10] [PokemonGoBot] [INFO]
[2016-11-15 12:11:10] [RandomAlivePause] [INFO] Next random alive pause at 13:21:15, for a duration of 0:01:28
[2016-11-15 12:11:10] [RandomPause] [INFO] Next random pause at 12:41:42, for a duration of 0:00:53
[2016-11-15 12:11:10] [RecycleItems] [INFO] Next forced item recycle at 12:15:55
[2016-11-15 12:11:10] [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
[2016-11-15 12:11:10] [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Starting bot...
[2016-11-15 12:11:10] [CollectLevelUpReward] [INFO] Received level up reward:
[2016-11-15 12:11:11] [ cli] [INFO]
[2016-11-15 12:11:11] [ cli] [INFO] Ran for 0:00:06
[2016-11-15 12:11:11] [ cli] [INFO] Total XP Earned: 0 Average: 0.00/h
[2016-11-15 12:11:11] [ cli] [INFO] Travelled 0.00km
[2016-11-15 12:11:11] [ cli] [INFO] Visited 0 stops
[2016-11-15 12:11:11] [ cli] [INFO] Encountered 0 pokemon, 0 caught, 0 released, 0 evolved, 0 never seen before ()
[2016-11-15 12:11:11] [ cli] [INFO] Threw 0 pokeballs
[2016-11-15 12:11:11] [ cli] [INFO] Earned 0 Stardust
[2016-11-15 12:11:11] [ cli] [INFO] Hatched eggs 0
[2016-11-15 12:11:11] [ cli] [INFO]
[2016-11-15 12:11:11] [ cli] [INFO] Highest CP Pokemon:
[2016-11-15 12:11:11] [ cli] [INFO] Most Perfect Pokemon:
Traceback (most recent call last):
File "pokecli.py", line 846, in <module>
main()
File "pokecli.py", line 205, in main
bot.tick()
File "/Users/moquette/Bot/pokemongo_bot/__init__.py", line 770, in tick
if worker.work() == WorkerResult.RUNNING:
File "/Users/moquette/Bot/pokemongo_bot/cell_workers/buddy_pokemon.py", line 135, in work
if self._km_walked() - self.buddy['last_km_awarded'] >= self.buddy_distance_needed:
KeyError: 'last_km_awarded'
[2016-11-15 12:11:11] [sentry.errors] [ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) (url: https://app.getsentry.com/api/90254/store/)
Traceback (most recent call last):
File "/Users/moquette/Bot/lib/python2.7/site-packages/raven/transport/threaded.py", line 174, in send_sync
super(ThreadedHTTPTransport, self).send(data, headers)
File "/Users/moquette/Bot/lib/python2.7/site-packages/raven/transport/http.py", line 47, in send
ca_certs=self.ca_certs,
File "/Users/moquette/Bot/lib/python2.7/site-packages/raven/utils/http.py", line 66, in urlopen
return opener.open(url, data, timeout)
File "/Users/moquette/Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 494, in open
response = self._open(req, data)
File "/Users/moquette/Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 512, in _open
'_open', req)
File "/Users/moquette/Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 466, in _call_chain
result = func(*args)
File "/Users/moquette/Bot/lib/python2.7/site-packages/raven/utils/http.py", line 46, in https_open
return self.do_open(ValidHTTPSConnection, req)
File "/Users/moquette/Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 1284, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1057, in request
self._send_request(method, url, body, headers)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1097, in _send_request
self.endheaders(body)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 895, in _send_output
msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128)
[2016-11-15 12:11:11] [sentry.errors.uncaught] [ERROR] [u"KeyError: 'last_km_awarded'", u' File "pokecli.py", line 846, in <module>', u' File "pokecli.py", line 205, in main', u' File "pokemongo_bot/__init__.py", line 770, in tick', u' File "pokemongo_bot/cell_workers/buddy_pokemon.py", line 135, in work']
Tue Nov 15 12:11:11 PST 2016 Pokebot Stopped.
| 2hard
|
Title: after importing agent with .json KeyError 'chat_history'
Body: ### Description

### Steps to Reproduce the Bug
1. Import an agent from a .json file that was exported earlier
2. Go to chat
The error will show.
### Expected Behavior
no error
### Operating System
- [X] Linux
- [ ] Microsoft Windows
- [ ] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [X] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [ ] Using docker compose
- [X] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | 1medium
|
Title: how to make loss visualization
Body: 如何使训练损失loss通过图的形式呈现 | 1medium
|
Title: Factors that influence distance between face embeddings
Body: Hi,
I am using the facenet project in order to calculate the distances between two faces in images in order to evaluate if the two persons depicted are the same or not according to the compare.py file.
Normally it is claimed that if the distance <= 1, the faces depicted belong to the same person.
However, after running a lot of tests, I find that the best distance threshold can differ from case to case.
I was wondering what can influence this difference in the distance threshold? The reason I ask is that I want to come up with a single threshold value to use as a metric that produces correct results in a good percentage of cases.
Is there something I can improve in the code, or are there criteria to keep in mind when selecting the images to be compared? For example, the lightness of the image, or whether the face is turned slightly to the side, etc. What prerequisites should an image meet in order to be used as input to the comparison?
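For reference, the distance check boils down to something like this minimal sketch — the 1.1 threshold is just an example value I am testing, not a recommendation:
```python
import numpy as np

def same_person(emb1, emb2, threshold=1.1):
    # emb1, emb2: embeddings produced by the facenet model for two aligned face crops
    dist = np.linalg.norm(emb1 - emb2)   # Euclidean distance, as in compare.py
    return dist, dist <= threshold
```
Sweeping the threshold on a labelled validation set (the way the LFW evaluation does) seems more robust than reusing a single published number, but I would still like to know which image properties push the distances around.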
Thanks in advance!
| 1medium
|
Title: Swagger throws Internal Server Error when using reverse relationships in serializer
Body: **error:**
File "/usr/local/lib/python3.6/dist-packages/drf_yasg/inspectors/field.py", line 85, in field_to_swagger_object
raise SwaggerGenerationError("cannot instantiate nested serializer as " + swagger_object_type.__name__)
drf_yasg.errors.SwaggerGenerationError: cannot instantiate nested serializer as Parameter
```python
# models
class ModelA(BaseModel):
    ...some fields...

    class Meta:
        db_table = 'model_a'


class ModelB(BaseModel):
    model_a_id = models.ForeignKey(ModelA, on_delete=models.CASCADE, related_name='model_b_info')
    ...some fields....

    class Meta:
        db_table = 'model_b'


# serializers
class ModelBReadSerializer(serializers.ModelSerializer):
    some_id = someSerializer(many=True, read_only=True)

    class Meta:
        model = ModelB
        fields = "__all__"


class ModelASerializer(serializers.ModelSerializer):
    model_b_info = ModelBReadSerializer(many=True, read_only=True)
    ...more fields...

    class Meta:
        model = ModelA
        fields = "__all__"
```
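A possible (unverified) workaround sketch: the error seems to appear when drf-yasg tries to flatten the serializer into form/query `Parameter`s, which cannot contain nested serializers, so forcing an explicit JSON request body via `swagger_auto_schema` avoids that code path. The view class below is hypothetical — only the decorator usage matters:
```python
from drf_yasg.utils import swagger_auto_schema
from rest_framework import viewsets


class ModelAViewSet(viewsets.ModelViewSet):  # hypothetical view, adjust to the real one
    queryset = ModelA.objects.all()
    serializer_class = ModelASerializer

    @swagger_auto_schema(request_body=ModelASerializer)
    def create(self, request, *args, **kwargs):
        return super().create(request, *args, **kwargs)
```
Restricting the view's parsers to JSON only (so the serializer is not rendered as form parameters) might achieve the same thing, but I have not verified that.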
| 1medium
|
Title: Update torch version in Dockerfile
Body: Currently, the CPU build of Docker still uses torch 1.x. Update to the latest version. | 1medium
|
Title: WebTransport server script
Body: Is there a webtransport server script?
Or can the [http3_server.py](https://github.com/aiortc/aioquic/blob/main/examples/http3_server.py) script receive the streams (unidirectional & bidirectional) and datagrams?
If yes, does it also work with a WebTransport JavaScript client? | 1medium
|
Title: Request: Plotting parameters in WhatifComponent
Body: As far as I can see, WhatifComponent includes ShapContributionsGraphComponent, but does not include plot styling parameters such as "sort" and "orientation". I think it makes sens to add these :-) | 1medium
|
Title: [feature] Add possibility to save the backup per command
Body: **Version and OS**
Synology DSM 7.2.1 with docker
**Is your feature request related to a problem? Please describe.**
It would be great to have an automatic export function for the links, so that this doesn't have to be done by hand.
Therefore it would be good to have an export command which can be triggered through e.g. the Synology Task Scheduler, to run the export e.g. every Friday at 01:00.
**Describe the solution you'd like**
For example, for paperless-ngx there is a command like this:
`bash -c "docker exec Paperless-ngx document_exporter ../export -z"`
Something similar for Changedetection would be nice.
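Until something like that exists, a workaround sketch (assuming the default `/datastore` volume and a container named `changedetection` — adjust both to your setup) is to simply archive the datastore from a scheduled task:
```bash
# Synology Task Scheduler -> user-defined script, e.g. every Friday at 01:00
docker exec changedetection tar czf /datastore/backup-$(date +%F).tar.gz \
    --exclude='backup-*.tar.gz' -C /datastore .
```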
**Describe the use-case and give concrete real-world examples**
To have a fully automated system, an automatic backup solution would also be nice, so that a manual backup is not required every time I change something in the list. | 1medium
|
Title: Convert image to geometric parameters in python
Body: I try use tutorial from kaggle site https://www.kaggle.com/c/facial-keypoints-detection#getting-started-with-r
They perform face recognition using the R language, but the analysis itself is not important here.
What is important is that they work with transformed parameters (data that is already prepared for analysis)!
I'm interested in the process of preparing (converting) the data from an image into a data frame.
They provide two datasets (test and train). In these datasets the pictures have been decomposed into geometric parameters, like here:
'data.frame': 7049 obs. of 30 variables:
$ left_eye_center_x : num 66 64.3 65.1 65.2 66.7 ...
$ left_eye_center_y : num 39 35 34.9 37.3 39.6 ...
$ right_eye_center_x : num 30.2 29.9 30.9 32 32.2 ...
$ right_eye_center_y : num 36.4 33.4 34.9 37.3 38 ...
$ left_eye_inner_corner_x : num 59.6 58.9 59.4 60 58.6 ...
$ left_eye_inner_corner_y : num 39.6 35.3 36.3 39.1 39.6 ...
$ left_eye_outer_corner_x : num 73.1 70.7 71 72.3 72.5 ...
$ left_eye_outer_corner_y : num 40 36.2 36.3 38.4 39.9 ...
$ right_eye_inner_corner_x : num 36.4 36 37.7 37.6 37 ...
$ right_eye_inner_corner_y : num 37.4 34.4 36.3 38.8 39.1 ...
$ right_eye_outer_corner_x : num 23.5 24.5 25 25.3 22.5 ...
$ right_eye_outer_corner_y : num 37.4 33.1 36.6 38 38.3 ...
$ left_eyebrow_inner_end_x : num 57 54 55.7 56.4 57.2 ...
$ left_eyebrow_inner_end_y : num 29 28.3 27.6 30.9 30.7 ...
$ left_eyebrow_outer_end_x : num 80.2 78.6 78.9 77.9 77.8 ...
$ left_eyebrow_outer_end_y : num 32.2 30.4 32.7 31.7 31.7 ...
$ right_eyebrow_inner_end_x: num 40.2 42.7 42.2 41.7 38 ...
$ right_eyebrow_inner_end_y: num 29 26.1 28.1 31 30.9 ...
$ right_eyebrow_outer_end_x: num 16.4 16.9 16.8 20.5 15.9 ...
$ right_eyebrow_outer_end_y: num 29.6 27.1 32.1 29.9 30.7 ...
$ nose_tip_x : num 44.4 48.2 47.6 51.9 43.3 ...
$ nose_tip_y : num 57.1 55.7 53.5 54.2 64.9 ...
$ mouth_left_corner_x : num 61.2 56.4 60.8 65.6 60.7 ...
$ mouth_left_corner_y : num 80 76.4 73 72.7 77.5 ...
$ mouth_right_corner_x : num 28.6 35.1 33.7 37.2 31.2 ...
$ mouth_right_corner_y : num 77.4 76 72.7 74.2 77 ...
$ mouth_center_top_lip_x : num 43.3 46.7 47.3 50.3 45 ...
$ mouth_center_top_lip_y : num 72.9 70.3 70.2 70.1 73.7 ...
$ mouth_center_bottom_lip_x: num 43.1 45.5 47.3 51.6 44.2 ...
$ mouth_center_bottom_lip_y: num 84.5 85.5 78.7 78.3 86.9 ...
So, as you can see, there are 30 variables for each picture.
Suppose I have some pictures here: "C:\Users\Admin\Downloads\mypicture"
How can I decompose them into these variables using Python?
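A rough sketch of one possible direction — it uses the `face_recognition` package rather than whatever tooling the competition organisers used, so the landmark names will not match the 30 Kaggle columns exactly, and the file name is just a placeholder:
```python
import face_recognition
import pandas as pd

image = face_recognition.load_image_file(
    r"C:\Users\Admin\Downloads\mypicture\face1.jpg")  # placeholder file name

landmarks = face_recognition.face_landmarks(image)[0]  # first detected face

# Flatten the named points ("left_eye", "nose_tip", ...) into x/y columns,
# one row per image -- roughly the shape of the Kaggle data frame.
row = {}
for part, points in landmarks.items():
    for i, (x, y) in enumerate(points):
        row[f"{part}_{i}_x"] = x
        row[f"{part}_{i}_y"] = y

df = pd.DataFrame([row])
print(df.shape)
```
Is something like this the right approach, or is there a more direct way to get exactly the 30 parameters used in the competition?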
Thank you
| 1medium
|
Title: Autorange values stored in dcc.Graph.figure['layout']['xaxis' 'yaxis']['range'] no longer available after version v0.30.1
Body: I was using the initial autorange values to "hold" my plotly.graph_objs.Layout plot axes from recalculating when dynamically adding or removing plotly.graph_objs.scatter.Markers. Three questions:
1. Is there some place else I can read Layout axes range values after they have been calculated by the autorange function?
2. Is there a better way to "hold" a plotly.graph_objs.Layout to the initial autorange values?
3. Can the plotly.graph_objs.Layout axes autorange function be invoked directly? | 1medium
|
Title: Support cafile in NATS streaming source
Body: https://github.com/mage-ai/mage-ai/blob/d0eaef32a40a771b9ac8485e14ed08da83ca2dc6/mage_ai/data_preparation/templates/data_loaders/streaming/nats.yaml#L20
Issue with NATS template:
```python
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/mage_ai/shared/retry.py", line 38, in retry_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/executors/streaming_pipeline_executor.py", line 116, in __execute_with_retry
self.__execute_in_python(
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/executors/streaming_pipeline_executor.py", line 164, in __execute_in_python
source = SourceFactory.get_source(
File "/usr/local/lib/python3.10/site-packages/mage_ai/streaming/sources/source_factory.py", line 36, in get_source
return NATSSource(config, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/streaming/sources/nats_js.py", line 62, in __init__
super().__init__(config)
File "/usr/local/lib/python3.10/site-packages/mage_ai/streaming/sources/base.py", line 23, in __init__
self.config = self.config_class.load(config=config)
File "/usr/local/lib/python3.10/site-packages/mage_ai/shared/config.py", line 48, in load
return self(**config)
TypeError: NATSConfig.__init__() got an unexpected keyword argument 'cafile
```
resolved by this addition:
```yaml
ssl_config:
cafile: "/tmp/.certs/ca.pem"
``` | 1medium
|
Title: Text Properties Support
Body: Hi Team,
Is it possible to extract text properties like font-size, font-width, font-style, and font-color? | 1medium
|
Title: [Feature Request] Render table can add Grand Total
Body: Thanks so much for your interest in Dash!
Before posting an issue here, please check the Dash [community forum](https://community.plotly.com/c/dash) to see if the topic has already been discussed. The community forum is also great for implementation questions. When in doubt, please feel free to just post the issue here :)
**Is your feature request related to a problem? Please describe.**
Hey, I have an idea for the visualized table: how to add a grand total. It's a footer, similar to the column header, but calculated from the numeric columns of the table.
**Describe the solution you'd like**
I encountered this when a table is very long and has pagination. So, how about adding HTML for it, or adding a grand-total row below the table with pagination (see the sketch below)?
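A rough sketch of the append-a-total-row workaround (dummy data and column names; the current `from dash import dash_table` import is assumed). Note the total only shows on the last page when pagination is on, which is exactly why a real footer would be better:
```python
import pandas as pd
from dash import dash_table

df = pd.DataFrame({"city": ["A", "B", "C"], "sales": [10, 20, 30]})  # dummy data

totals = df.select_dtypes("number").sum().to_dict()
total_row = {"city": "Grand Total", **totals}

table = dash_table.DataTable(
    columns=[{"name": c, "id": c} for c in df.columns],
    data=df.to_dict("records") + [total_row],
    page_size=10,
)
```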
**Describe alternatives you've considered**
It's helpful for me.

| 1medium
|
Title: asyncpg.exceptions.DataError: invalid input for query argument $1: <Record chat_id=-1001218255164> (an integer is required (got type asyncpg.Record))
Body: <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.18.3
* **PostgreSQL version**: 4.3
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: No
* **Python version**: 3.7.1
* **Platform**: Linux Ubuntu
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: No (i installed it in pycharm)
* **If you built asyncpg locally, which version of Cython did you use?**: I don't use Cython
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: I don't know
<!-- Enter your issue details below this comment. -->
I have a column with bigint type; when I try to insert a record, it gives this error message:
```
asyncpg.exceptions.DataError: invalid input for query argument $1: <Record chat_id=-1001218255164> (an integer is required (got type asyncpg.Record))
```
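For reference, a minimal sketch of what triggers this and how to avoid it — the DSN, table and column names are made up, only `chat_id` comes from the error message:
```python
import asyncio
import asyncpg

async def main():
    conn = await asyncpg.connect(dsn="postgresql://user:pass@localhost/db")  # placeholder DSN

    row = await conn.fetchrow("SELECT chat_id FROM chats LIMIT 1")

    # Wrong: passes the whole Record as $1, which asyncpg cannot coerce to bigint
    # await conn.execute("INSERT INTO chat_log (chat_id) VALUES ($1)", row)

    # Works: pull the bigint out of the Record first
    await conn.execute("INSERT INTO chat_log (chat_id) VALUES ($1)", row["chat_id"])

    await conn.close()

asyncio.run(main())
```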
|
Title: Advance NLP with spaCy load issue
Body: In the the online course https://course.spacy.io/
import spacy failing to load in exercises (see Section 2 Getting Started)
| 1medium
|
Title: question about max_time
Body: It appears that `max_time` sets a time limit for the time spent retrying, that does not include the initial try. If that is correct, if all I have is, for example, the total time budget for an API call, then I have to subtract the maximum time the initial attempt take from `max_time` to make sure that the total time spent = `(initial attempt + all retries) < time_budget`. Is that the intent of the code?
By way of context, we assumed that `max_time` could be set to our `time_budget` for the total time spent calling an API, but this assumption seemed to be incorrect. | 1medium
|
Title: config.env database_url not working
Body: i run a postgresql container in docker,
and add this in config.env:
`DATABASE_URL=postgresql://postgres:docker@localhost:5432/postgres`
but every time I run:
`python manage.py setup_dev`
it still creates data-dev.sqlite.
PS: the DB name and password are correct; I can connect with this command:
`psql -h localhost -U postgres -d postgres`
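A guess at what is going on, based on how similar Flask boilerplates structure their config — the exact variable names in this project's `config.py` may differ, so this is an assumption to verify:
```python
import os

basedir = os.path.abspath(os.path.dirname(__file__))

class Config:
    pass

class DevelopmentConfig(Config):
    # If the dev config reads DEV_DATABASE_URL instead of DATABASE_URL,
    # a plain DATABASE_URL in config.env is silently ignored and the
    # sqlite fallback below kicks in.
    SQLALCHEMY_DATABASE_URI = os.environ.get('DEV_DATABASE_URL') or \
        'sqlite:///' + os.path.join(basedir, 'data-dev.sqlite')
```
If that is the case here, renaming the variable in config.env to DEV_DATABASE_URL should make setup_dev use the PostgreSQL container.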
| 1medium
|
Title: New Core ML Spec Released
Body: Some features are added so this tool needs to be updated accordingly. See [their spec](https://apple.github.io/coremltools/coremlspecification/index.html) for details. | 1medium
|