text (string, lengths 20 to 57.3k) | labels (class label, 4 classes)
Title: 'TextClassifier' object has no attribute 'embeddings' Body: TARSClassifier.load error AttributeError Traceback (most recent call last) <ipython-input-13-710c2b4d40e4> in <module> ----> 1 tars = TARSClassifier.load('/content/drive/MyDrive/Text_classification/final-model.pt') 2 frames /usr/local/lib/python3.7/dist-packages/flair/nn/model.py in load(cls, model_path) 147 state = torch.load(f, map_location="cpu") 148 --> 149 model = cls._init_model_with_state_dict(state) 150 151 if "model_card" in state: /usr/local/lib/python3.7/dist-packages/flair/models/tars_model.py in _init_model_with_state_dict(cls, state, **kwargs) 739 label_dictionary=state.get("label_dictionary"), 740 label_type=state.get("label_type", "default_label"), --> 741 embeddings=state.get("tars_model").embeddings, 742 num_negative_labels_to_sample=state.get("num_negative_labels_to_sample"), 743 **kwargs, /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 1206 return modules[name] 1207 raise AttributeError("'{}' object has no attribute '{}'".format( -> 1208 type(self).__name__, name)) 1209 1210 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None: AttributeError: 'TextClassifier' object has no attribute 'embeddings'
1medium
Title: How to plot Matplotlib's surfaces? Body: I'm exploring the possibility to use K3D-Jupyter as a plotting library for [SymPy](https://github.com/sympy/sympy/), instead of relying on Matplotlib which is quite slow in a notebook. However, it seems like there is no function/object capable of using the data format used by [Matplotlib's `plot_surface`](https://matplotlib.org/stable/api/_as_gen/mpl_toolkits.mplot3d.axes3d.Axes3D.html#mpl_toolkits.mplot3d.axes3d.Axes3D.plot_surface), in which `x, y, z` are two dimensional arrays. I've seen [K3D's `surface`](https://k3d-jupyter.org/k3d.html?highlight=surface#k3d.factory.surface), but I think it assumes a uniform grid spacing between `xmin, xmax` and `ymin, ymax`. What would be the best way to plot matplotlib's surfaces with K3D? I'm going to drop a couple of example on what I'd like to achieve. **Example 1:** ``` from sympy import * from sympy.plotting.plot import plot3d, plot3d_parametric_surface var("x, y") r = sqrt(x**2 + y**2) expr = cos(r) * exp(-r / 10) p = plot3d(expr) s = p._series[0] xx, yy, zz = s.get_meshes() ``` Here, `xx, yy, zz` contains the numerical data used by Matplotib to draw the surface. Note that for each `(x, y)` there is one `z`. ![Figure 2](https://user-images.githubusercontent.com/9921777/115139183-d5c0f880-a030-11eb-9e59-749279ea8210.png) **Example 2:** ``` p = plot3d_parametric_surface(cos(u + v), sin(u - v), u - v, (u, -5, 5), (v, -5, 5)) s2 = p._series[0] xx, yy, zz = s2.get_meshes() ``` Note that for each `(x, y)` there could be multiple values for `z`. ![Figure 1](https://user-images.githubusercontent.com/9921777/115139231-03a63d00-a031-11eb-8712-bad2e20c012f.png)
1medium
Title: Support stable diffusion model Body: Can I use a Stable Diffusion model with Petals?
1medium
Title: RAM consumption of TimeSeriesDataset Body: I have a dataframe that consumes approx. 10 GB in memory. When I try to build the TimeSeriesDataset, it consumes >30 GB in memory (blowing up my RAM). I know it makes sense because the time series dataset is a bigger structure than the dataframe. How much can the memory consumption grow when building the time series dataset? Like a 4x? I would like to have an estimate so I know how much to reduce the original dataframe. Is there any way to make TimeSeriesDataset consume less RAM? Thanks @jdb78
1medium
Title: size mismatch error Body: When I run the "python3 app.py" for demo, it cannot load the pretrained model naver-clova-ix/donut-base-finetuned-docvqa, there is a size miss match error pretrained_model = DonutModel.from_pretrained(args.pretrained_path) File "/home/local/Project/chart/donut/donut/model.py", line 597, in from_pretrained model = super(DonutModel, cls).from_pretrained(pretrained_model_name_or_path, revision="official", *model_args, **kwargs) File "/home/local/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained ) = cls._load_pretrained_model( File "/home/local/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3532, in _load_pretrained_model raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") RuntimeError: Error(s) in loading state_dict for DonutModel: size mismatch for encoder.model.layers.1.downsample.norm.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for encoder.model.layers.1.downsample.norm.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for encoder.model.layers.1.downsample.reduction.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([256, 512]). size mismatch for encoder.model.layers.2.downsample.norm.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]). size mismatch for encoder.model.layers.2.downsample.norm.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]). size mismatch for encoder.model.layers.2.downsample.reduction.weight: copying a param with shape torch.Size([1024, 2048]) from checkpoint, the shape in current model is torch.Size([512, 1024]). You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
1medium
Title: quiet mode execution Body: Currently a ton of messages are printed. Is there a way to mute all or some of the messages?
1medium
Title: The resend_activation endpoint discloses if a user with a given email exists Body: The `resend_activation` endpoint returns a 400 response if the given email does not belong to an (inactive) user. This endpoint re-uses the password reset serializer (#555) but does not respect the `PASSWORD_RESET_SHOW_EMAIL_NOT_FOUND` setting because of these lines: https://github.com/sunscrapers/djoser/blob/c62371e3f9a8bbad2eaf55ffd0efad6eb6c02f26/djoser/views.py#L208-L209 All settings related to disclosing email default to `False`: ``` PASSWORD_RESET_SHOW_EMAIL_NOT_FOUND USERNAME_RESET_SHOW_EMAIL_NOT_FOUND ``` `resend_activation` shouldn't break this default. P.S. These settings don't work as advertised by the way, setting one to True has the effect of also toggling the other: https://github.com/sunscrapers/djoser/blob/c62371e3f9a8bbad2eaf55ffd0efad6eb6c02f26/djoser/serializers.py#L145-L149
1medium
Title: ModuleNotFoundError: No module named 'models.yolo'. Body: ### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question I have finetuned my model on google colab and I have downloaded best.py model and save it locally. after that I run below code `from ultralytics import YOLO model = YOLO("./models/best.pt") result = model.predict("input_videos/image.png")` I make sure about my model path and Input Image path. Also I added __init__.py file in my models folder. Also I have again installed ultralytics library. I know I can run `import os result = os.system("python yolov5/detect.py --weights models/last.pt --img 640 --conf 0.8 --source input_videos/input_video.mp4")` This code. But this code does not give the integer as a result. So I don't know how to do prediction and detect it's location. ### Additional I am using python version 3.12.5 and for ultralytics I am using version 8.2.87. I run this code in CPU. I also run the same type of code for detecting person without finetuning model i.e using your models. In that case I got the correct result.
1medium
Title: [Bug]: Auto Run script fails with error Body: ### Commit before submitting - [x] I understand that Issues are used to provide feedback and solve problems, not to complain in the comments section, and will provide more information to help solve the problem. - [x] I have checked the top Issue and searched for existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar issues. - [x] I have filled out a short and clear title, so that developers can quickly determine the general problem when browsing the Issue list. Not "a suggestion", "stuck", etc. ### Platform macOS ARM64 ### Version 1.7.12 ### Description fails to auto run script (manually download works only) ### Related log output ```shell curl -fsSL https://raw.githubusercontent.com/yeongpin/cursor-free-vip/main/scripts/install.sh -o install.sh && chmod +x install.sh && ./install.sh curl: (56) Failure writing output to destination, passed 1369 returned 4294967295 ```
1medium
Title: Bug: Spigo demo in Python3 Body: Change import to: ``` try: import urllib.request as urllib2 except ImportError: import urllib2 ```
0easy
Title: Issue running vwap with dataframe index from yFinance data Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.** ```python import pandas_ta as ta print(ta.version) ``` 0.3.14b0 **Do you have _TA Lib_ also installed in your environment?** ```sh $ pip list ``` no **Did you upgrade? Did the upgrade resolve the issue?** ```sh $ pip install -U git+https://github.com/twopirllc/pandas-ta ``` yes but didn't resolve the issue **Describe the bug** I'm trying to run the simple example to calculate the vwap. Since it's provided, the vwap requires the datetime indexing. However, the data coming via yFinance doesn't have this column and we get the key error for the dataframe **To Reproduce** ```python import pandas as pd import pandas_ta as ta df = pd.DataFrame() df = df.ta.ticker("aapl", period="9d", interval="5m") df.set_index(pd.DatetimeIndex(df["datetime"]), inplace=True) print(df.columns) df.ta.vwap(append=True) ``` **Screenshots** ![image](https://user-images.githubusercontent.com/4449380/159678191-7a639b18-34b1-40fd-95e0-56e146be433d.png) Thanks for using Pandas TA!
1medium
Title: generate template error: Stopping generation because post_gen_project hook script didn't exit successfully Body: # Image ![image](https://user-images.githubusercontent.com/5794205/201492410-aad40c6c-fa85-439c-8865-a1a48870762c.png) # Error Info ` E:\Code>fastapi_template Project name: fastapi_template_test Project description: Removing resources for disabled feature GraphQL API... Removing resources for disabled feature Kafka support... Removing resources for disabled feature Kubernetes... Removing resources for disabled feature Migrations... Removing resources for disabled feature Gitlab CI... Removing resources for disabled feature Dummy model... Removing resources for disabled feature Self-hosted swagger... Removing resources for disabled feature Tortoise ORM... Removing resources for disabled feature Ormar ORM... Removing resources for disabled feature PsycoPG... Removing resources for disabled feature Piccolo... Removing resources for disabled feature Postgresql DB... Removing resources for disabled feature Opentelemetry support... Removing resources for disabled feature SQLite DB... cleanup complete! โญ Placing resources nicely in your new project โญ Resources are happy to be where they are needed the most. Git repository initialized. warning: in the working copy of 'fastapi_template_test/static/docs/redoc.standalone.js', LF will be replaced by CRLF the next time Git touches it warning: in the working copy of 'fastapi_template_test/static/docs/swagger-ui-bundle.js', LF will be replaced by CRLF the next time Git touches it warning: in the working copy of 'fastapi_template_test/static/docs/swagger-ui.css', LF will be replaced by CRLF the next time Git touches it Added files to index. Traceback (most recent call last): File "C:\Users\pc\AppData\Local\Temp\tmpo5ko_okk.py", line 74, in <module> init_repo() File "C:\Users\pc\AppData\Local\Temp\tmpo5ko_okk.py", line 64, in init_repo subprocess.run(["poetry", "install", "-n"]) File "C:\Python311\Lib\subprocess.py", line 546, in run with Popen(*popenargs, **kwargs) as process: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\subprocess.py", line 1022, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Python311\Lib\subprocess.py", line 1491, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: [WinError 2] ็ณป็ปŸๆ‰พไธๅˆฐๆŒ‡ๅฎš็š„ๆ–‡ไปถใ€‚ Stopping generation because post_gen_project hook script didn't exit successfully ` # Context Info Python: 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32 Pip Package: `C:\Users\pc>pip list Package Version ------------------ --------- arrow 1.2.3 binaryornot 0.4.4 certifi 2022.9.24 cfgv 3.3.1 chardet 5.0.0 charset-normalizer 2.1.1 click 8.1.3 colorama 0.4.6 cookiecutter 1.7.3 distlib 0.3.6 fastapi-template 3.3.10 filelock 3.8.0 identify 2.5.8 idna 3.4 Jinja2 3.1.2 jinja2-time 0.2.0 MarkupSafe 2.1.1 nodeenv 1.7.0 pip 22.3 platformdirs 2.5.3 poyo 0.5.0 pre-commit 2.20.0 prompt-toolkit 3.0.32 pydantic 1.10.2 python-dateutil 2.8.2 python-slugify 6.1.2 PyYAML 6.0 requests 2.28.1 setuptools 65.5.0 six 1.16.0 termcolor 1.1.0 text-unidecode 1.3 toml 0.10.2 typing_extensions 4.4.0 urllib3 1.26.12 virtualenv 20.16.6 wcwidth 0.2.5`
1medium
Title: ManyToMany for SQLAlchemy Body: I've trying to get M2M field working with SQLAlchemy. Following the documentation ``` class LandingPageFactory(alchemy.SQLAlchemyModelFactory): FACTORY_FOR = LandingPage FACTORY_SESSION = db.session name = Sequence(lambda n: u'Landing Page %d' % n) class LPRotatorFactory(alchemy.SQLAlchemyModelFactory): FACTORY_FOR = LPRotator FACTORY_SESSION = db.session name = Sequence(lambda n: u'Landing Page %d' % n) @post_generation def landing_pages(self, create, extracted, **kwargs): if not create: return if extracted: for landing_page in extracted: self.landing_pages.add(landing_page) ``` Then if I try to set it up like so ``` lp1 = LandingPageFactory() lp2 = LandingPageFactory() db.session.commit() # All good here lpr = LPRotatorFactory(landing_pages=(lp1, lp2)) db.session.commit() # This throws the error ``` This throws an Attribute error. ``` self.landing_pages.add(landing_page) AttributeError: 'InstrumentedList' object has no attribute 'add' ``` I noticed all the docs and examples use Django, but didn't see anything too specific. Am I doing something wrong? Thanks
1medium
Title: module 'keras.utils' has no attribute 'PyDataset' Body: I have correctly installed version 3.0.5 of Keras and use PyTorch as the backend, but it always reports "module 'keras.utils' has no attribute 'PyDataset'". How can I solve this problem?
1medium
Title: favorable Body: coolest idea
3misc
Title: top10_floatholders and top10_holders APIs are missing 2006 data Body: The top10_floatholders and top10_holders APIs are missing all four quarterly data points for 2006; other years are normal. tushare id: 224776 ![top 10 float shareholders](https://github.com/waditu/tushare/assets/32354144/91328e64-dd72-4389-b996-ac6b5f264143)
1medium
Title: Geocoder with own locations Body: Greetings, I slightly edited the geocoder so that it can now accept your own places. https://github.com/JohnyCarrot/folium-geocoder-own-locations Hope someone finds it useful.
3misc
Title: How to compute metrics for each class in multi class segmentation Body: I would like to compute the metrics individually for each class, so I would like the output to be a (1xC) vector where C is the number of classes. I was trying it like this but it throws an error: ``` output = torch.rand([10, 3, 256, 256]) target = torch.rand([10, 1, 256, 256]).round().long() # first compute statistics for true positives, false positives, false negative and # true negative "pixels" tp, fp, fn, tn = smp.metrics.get_stats(output, target, mode='multi class', num_classes = 3) # then compute metrics with required reduction (see metric docs) iou_score = smp.metrics.iou_score(tp, fp, fn, tn, reduction="macro-imagewise") f1_score = smp.metrics.f1_score(tp, fp, fn, tn, reduction="macro-imagewise") false_negatives = smp.metrics.false_negative_rate(tp, fp, fn, tn, reduction=None) recall = smp.metrics.recall(tp, fp, fn, tn, reduction=None) ``` The error: ``` ValueError: For ``multiclass`` mode ``target`` should be one of the integer types, got torch.float32. ```
1medium
Title: Websockets RuntimeError "This event loop is already running" Body: When I'm trying to run a websocket, then in some time stop it, and run a new websocket, the following error occurs: Exception in thread Thread-2: Traceback (most recent call last): File "D:\python\Python39\lib\threading.py", line 950, in _bootstrap_inner self.run() File "D:\python\Python39\lib\site-packages\binance\threaded_stream.py", line 59, in run self._loop.run_until_complete(self.socket_listener()) File "D:\python\Python39\lib\asyncio\base_events.py", line 618, in run_until_complete self._check_running() File "D:\python\Python39\lib\asyncio\base_events.py", line 578, in _check_running raise RuntimeError('This event loop is already running') RuntimeError: This event loop is already running D:\python\Python39\lib\threading.py:952: RuntimeWarning: coroutine 'ThreadedApiManager.socket_listener' was never awaited self._invoke_excepthook(self) RuntimeWarning: Enable tracemalloc to get the object allocation traceback This is a part of code which I use to run websocket: `twm = ThreadedWebsocketManager(self.main_window.api_key, self.main_window.api_secret)` `twm.start()` `current_candle_websocket = twm.start_kline_futures_socket(callback=self.handle_candle_message, symbol=self.symbol, interval=Client.KLINE_INTERVAL_5MINUTE)` This is a part of code which I use to stop websocket: `twm.stop_socket(current_candle_websocket)` `twm.stop()` `twm = ''` I use Python 3.9. The error didn't occur on python-binance 1.0.15, but since some API features are retired I can no longer use this version and updated python-binance to 1.0.19, and after that I am getting this error.
1medium
Title: ไปฅ็•Œ้ขๆจกๅผๅฏๅŠจ๏ผŒ็จ‹ๅบๆœชๅ“ๅบ”ๅŽๅ…ณ้—ญ้‡ๆ–ฐๅฏๅŠจ๏ผŒๆ— ๆณ•ๆ˜พ็คบๅ‡บ็•Œ้ข Body: **Summary[้—ฎ้ข˜็ฎ€่ฟฐ๏ผˆไธ€ๅฅ่ฏ๏ผ‰]** ไปฅ็•Œ้ขๆจกๅผๅฏๅŠจ๏ผŒ็จ‹ๅบๆœชๅ“ๅบ”ๅŽๅ…ณ้—ญ้‡ๆ–ฐๅฏๅŠจ๏ผŒๆ— ๆณ•ๆ˜พ็คบๅ‡บ็•Œ้ข **Env & To Reproduce[ๅค็ŽฐไธŽ็Žฏๅขƒ]** python3.7.9 `python demo_toolbox.py -d dataset` ไปฅ็•Œ้ขๆจกๅผๅฏๅŠจๅŽ๏ผŒ็‚นๅ‡ป load above ็„ถๅŽๅœจๅณไพง่พ“ๅ…ฅๆก†่พ“ๅ…ฅๆ–‡ๅญ—ๆ—ถๅ€™ๅกไฝ๏ผŒ็จ‹ๅบๆœชๅ“ๅบ”ใ€‚ๅ…ณ้—ญๅŽๅ†ๆฌกๅฏๅŠจ็จ‹ๅบๅˆ™ไธๅ‡บ็Žฐ็•Œ้ขใ€‚ **Screenshots[ๆˆชๅ›พ๏ผˆๅฆ‚ๆœ‰๏ผ‰]** ![snipaste_20230108_130339](https://user-images.githubusercontent.com/28062894/211181459-7850d635-7f82-4a6d-8560-dc855a20ee5a.png) ๅ†ๆฌกๅฏๅŠจไผšๅกๅœจ่ฟ™้‡Œ ![snipaste_20230108_130946](https://user-images.githubusercontent.com/28062894/211181484-fb6fdab8-03ae-4243-97c2-c2a480f75db7.png)
1medium
Title: Fix error on user register Body: review user forms, register and login
1medium
Title: Grads w.r.t. weights of `MixtureGeneral` Distribution are giving `nan`s Body: Hi, We have created some models where we estimate the weights of the `MixtureGeneral` distribution. However, when computing the gradient of this argument, we are encountering `nan` values. We enabled `jax.config.update("debug_nan", True)` to diagnose the issue, and it pointed to the following line: https://github.com/pyro-ppl/numpyro/blob/8e9313fd64a34162bc1c08b20ed310373e82e347/numpyro/distributions/mixtures.py#L152 I suspect that after the implementation of https://github.com/pyro-ppl/numpyro/pull/1791, extra care is needed to handle `inf` and `nan` values, possibly by using a double `where` for a safe `logsumexp`. > [!IMPORTANT] > This is an urgent issue, so a prompt response would be greatly appreciated.
2hard
Title: [Feature Request] dcc.Slider - Enable user to define slider direction Body: I have an application where I would like to use sliders to crop an image (heatmap graph). The image's (0,0) is defined as the top left of the image. I'd like the y slider to start with 0 at the top, and end with the height of the image at the bottom. Currently, I cannot find a method to invert the slider to allow for the slider to go {top: low, bottom: high}, instead of the default {top: high, bottom: low} **Describe the solution you'd like** A slider where the direction from minimum to maximum could be swapped. **Describe alternatives you've considered** I tried setting the min to be higher than the max, which did not work. I've tried flipping the slider with CSS and the result is... erratic
1medium
Title: Update c-ares version to 1.19.1 Body: * gevent version: 22.10.2 from PyPI * Python version: python 3.11.3 * Operating System: docker python:latest ### Description: Update c-ares version to 1.19.1 (it is the latest version as of today: https://c-ares.org). It is not a bug in gevent itself, but in the c-ares dependency. These vulnerabilities exist: * https://nvd.nist.gov/vuln/detail/CVE-2023-32067 * https://nvd.nist.gov/vuln/detail/CVE-2022-4904 * https://nvd.nist.gov/vuln/detail/CVE-2023-31124 Version 1.19.1 seems fine (at least considering these vulnerabilities). Gevent is currently using 1.18.1: https://github.com/gevent/gevent/blob/master/deps/c-ares/include/ares_version.h#L14. It would be nice to update the c-ares version to 1.19.1. ### What I've run: I tried this using this Dockerfile: ```dockerfile FROM python RUN pip3 install gevent==22.10.2 ```
1medium
Title: training failed Body: We have a large dataset that contains 1M tables, trained on the YOLOv11x model. ``` def model_train(data_yaml_path): model = YOLO('yolo11x.pt') data = Path(data_yaml_path) results = model.train(data=data, epochs=10, imgsz=800, patience=2, cls=0.25, box=0.05, project="final-table-detection", device=[0, 1, 2, 3, 4, 5], batch=36) ``` It trained for only 3 epochs and took the first epoch as the best one. ![Image](https://github.com/user-attachments/assets/a893c719-9c98-4183-9705-454c16d04fdb) But the results are poor. What is the reason, @glenn-jocher?
2hard
Title: [Feature request] Add apply_to_batch Body: There is request how to apply to video or to batch. Albumentations was not originally designed to be applied to batch. But it looks like we can add such functionality without too much pain, by just looping over frames with annotations. Related to: https://github.com/albumentations-team/albumentations/issues/465 https://github.com/albumentations-team/albumentations/issues/683 https://github.com/albumentations-team/albumentations/issues/1561
1medium
Title: Question - how do I mark fields as minLength 0 and nullable? Body: I have an app using DRF 3.8.2, Django 1.11.16, and drf-yasg 1.10.2. Schema generation works, but I tried to add automated test cases to verify that the response matches the generated schema. I have a CharField on a ModelSerializer that is for a model field that has null=True, blank=True. Despite this, drf-yasg appears to be generating a minLength: 1 requirement. Oracle makes no distinction between NULL and empty string for VARCHAR2 and NVARCHAR2. DRF returns these fields as '', which I can change, but is there an easier way to control this than a "NullableCharField" subclass?
1medium
Title: better dependency manager for project Body: Currently Marzban uses pip to manage dependencies. Using pip can lead to some problems, and we always need third-party tools such as venv. We could replace pip with [uv](https://docs.astral.sh/uv/) to avoid this and have a better dependency manager.
1medium
Title: parameter alpha in focal loss is not the same as in the paper Body: ### Describe the bug The parameter alpha in focal loss is a scalar, so it cannot balance the different classes. I think alpha should be a tensor with n_classes elements, where each value is the weight of that class. ### Reproduction steps ```bash Please refer to the description ``` ### Expected behavior Please refer to the description ### Environment ```shell wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py # For security purposes, please check the contents of collect_env.py before running it. python collect_env.py ``` - PyTorch Version (e.g., 1.0): - OS (e.g., Linux): - How you installed PyTorch (`conda`, `pip`, source): - Build command you used (if compiling from source): - Python version: - CUDA/cuDNN version: - GPU models and configuration: - Any other relevant information: ``` ### Additional context _No response_
1medium
Title: Reconsider HttpClient interface Body: As of now, the python `BaseHttpClient` look like this: https://github.com/apify/crawlee-python/blob/beac9fa0eb415caafc04cdaef2888e77fad915e0/src/crawlee/http_clients/_base.py#L55 It has two methods, `send_request` and `crawl`. This is the first iteration of decoupled HTTP clients. Later on, we refactored the JS version to use this one: https://github.com/apify/crawlee/blob/f912b8b06da2bc4f3f3db508cc39c936a5c87f23/packages/core/src/http_clients/base-http-client.ts#L179 It also has two methods, `sendRequest` and `stream`. Unlike the python version, the signatures of those methods match quite well. It is worth noting that the two serious attempts at implementing this interface (so far) both couldn't manage to implement `stream` correctly. Although we could probably live without it in the most common case, streaming is paramount for downloading files (potentially large ones), which is a use case that we want to support. We should simplify this interface and make it look the same in both versions. Any thoughts on how to achieve that? @vdusek @B4nan @barjin... and whoever else wants to chat :slightly_smiling_face:
2hard
Title: Error during training: RuntimeError: Error(s) in loading state_dict for Tacotron: Body: Arguments: run_id: mandarin syn_dir: k:/mockingbird/datame/SV2TTS/synthesizer models_dir: synthesizer/saved_models/ save_every: 1000 backup_every: 25000 log_every: 200 force_restart: False hparams: Checkpoint path: synthesizer\saved_models\mandarin\mandarin.pt Loading training data from: k:\mockingbird\datame\SV2TTS\synthesizer\train.txt Using model: Tacotron Using device: cpu Initialising Tacotron Model... Trainable Parameters: 32.866M Loading weights at synthesizer\saved_models\mandarin\mandarin.pt Traceback (most recent call last): File "synthesizer_train.py", line 37, in <module> train(**vars(args)) File "K:\MockingBird\synthesizer\train.py", line 114, in train model.load(weights_fpath, optimizer) File "K:\MockingBird\synthesizer\models\tacotron.py", line 536, in load self.load_state_dict(checkpoint["model_state"], strict=False) File "f:\anaconda3\envs\mockingbird\lib\site-packages\torch\nn\modules\module.py", line 1482, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for Tacotron: size mismatch for encoder_proj.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([128, 1024]). size mismatch for decoder.attn_rnn.weight_ih: copying a param with shape torch.Size([384, 768]) from checkpoint, the shape in current model is torch.Size([384, 1280]). size mismatch for decoder.rnn_input.weight: copying a param with shape torch.Size([1024, 640]) from checkpoint, the shape in current model is torch.Size([1024, 1152]). size mismatch for decoder.stop_proj.weight: copying a param with shape torch.Size([1, 1536]) from checkpoint, the shape in current model is torch.Size([1, 2048]). I have already changed that line of characters in symbols to the old version, but it still reports this error. I am using my own data here, organized to mimic the aishell3 structure, and I have already run the pre.py preprocessing; the error occurs right at the start-of-training step.
1medium
Title: Cannot install cartopy in Scientific Linux Body: ### Description <!-- Please provide a general introduction to the issue/proposal. --> Cartopy cannot be installed on Scientific Linux because the newest available version of `proj` is 4.8.0 and cartopy requires 4.9.0. I'm not sure why `setup.py` checks the system-level versions of `proj` and `geos` instead of the versions in the current virtualenv though... <!-- If you are reporting a bug, attach the *entire* traceback from Python. If you are proposing an enhancement/new feature, provide links to related articles, reference examples, etc. If you are asking a question, please ask on StackOverflow and use the cartopy tag. All cartopy questions on StackOverflow can be found at https://stackoverflow.com/questions/tagged/cartopy --> #### Code to reproduce ``` pip install cartopy ``` #### Traceback ``` Collecting cartopy Using cached Cartopy-0.17.0.tar.gz (8.9 MB) Installing build dependencies ... done Getting requirements to build wheel ... error ERROR: Command errored out with exit status 1: command: /home/jba/venv/gdal3/bin/python3 /home/jba/venv/gdal3/lib64/python3.6/site-packages/pip/_vendor/pep517/_in_process.py get_requires_for_build_wheel /tmp/tmp312hzzaa cwd: /tmp/pip-install-u9agv4jo/cartopy Complete output (1 lines): Proj version 4.8.0 is installed, but cartopy requires at least version 4.9.0. ---------------------------------------- ERROR: Command errored out with exit status 1: /home/jba/venv/gdal3/bin/python3 /home/jba/venv/gdal3/lib64/python3.6/site-packages/pip/_vendor/pep517/_in_process.py get_requires_for_build_wheel /tmp/tmp312hzzaa Check the logs for full command output. ``` <details> <summary>Full environment definition</summary> <!-- fill in the following information as appropriate --> ### Operating system $ cat /proc/version Linux version 3.10.0-1062.18.1.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Tue Mar 17 10:44:42 CDT 2020 $ cat /etc/*-release NAME="Scientific Linux" VERSION="7.4 (Nitrogen)" ID="scientific" ID_LIKE="rhel centos fedora" VERSION_ID="7.4" PRETTY_NAME="Scientific Linux 7.4 (Nitrogen)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.4:GA" HOME_URL="http://www.scientificlinux.org//" BUG_REPORT_URL="mailto:[email protected]" REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7" REDHAT_BUGZILLA_PRODUCT_VERSION=7.4 REDHAT_SUPPORT_PRODUCT="Scientific Linux" REDHAT_SUPPORT_PRODUCT_VERSION="7.4" Scientific Linux release 7.4 (Nitrogen) Scientific Linux release 7.4 (Nitrogen) Scientific Linux release 7.4 (Nitrogen) ### Cartopy version 0.17.0 ### conda list ``` n/a ``` ### pip list ``` Package Version Location --------------------- ------- ---------------------- arrow 0.15.5 attrs 19.3.0 backcall 0.1.0 chardet 3.0.4 click 7.1.1 cloudpickle 1.3.0 colorama 0.4.3 coverage 5.0.3 cycler 0.10.0 Cython 0.29.15 dask 2.12.0 decorator 4.4.2 entrypoints 0.3 farmlib 1.5.9 /home/jba/work/farmlib flake8 3.7.9 flake8-builtins 1.4.2 flake8-commas 2.0.0 flake8-comprehensions 3.2.2 flake8-docstrings 1.5.0 flake8-logging-format 0.6.0 flake8-polyfill 1.0.2 flake8-print 3.1.4 flake8-string-format 0.3.0 future 0.18.2 fuzzywuzzy 0.17.0 GDAL 3.0.4 importlib-metadata 1.5.0 ipython 7.1.1 ipython-genutils 0.2.0 jbafarm 1.7.3 /home/jba/work/farmmap jedi 0.16.0 jsonschema 3.2.0 kiwisolver 1.1.0 litmus 1.3.4 matplotlib 3.2.0 mccabe 0.6.1 mock 4.0.2 networkx 2.4 nose 1.3.7 nosexcover 1.0.11 numpy 1.18.1 packaging 20.3 pandas 0.25.0 parso 0.6.2 pep8-naming 0.9.1 pexpect 4.8.0 
pickleshare 0.7.5 Pillow 7.0.0 pip 20.0.2 proj 0.1.0 prompt-toolkit 2.0.10 psycopg2-binary 2.8.4 ptyprocess 0.6.0 pycodestyle 2.5.0 pydocstyle 3.0.0 pyflakes 2.1.1 Pygments 2.6.1 pyparsing 2.4.6 pyproj 2.1.3 pyrsistent 0.15.7 python-dateutil 2.8.1 python-Levenshtein 0.12.0 pytz 2019.3 PyWavelets 1.0.3 Rtree 0.9.4 scikit-image 0.14.5 scipy 1.2.1 setuptools 39.2.0 Shapely 1.7.0 simplejson 3.17.0 six 1.14.0 snowballstemmer 2.0.0 toolz 0.10.0 tqdm 4.43.0 traitlets 4.3.3 wcwidth 0.1.8 wheel 0.34.2 zipp 3.1.0 ``` </details>
1medium
Title: Loading up Json_files built and trained in Keras 2 for Keras 3 Body: Using Keras 3, I am trying to load up a built and trained model from Keras 2 API that is stored in .json with weights stored in .h5. The model file is the following: [cnn_model.json](https://github.com/user-attachments/files/16462021/cnn_model.json). Since model_from_json does not exist in Keras 3, I rewrote the function from the Keras 2 API so that I can load the .json file. With Keras 3 (with torch backend), I am trying to load the model and the weights with the following code ``` import os import keras import json os.environ["KERAS_BACKEND"] = "torch" def model_from_json(json_string, custom_objects=None): """Parses a JSON model configuration string and returns a model instance. Args: json_string: JSON string encoding a model configuration. custom_objects: Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization. Returns: A Keras model instance (uncompiled). model_config = json.loads(json_string) return deserialize_keras_object(model_config, custom_objects=custom_objects) def model_torch(): model_name = 'cnn_model' #model file name model_file = model_name + '.json' with open(model_file, 'r') as json_file: print('USING MODEL:' + model_file) loaded_model_json = json_file.read() loaded_model = model_from_json(loaded_model_json) loaded_model.load_weights(model_name + '.h5') loaded_model.compile('sgd', 'mse') if __name__ == "__main__": model_torch() ``` However, when I run this code, I obtain the error below (as shown below). With this, I have the three following questions: 1. How does one possibly fix this error given that the model I want to load (in Keras 3) was built and trained in tensorflow-keras 2? 2. Is it better to rebuild the model in Keras using the load_model() function in Keras 3, and if so, how can you translate the weights from the .h5 file that was created in tensorflow-keras 2 to keras 3? 3. To rebuild how, how should one translate the json dictionary to actual code? Error I obtain: ` TypeError: Could not locate class 'Sequential'. Make sure custom classes are decorated with `@keras.saving.register_keras_serializable()`. 
Full object config: {'class_name': 'Sequential', 'config': {'name': 'sequential', 'layers': [{'class_name': 'Conv2D', 'config': {'name': 'conv2d_20', 'trainable': True, 'batch_input_shape': [None, 50, 50, 1], 'dtype': 'float32', 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_13', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d_21', 'trainable': True, 'dtype': 'float32', 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_14', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_10', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}}, {'class_name': 'Dropout', 'config': {'name': 'dropout_17', 'trainable': True, 'dtype': 'float32', 'rate': 0.25, 'noise_shape': None, 'seed': None}}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d_22', 'trainable': True, 'dtype': 'float32', 'filters': 64, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'same', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_15', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d_23', 'trainable': True, 'dtype': 'float32', 'filters': 64, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_16', 'trainable': True, 
'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_11', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}}, {'class_name': 'Dropout', 'config': {'name': 'dropout_18', 'trainable': True, 'dtype': 'float32', 'rate': 0.25, 'noise_shape': None, 'seed': None}}, {'class_name': 'Flatten', 'config': {'name': 'flatten_8', 'trainable': True, 'dtype': 'float32', 'data_format': 'channels_last'}}, {'class_name': 'Dense', 'config': {'name': 'dense_15', 'trainable': True, 'dtype': 'float32', 'units': 512, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_17', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'Dropout', 'config': {'name': 'dropout_19', 'trainable': True, 'dtype': 'float32', 'rate': 0.5, 'noise_shape': None, 'seed': None}}, {'class_name': 'Dense', 'config': {'name': 'dense_16', 'trainable': True, 'dtype': 'float32', 'units': 2, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_18', 'trainable': True, 'dtype': 'float32', 'activation': 'softmax'}}]}, 'keras_version': '2.2.4-tf', 'backend': 'tensorflow'} `
2hard
Title: Instance Segmentation vs Object Detection Body: I would like to find out whether it is better to use instance segmentation or object detection to classify vehicles and count them, in the case of traffic congestion. From my experience, traffic congestion has a lot of occlusion, so bounding boxes may not be accurate: it may classify a car as a truck, and a truck as a car. I have a relatively large dataset, approx. 7000 - 10000 images; it may be better to just use object detection as it will be easier to manage as the dataset gets larger Image example: ![image](https://user-images.githubusercontent.com/58838171/91002533-83dee480-e601-11ea-85a1-ea8d11c12f49.png) If anyone can give some input, that would be greatly appreciated. Thanks
1medium
Title: Router cannot find endpoint with id parameter Body: ### Issue When trying to hit an endpoint with an integer variable in the URL, flask-restx responds with `{ "message": "The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again. You have requested this URI [/xman/statements/1] but did you mean /xman/statements/<int:statement_id> or /xman/statements ?" }` ### **Code** In my code, there is a namespace defined as such: `api = Namespace("xman", description="External Managers related operations", path="/xman")` and the resource is decorated as such: <code> @api.route("/statements/<int:statement_id>") class StatementDetailsEndpoint(Resource): @inject.autoparams() def __init__(self, logging_service: LoggingService, data_service: EMDataService): super().__init__() self.logging_service = logging_service self.data_service = data_service def get(self, statement_id: int): ...get logic.... </code> When **not** passing in a parameter into the route, everything works correctly. For example: <code> @api.route("/statements") class StatementListEndpoint(Resource): @inject.autoparams() def __init__(self, logging_service: LoggingService, data_service: EMDataService): super().__init__() self.logging_service = logging_service self.data_service = data_service self.parser = reqparse.RequestParser() def get(self): ...code.... </code> ### **Expected Behavior** Expect a JSON response from endpoint ### **Actual Behavior** Endpoint is not found ### **Environment** - Python 3.6.7 - Flask 1.0.2 - Flask-RESTX 0.1.1 - Other installed Flask extensions ### **Additional Context** The endpoint does appear in the Swagger documentation when the application is running and the same issue occurs when inputting a valid integer into the parameter box ![image](https://user-images.githubusercontent.com/10284720/77120972-b4da3e00-69ff-11ea-9bcd-7c9edd392440.png)
1medium
Title: Code for simple for throws PydanticImportError Body: <!-- Thanks for reporting a bug ๐Ÿ™Œ โค๏ธ Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. Also, be sure to check our documentation first. --> **Code for Simple Form in README.md cannot be run** I was starting out with `streamlit-pydantic` and tried to run the code for the simple form given in the `README.md`, but I encountered an import error while running the simple example. ```python 2023-07-21 20:33:40.546 Uncaught app exception Traceback (most recent call last): File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script exec(code, module.__dict__) File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/simple_form.py", line 4, in <module> import streamlit_pydantic as sp File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/streamlit_pydantic/__init__.py", line 9, in <module> from .settings import StreamlitSettings File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/streamlit_pydantic/settings.py", line 4, in <module> from pydantic import BaseSettings File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/pydantic/__init__.py", line 207, in __getattr__ return _getattr_migration(attr_name) File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/pydantic/_migration.py", line 288, in wrapper raise PydanticImportError( pydantic.errors.PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.0.3/migration/#basesettings-has-moved-to-pydantic-settings for more details. For further information visit https://errors.pydantic.dev/2.0.3/u/import-error ``` The code I am trying to run is exactly the one given for the simple form - ```python import streamlit as st from pydantic import BaseModel import streamlit_pydantic as sp class ExampleModel(BaseModel): some_text: str some_number: int some_boolean: bool data = sp.pydantic_form(key="my_form", model=ExampleModel) if data: st.json(data.json()) ``` **Expected behaviour:** Rendering a simple form with `streamlit-pydantic`. **Steps to reproduce the issue:** 1. Create a `.venv` 2. Install required dependencies 3. Create the `simple_form.py` file using the above code or copying it from the `README.md` 4. `streamlit run simple_form.py` --> **Technical details:** - Host Machine OS (Windows/Linux/Mac): Mac - Browser (Chrome/Firefox/Safari): Arc/Mozilla - Python Version: 3.10.8 - streamlit-pydantic Version: 0.6.0 - streamlit: 1.24.1 Please let me know if any further details are required, and I will be happy to provide them. Thanks!
1medium
Title: Does tracking mode support NMS threshold? Body: ### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question I'm currently using YOLOv10 to track some objects and there are a lot of cases when two bounding boxes (of the same class) have a high IoU. I tried setting the NMS threshold ("iou" parameter) of the tracker very low but it doesn't change anything... I also tried setting a high NMS threshold (expecting a lot of overlapping BBs) but no matter what value I set, the predictions/tracking look the same. I tried to search about the parameters of the YOLOv10 tracker on the Ultralytics Docs and on Ultralytics GitHub but couldn't find anything about the NMS Threshold on the tracker. Is it implemented? Is the parameter name "iou" similar to the predict mode? Can someone help me in this regard? Thanks! ### Additional _No response_
1medium
Title: Card layouts break and overlap when in a container of a constrained size and expanded Body: <details> <summary>Software Version Info</summary> ```plaintext panel == 1.6.1 ``` </details> #### Description of expected behavior and the observed behavior I expect the cards to respect the overflow property of the container they are in and not overlap when expanded. **Example 1 overflow: auto in column container** https://github.com/user-attachments/assets/78a749ae-f82b-4836-9b04-85464a60210a **Example 2 no overflow specified** https://github.com/user-attachments/assets/d9d82ab0-a5c3-43f8-9925-07e996223b30 #### Complete, minimal, self-contained example code that reproduces the issue **Example 1 overflow: auto in column container** ```python import panel as pn card1 = pn.layout.Card(pn.pane.Markdown(""" Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. """), title="Card 1") card2 = pn.layout.Card(pn.pane.Markdown(""" In a world where technology and nature coexist, the balance between innovation and preservation becomes crucial. As we advance into the future, we must remember the lessons of the past, embracing sustainable practices that honor our planet. Together, we can forge a path that respects both progress and the environment, ensuring a brighter tomorrow for generations to come. """), title="Card 2") pn.Column(card1, card2, height=200, styles={'overflow': 'auto'}).servable() ``` **Example 2 no overflow specified** ```python import panel as pn card1 = pn.layout.Card(pn.pane.Markdown(""" Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. """), title="Card 1") card2 = pn.layout.Card(pn.pane.Markdown(""" In a world where technology and nature coexist, the balance between innovation and preservation becomes crucial. As we advance into the future, we must remember the lessons of the past, embracing sustainable practices that honor our planet. Together, we can forge a path that respects both progress and the environment, ensuring a brighter tomorrow for generations to come. """), title="Card 2") pn.Column(card1, card2, height=200).servable() ``` I think this stems from the recalculation of this style, but I'm not quite sure how to get around it: ![Image](https://github.com/user-attachments/assets/70e53e29-2737-45a3-83f7-7531e9c19817)
1medium
Title: bug: "include" returns deleted relationships also Body: I have a one to many relationship between Track and Session i.e. a Track can have multiple associated sessions but a session has only one associated Track. **TrackSchema** ```Python sessions = Relationship(attribute='sessions', self_view='v1.track_sessions', self_view_kwargs={'id': '<id>'}, related_view='v1.session_list', related_view_kwargs={'track_id': '<id>'}, schema='SessionSchema', many=True, type_='session') ``` **Track model** ```python sessions = db.relationship('Session', backref='track') ``` **SessionSchema** ```python track = Relationship(attribute='track', self_view='v1.session_track', self_view_kwargs={'id': '<id>'}, related_view='v1.track_detail', related_view_kwargs={'session_id': '<id>'}, schema='TrackSchema', type_='track') ``` **Session Model** ```python track_id = db.Column(db.Integer, db.ForeignKey('tracks.id', ondelete='CASCADE')) ``` When I try to include the sessions under a track using ```tracks/{{track_id}}?include=sessions```. It also returns the sessions which were deleted. Seems to be a bug in the library.
1medium
Title: load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") TypeError: 'NoneType' object is not callable Body: ### Describe the bug from datasets import load_dataset lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") or configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True) both error: Traceback (most recent call last): File "", line 1, in File "/workspace/miniconda/envs/grpo/lib/python3.10/site-packages/datasets/load.py", line 2131, in load_dataset builder_instance = load_dataset_builder( File "/workspace/miniconda/envs/grpo/lib/python3.10/site-packages/datasets/load.py", line 1888, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( TypeError: 'NoneType' object is not callable ### Steps to reproduce the bug from datasets import get_dataset_config_names configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True) OR lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") ### Expected behavior load datasets livecodebench/code_generation_lite ### Environment info import datasets version '3.3.2'
1medium
Title: 'deepface.commons.functions' has no attribute 'preprocess_face' Body: I'm trying to call DeepFace.stream() (library 0.0.78 installed from pip) but get an error message ``` AttributeError: module 'deepface.commons.functions' has no attribute 'preprocess_face' ```
1medium
Title: Support multithreaded profiling Body: Thanks for pyinstrument - it's incredibly useful. I needed to trace a multithreaded python app and examine the relationship between threads. Obviously in some cases multithreading can be a little interesting in python, but in this particular case it works well. I have extended a fork of pyinstrument to support showing all child threads from the one where profiling starts. I get nice results with all threads separated, which has been hugely helpful. I'll file a PR and reference this issue. It's still a bit of a WIP, but I'd be curious to see if it looks reasonable to you.
2hard
Title: MIME Type of the generated xlsx file Body: Title: Issue with MIME Type of the generated xlsx file Hello, I am using XlsxWriter to generate excel files (obviously), no problem on the generation part but when I check the MIME type of the generated file, it's not what it should be. I am using Python version 3.7 and XlsxWriter 1.1.5 and Excel version 16.23 (Excel for Mac). Here is the code to demonstrates the problem: ```python from xlsxwriter import Workbook wb = Workbook("created_with_xlsxwriter.xlsx") ws = wb.add_worksheet() ws.write(0, 0, "Hello") wb.close() ``` And when I check the MIME type of the file with the following command: ``` > file --mime-type -b created_with_xlsxwriter.xlsx application/zip ``` If I do the same command on a file created with excel: ``` > file --mime-type -b created_with_excel.xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet ``` The strangest thing is that if I open the generated file in Excel and do ```command + s``` on my Mac, and try the above command again, the MIME type become the good one. Same happens if I open the file with a PHP xlsx library and save it again.
1medium
Title: How do I patch chromedriver.exe with your package? Body: Hello, could you please guide me on how I can patch the chromedriver.exe file using your package and use that patched ChromeDriver file in a project that is not in Python? Thank you.
1medium
Title: [BUG] xdeepfm error in AzureML test Body: ### Description <!--- Describe your issue/bug/request in detail --> ``` @pytest.mark.gpu @pytest.mark.notebooks @pytest.mark.integration @pytest.mark.parametrize( "syn_epochs, criteo_epochs, expected_values, seed", [ ( 15, 10, { "res_syn": {"auc": 0.9716, "logloss": 0.699}, "res_real": {"auc": 0.749, "logloss": 0.4926}, }, 42, ) ], ) def test_xdeepfm_integration( notebooks, output_notebook, kernel_name, syn_epochs, criteo_epochs, expected_values, seed, ): notebook_path = notebooks["xdeepfm_quickstart"] pm.execute_notebook( notebook_path, output_notebook, kernel_name=kernel_name, parameters=dict( EPOCHS_FOR_SYNTHETIC_RUN=syn_epochs, EPOCHS_FOR_CRITEO_RUN=criteo_epochs, BATCH_SIZE_SYNTHETIC=1024, BATCH_SIZE_CRITEO=1024, RANDOM_SEED=seed, ), ) results = sb.read_notebook(output_notebook).scraps.dataframe.set_index("name")[ "data" ] for key, value in expected_values.items(): > assert results[key]["auc"] == pytest.approx(value["auc"], rel=TOL, abs=ABS_TOL) E assert 0.5131 == 0.9716 ± 9.7e-02 E comparison failed E Obtained: 0.5131 E Expected: 0.9716 ± 9.7e-02 ``` ### In which platform does it happen? <!--- Describe the platform where the issue is happening (use a list if needed) --> <!--- For example: --> <!--- * Azure Data Science Virtual Machine. --> <!--- * Azure Databricks. --> <!--- * Other platforms. --> ### How do we replicate the issue? <!--- Please be specific as possible (use a list if needed). --> <!--- For example: --> <!--- * Create a conda environment for pyspark --> <!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` --> <!--- * ... --> See https://github.com/microsoft/recommenders/actions/runs/3459763061/jobs/5775521889 ### Expected behavior (i.e. solution) <!--- For example: --> <!--- * The tests for SAR PySpark should pass successfully. --> ### Other Comments
2hard
Title: Populate Slack Welcome message Body: Context: Currently, we send the user a welcome message when they join the Ploomber Slack Workspace. We may consider providing the list of our current links. Example - Ideal After: <img width="768" alt="Screenshot 2023-01-02 at 9 26 32 PM" src="https://user-images.githubusercontent.com/9766828/210292932-b20a7a26-8591-4dd0-b1ce-95ff5fa4efd8.png"> Example - Current Implementation: <img width="901" alt="Screenshot 2023-01-02 at 9 30 49 PM" src="https://user-images.githubusercontent.com/9766828/210293134-f2b0e0d1-1948-4e2d-b494-fdca8440642a.png"> Action Item: - [ ] Discuss whether providing those links will help users find the resources more easily - What to include in the list? Currently we can have Ploomber.io, Github link, Doc link, Blog link - [ ] Modify the bot message to add the new section
1medium
Title: Unable to install: requirements.txt missing Body: ### Description Unable to install on Windows 10 ### Steps to Reproduce pip install Scrappy **Expected behavior:** [What you expect to happen] No errors **Actual behavior:** [What actually happens] Collecting Scrappy Using cached Scrappy-0.3.0.alpha.4.tar.gz (17 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting guessit (from Scrappy) Using cached guessit-3.7.1-py3-none-any.whl (170 kB) Collecting tvdb-api (from Scrappy) Using cached tvdb_api-3.1.0.tar.gz (23 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done INFO: pip is looking at multiple versions of scrappy to determine which version is compatible with other requirements. This could take a while. Collecting Scrappy Using cached Scrappy-0.3.0.alpha.3.tar.gz (16 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Using cached Scrappy-0.3.0.alpha.2.tar.gz (16 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Using cached Scrappy-0.3.0.alpha.tar.gz (16 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Using cached Scrappy-0.2.10.beta.14.tar.gz (16 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Using cached Scrappy-0.2.10.beta.13.tar.gz (15 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Using cached Scrappy-0.2.10.beta.12.tar.gz (15 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Using cached Scrappy-0.2.10.beta.11.tar.gz (15 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. 
│ exit code: 1 ╰─> [17 lines of output] Traceback (most recent call last): File "C:\Users\test\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module> main() File "C:\Users\test\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "C:\Users\test\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel return hook(config_settings) File "C:\Users\test\AppData\Local\Temp\pip-build-env-8hm_g8ev\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File "C:\Users\test\AppData\Local\Temp\pip-build-env-8hm_g8ev\overlay\Lib\site-packages\setuptools\build_meta.py", line 323, in _get_build_requires self.run_setup() File "C:\Users\test\AppData\Local\Temp\pip-build-env-8hm_g8ev\overlay\Lib\site-packages\setuptools\build_meta.py", line 487, in run_setup super(_BuildMetaLegacyBackend, File "C:\Users\test\AppData\Local\Temp\pip-build-env-8hm_g8ev\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in run_setup exec(code, locals()) File "<string>", line 4, in <module> FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. **Reproduces how often:** [What percentage of the time does it reproduce?]
1medium
Title: Incorrect model schema generation Body: Hi, I made a dynamic model serializer `UserCouponSerializer`, the fields of which vary according to `fields` argument passed when it initialized. If `fields` is None, it includes all fields of its model. The code is below. ``` class DynamicFieldsModelSerializer(serializers.ModelSerializer): def __init__(self, *args, **kwargs): fields = kwargs.pop('fields', None) super(DynamicFieldsModelSerializer, self).__init__(*args, **kwargs) if fields is not None: allowed = set(fields) existing = set(self.fields) for field_name in existing - allowed: self.fields.pop(field_name) ``` ``` class UserCouponSerializer(DynamicFieldsModelSerializer): id = serializers.CharField(max_length=20, read_only=True) class Meta: model = UserCoupon fields = '__all__' ``` After that, I made another serializer `UserCouponListSerializer`, which uses `UserCouponSerializer` for its own fields(active_coupon_list, inactive_coupon_list). That two fields only needs partial fields from `UserCouponSerializer`, and I specified `fields` argument as seen below. ``` class UserCouponListSerializer(serializers.Serializer): active_coupon_list = serializers.ListField( child=UserCouponSerializer( fields=['affiliate', 'goods_name', 'period_end', 'status', 'id', 'pay_id'] ) ) inactive_coupon_list = serializers.ListField( child=UserCouponSerializer( fields=['affiliate', 'goods_name', 'period_end', 'status', 'id', 'pay_id'] ) ) ``` However, the problem is, when I rendered API documentation, everything generated from `UserCouponSerializer` includes only the fields specified in `UserCouponListSerializer`, that is, 'affiliate', 'goods_name', 'period_end', 'status', 'id', and 'pay_id'. ![image](https://user-images.githubusercontent.com/24687378/66375448-5f2d4e80-e9e8-11e9-8b4b-cde401fa9e16.png) I expected all fields of `UserCoupon` fields are rendered as I defined `UserCouponSerializer`. Can I know the cause and get some help about that?
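A possible workaround (a sketch, not from the original report): most DRF schema generators build one component per serializer class, so per-instance `fields=` filtering from the dynamic mixin ends up shared across every use of `UserCouponSerializer`. Declaring a small concrete subclass for the reduced shape keeps the full schema and the reduced schema separate. The class names below are hypothetical and build on the classes defined in the report above.

```python
# Sketch of a possible workaround (class names are hypothetical): give the
# reduced field set its own serializer class so the schema generator emits a
# separate component for it instead of reusing the filtered instance.
from rest_framework import serializers


class UserCouponSummarySerializer(UserCouponSerializer):
    class Meta(UserCouponSerializer.Meta):
        fields = ['affiliate', 'goods_name', 'period_end', 'status', 'id', 'pay_id']


class UserCouponListSerializer(serializers.Serializer):
    # many=True plays the same role as ListField(child=...) but is easier
    # for schema generators to introspect.
    active_coupon_list = UserCouponSummarySerializer(many=True)
    inactive_coupon_list = UserCouponSummarySerializer(many=True)
```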
2hard
Title: Redesign SQLAlchemy dialect layout Body: https://github.com/sqlalchemy/sqlalchemy/blob/main/README.dialects.rst
2hard
Title: Django Admin not loading css Body: **Server Info (please complete the following information):** - OS: Debian 11 - Browser: Chrome - RMM Version (as shown in top left of web UI): v0.15.12 **Installation Method:** - [X] Standard - [ ] Docker **Describe the bug** The Django Admin page doesn't load assets (css and js) **To Reproduce** Steps to reproduce the behavior: 1. Enable Django Admin Interface 2. Restart rmm 3. Go to Django Admin page **Expected behavior** The Django Admin Interface loading with the css and js. **Screenshots** ![image](https://github.com/amidaware/tacticalrmm/assets/93825819/10d10d22-7c08-4a6b-be57-84c4e4ce8452)
1medium
Title: Gradio Block() can't detect imported library like numpy in jupyter notebook Body: ### Describe the bug Exact issue described here is still valid but I cannot reopen this ticket https://github.com/gradio-app/gradio/issues/3625 Gradio fails to pull imports on reload ### Have you searched existing issues? ๐Ÿ”Ž - [x] I have searched and found no existing issues ### Reproduction ```python import gradio as gr %%blocks # Anything in under gr.NO_RELOAD won't be reloaded when the block is re-run (afaik) if gr.NO_RELOAD: import numpy as np from transformers import pipeline transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en") def transcribe(stream, new_chunk): sr, y = new_chunk # Convert to mono if stereo if y.ndim > 1: y = y.mean(axis=1) y = y.astype(np.float32) y /= np.max(np.abs(y)) if stream is not None: stream = np.concatenate([stream, y]) else: stream = y return stream, transcriber({"sampling_rate": sr, "raw": stream})["text"] waveform_options = gr.WaveformOptions( waveform_color="#01C6FF", waveform_progress_color="#0066B4", skip_length=2, show_controls=False, ) with gr.Blocks() as demo: with gr.Row(): with gr.Column(): state = gr.State() audio = gr.Audio( sources=["microphone", "upload"], show_download_button=True, waveform_options=waveform_options, streaming=True, ) with gr.Row(): clear_btn = gr.ClearButton() submit_btn = gr.Button("Submit", variant="primary") output = gr.Textbox(label="Output") submit_btn.click( fn=transcribe, inputs=[state, audio], outputs=[state, output], api_name="transcribe") gr.on( triggers=[audio.stream], fn=transcribe, inputs=[state, audio], outputs=[state, output], ) ``` ### Screenshot _No response_ ### Logs ```shell Traceback (most recent call last): File "/opt/conda/lib/python3.12/site-packages/gradio/queueing.py", line 625, in process_events response = await route_utils.call_process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.12/site-packages/gradio/route_utils.py", line 322, in call_process_api output = await app.get_blocks().process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.12/site-packages/gradio/blocks.py", line 2098, in process_api result = await self.call_function( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.12/site-packages/gradio/blocks.py", line 1645, in call_function prediction = await anyio.to_thread.run_sync( # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync return await get_async_backend().run_sync_in_worker_thread( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread return await future ^^^^^^^^^^^^ File "/opt/conda/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 962, in run result = context.run(func, *args) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.12/site-packages/gradio/utils.py", line 883, in wrapper response = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "<string>", line 16, in transcribe NameError: name 'np' is not defined ``` ### System Info ```shell Gradio Environment Information: ------------------------------ Operating System: Linux gradio version: 5.16.0 gradio_client version: 1.7.0 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.8.0 audioop-lts is not installed. fastapi: 0.115.8 ffmpy: 0.5.0 gradio-client==1.7.0 is not installed. 
httpx: 0.28.1 huggingface-hub: 0.28.1 jinja2: 3.1.5 markupsafe: 2.1.5 numpy: 2.1.3 orjson: 3.10.15 packaging: 24.2 pandas: 2.2.3 pillow: 11.1.0 pydantic: 2.10.6 pydub: 0.25.1 python-multipart: 0.0.20 pyyaml: 6.0.2 ruff: 0.9.6 safehttpx: 0.1.6 semantic-version: 2.10.0 starlette: 0.45.3 tomlkit: 0.13.2 typer: 0.15.1 typing-extensions: 4.12.2 urllib3: 2.3.0 uvicorn: 0.34.0 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2024.9.0 httpx: 0.28.1 huggingface-hub: 0.28.1 packaging: 24.2 typing-extensions: 4.12.2 websockets: 14.2 ``` ### Severity Blocking usage of gradio
1medium
Title: [BUG] Beanie projection and Pydantic Schema do not play well together Body: **Describe the bug** Beanie projections expect an "_id" field, but Pydantic schemas expect "id". This makes it impossible to use the same schema and forces duplicated code (unless I'm missing the proper method to do it). **To Reproduce** ```python from motor.motor_asyncio import AsyncIOMotorClient from pydantic import BaseModel, Field from beanie import Document, init_beanie, PydanticObjectId class Author(Document, BaseModel): name: str class AuthorRead(BaseModel): id: PydanticObjectId = Field(alias="id") name: str class AuthorProjection(BaseModel): # note the underscore id: PydanticObjectId = Field(alias="_id") name: str async def example(): client = AsyncIOMotorClient("mongodb://localhost:27017") await init_beanie(database=client.db_name, document_models=[Author]) dict = { "name": "Joe" } joe = Author(**dict) await joe.insert() # created object contains "id" print(AuthorRead(**joe.dict())) # Beanie get, also give us an 'id' field, so AuthorRead expect id too # (get() method does not have a project() method) result = await Author.get(joe.id) print(AuthorRead(**joe.dict())) # projection is expecting "_id", not "id" # we cannot use the same Schema! result = await Author.find_all().project(AuthorProjection).to_list() print(result) await example() ``` **Expected behavior** A way to use the same schema for projections, like mapping _id to id during projection
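A possible single-schema sketch (an assumption on my part, based on pydantic v1's alias rules rather than anything in the report): alias the field to "_id" and enable population by field name, so the same model accepts both the raw Mongo key and plain "id".

```python
# Sketch: one schema that accepts both the "_id" key produced by a Beanie
# projection and the "id" field name produced by Document.dict().
from pydantic import BaseModel, Field
from beanie import PydanticObjectId


class AuthorRead(BaseModel):
    id: PydanticObjectId = Field(alias="_id")
    name: str

    class Config:
        allow_population_by_field_name = True  # pydantic v1 setting


# AuthorRead(**{"_id": some_object_id, "name": "Joe"})   # works via the alias
# AuthorRead(**{"id": some_object_id, "name": "Joe"})    # works via the field name
# await Author.find_all().project(AuthorRead).to_list()  # projection fills "_id"
```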
1medium
Title: Empty report when running GPTR on Docker with Windows Body: **Describe the bug** I'm running gpt-researcher with Docker. When I try to use Deep Researcher, it runs with no problems, but the output that it produces is empty. So, it provides a doc file that is empty. **To Reproduce** I have not made any changes, just used the default settings. **Expected behavior** I would expect the provided document to contain text that answers the raised questions. **Desktop (please complete the following information):** - OS: Windows - Browser: Chrome - Version: 10
1medium
Title: Non-deterministic results based on group_max_seq_len in NaViT Body: I'm having trouble understanding what the various parameters do, even after reading the source code. Specifically, I'm wondering what group_max_seq_len does, and why it has non-deterministic results? For example: ``` v = NaViT(patch_size=60, **vit_args) # these are extremely large images v(image_list, group_images = True, group_max_seq_len=1315) tensor([[ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039], [ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039], [ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039], ..., [ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039], [ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039], [ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039]]) v(image_list, group_images = True, group_max_seq_len=229) tensor([[ 0.2724, -0.8302, 0.4734, ..., 0.7219, 0.6409, 0.4224], [ 0.5486, -0.4530, 0.3360, ..., 0.5885, 0.5462, 0.6067], [ 0.2724, -0.8302, 0.4734, ..., 0.7219, 0.6409, 0.4224], ..., [ 0.4754, -0.4497, 0.3625, ..., 0.6052, 0.6225, 0.5106], [ 0.2724, -0.8302, 0.4734, ..., 0.7219, 0.6409, 0.4224], [ 0.4645, -0.4711, 0.3736, ..., 0.6147, 0.6285, 0.5033]]) ``` For larger maximum sequence lengths, all the images have identical outputs. My problem is, I want deterministic results, therefore I want a constant max sequence length regardless of how the images are batched (kind of the whole reason I want to use NaViT). However, if I pick the maximum of the whole dataset, then you have the above (1315) result where every single image has identical logits. If you can clarify how I decide on this parameter I would really appreciate it.
2hard
Title: Random 'Segmentation fault (core dumped)' error when training for long spancat Body: Hi, I am getting 'Segmentation fault (core dumped)' when trying to train model for long SpanCat. I know this error could be related to OOM issues but this does not seem the case here. I tried to reduce [nlp] batch_size and [training.batcher.size] as shown in the attached config file and used a VM with very large RAM to make sure we are not running out of memory. During training the VM memory usage never goes above 40% and even when reducing the [components.spancat.suggester] min_size and max_size the memory usage does not exceed 20% but the training exits with error 'Segmentation fault (core dumped)'. Note: when training with low [components.spancat.suggester] values the training completes but with all zeroes for F, P and R. His is the command I am using for training: python -m spacy train config_spn.cfg --output ./output_v3_lg_1.3 --paths.train ./spacy_models_v3/train_data.spacy --paths.dev ./spacy_models_v3/test_data.spacy --code functions.py -V This is the training output: [2023-09-28 09:25:08,461] [DEBUG] Config overrides from CLI: ['paths.train', 'paths.dev'] โ„น Saving to output directory: output_v3_lg_1.3 โ„น Using CPU =========================== Initializing pipeline =========================== [2023-09-28 09:25:08,610] [INFO] Set up nlp object from config [2023-09-28 09:25:08,618] [DEBUG] Loading corpus from path: spacy_models_v3/test_data.spacy [2023-09-28 09:25:08,618] [DEBUG] Loading corpus from path: spacy_models_v3/train_data.spacy [2023-09-28 09:25:08,619] [INFO] Pipeline: ['tok2vec', 'spancat'] [2023-09-28 09:25:08,621] [INFO] Created vocabulary [2023-09-28 09:25:09,450] [INFO] Added vectors: en_core_web_lg [2023-09-28 09:25:09,450] [INFO] Finished initializing nlp object [2023-09-28 09:25:16,150] [INFO] Initialized pipeline components: ['tok2vec', 'spancat'] โœ” Initialized pipeline ============================= Training pipeline ============================= [2023-09-28 09:25:16,158] [DEBUG] Loading corpus from path: spacy_models_v3/test_data.spacy [2023-09-28 09:25:16,159] [DEBUG] Loading corpus from path: spacy_models_v3/train_data.spacy โ„น Pipeline: ['tok2vec', 'spancat'] โ„น Initial learn rate: 0.001 E # LOSS TOK2VEC LOSS SPANCAT SPANS_SC_F SPANS_SC_P SPANS_SC_R SCORE --- ------ ------------ ------------ ---------- ---------- ---------- ------ 0 0 98109.47 19535.08 0.00 0.00 4.58 0.00 0 200 528.73 781.51 0.00 0.00 3.75 0.00 Segmentation fault (core dumped) Environment: Operating System: Ubuntu 20.04.6 LTS Python Version Used: 3.8.10 spaCy Version Used: 3.6.0 [config_spn.cfg.txt](https://github.com/explosion/spaCy/files/12748569/config_spn.cfg.txt) Thanks in advance!
2hard
Title: Please install TA-Lib to use 2crows. (pip install TA-Lib) message ... Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.** ```python import pandas_ta as ta print(ta.version) ``` 0.3.14b0 **Do you have _TA Lib_ also installed in your environment?** ```sh $ pip list ``` yes ![image](https://user-images.githubusercontent.com/63505156/152672264-f1c475fd-8b8f-4005-bf26-21c236425e35.png) **Did you upgrade? Did the upgrade resolve the issue?** ```sh $ pip install -U git+https://github.com/twopirllc/pandas-ta ``` Yes.. i have installed talib using the updated version only as mentioned in the readme... **Describe the bug** [A clear and concise description of what the bug is.] i am simply trying to import ticker data from yfinance and using df.ta.strategy(ta.AllStrategy) to build all the indicators. However, i am getting messages saying: **Please install TA-Lib to use 2crows. (pip install TA-Lib)** first 7 lines of the output of df.ta.strategy(ta.AllStrategy) **0it [00:00, ?it/s][X] Please install TA-Lib to use 2crows. (pip install TA-Lib) [X] Please install TA-Lib to use 3blackcrows. (pip install TA-Lib) [X] Please install TA-Lib to use 3inside. (pip install TA-Lib) [X] Please install TA-Lib to use 3linestrike. (pip install TA-Lib) [X] Please install TA-Lib to use 3outside. (pip install TA-Lib) [X] Please install TA-Lib to use 3starsinsouth. (pip install TA-Lib) [X] Please install TA-Lib to use 3whitesoldiers. (pip install TA-Lib)** **To Reproduce** Provide sample code. Code: (https://colab.research.google.com/drive/1eGviU_45HrZLDj_hSHLvXkTvw2gln-vd?usp=sharing) **Expected behavior** A clear and concise description of what you expected to happen. My expectation is what is written in the readme that "Runs and appends all indicators to the current DataFrame by default" **Screenshots** If applicable, add screenshots to help explain your problem. ![image](https://user-images.githubusercontent.com/63505156/152672066-087b1fc5-a040-4fe3-88e7-ed4d1f9c9c68.png) ![image](https://user-images.githubusercontent.com/63505156/152672086-63279f2d-a2d5-404a-be2a-33600d5eb62c.png) **Additional context** Add any other context about the problem here. I want to generate all the indicators at one go.. Also, I am seeing many indicators are having Null. Not sure whether it is because of the installation issue itself mentioned previously.. ![image](https://user-images.githubusercontent.com/63505156/152672178-90ef5b8e-f3d7-4c55-a473-e2346bce2a3e.png) Thanks for using Pandas TA!
1medium
Title: CockroachDB + SQLAlchemy trouble Body: <!-- Thank you for reporting an issue/feature request. If this is a feature request, please disregard this template. If this is a bug report, please answer to the questions below. It will be much easier for us to fix the issue if a test case that reproduces the problem is provided, with clear instructions on how to run it. Thank you! --> * **asyncpg version**: 0.23.0 * **PostgreSQL version**: cockroachdb/cockroach:latest * **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce the issue with a local PostgreSQL install?**: N/A * **Python version**: 3.8.10 * **Platform**: Arch Linux * **Do you use pgbouncer?**: No * **Did you install asyncpg with pip?**: Yes * **If you built asyncpg locally, which version of Cython did you use?**: N/A * **Can the issue be reproduced under both asyncio and [uvloop](https://github.com/magicstack/uvloop)?**: Yes <!-- Enter your issue details below this comment. --> Disclaimer, I'm not entirely sure whether the actual bug is in this repository or in SQLAlchemy. That said, when trying to use `sqlalchemy` with the `asyncpg` driver to connect to CockroachDB, [this function](https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/asyncpg.py#L1015-L1039) causes some problems. The [`json` block](https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/asyncpg.py#L1026-L1032) causes `asyncpg` to emit a `ValueError: unknown type: pg_catalog.json`, and, when commented out, the [`jsonb` block](https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/asyncpg.py#L1033-L1039) emits `asyncpg.exceptions._base.InterfaceError: cannot use custom codec on non-scalar type pg_catalog.jsonb`. I wrote [this patch](https://github.com/SoftwareSheriff/sqlalchemy/commit/c9a386c0a6b401838a87b678cf97543b8b0e263c) for SQLAlchemy which successfully works around these two problems, but I feel like it probably breaks something else and also that there's just no way that this right way to fix this.
1medium
Title: TypeError: cannot unpack non-iterable NoneType object Body: ## generating video: 1 => ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/final-1.mp4 2024-04-12 16:01:33 | INFO | "./app/services/video.py:183": generate_video - start, video size: 1080 x 1920 2024-04-12 16:01:33 | INFO | "./app/services/video.py:184": generate_video - ① video: ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/combined-1.mp4 2024-04-12 16:01:33 | INFO | "./app/services/video.py:185": generate_video - ② audio: ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/audio.mp3 2024-04-12 16:01:33 | INFO | "./app/services/video.py:186": generate_video - ③ subtitle: ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/subtitle.srt 2024-04-12 16:01:33 | INFO | "./app/services/video.py:187": generate_video - ④ output: ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/final-1.mp4 2024-04-12 16:01:33 | INFO | "./app/services/video.py:202": generate_video - using font: ./resource/fonts/STHeitiLight.ttc 2024-04-12 16:01:34.052 Uncaught app exception Traceback (most recent call last): File "/opt/anaconda3/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script exec(code, module.__dict__) File "/Users/xunuoqing/Downloads/MoneyPrinterTurbo-main/webui/Main.py", line 432, in <module> result = tm.start(task_id=task_id, params=params) File "/Users/xunuoqing/Downloads/MoneyPrinterTurbo-main/app/services/task.py", line 155, in start video.generate_video(video_path=combined_video_path, File "/Users/xunuoqing/Downloads/MoneyPrinterTurbo-main/app/services/video.py", line 238, in generate_video sub = SubtitlesClip(subtitles=subtitle_path, encoding='utf-8') File "/opt/anaconda3/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/tools/subtitles.py", line 69, in __init__ self.duration = max([tb for ((ta, tb), txt) in self.subtitles]) File "/opt/anaconda3/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/tools/subtitles.py", line 69, in <listcomp> self.duration = max([tb for ((ta, tb), txt) in self.subtitles]) TypeError: cannot unpack non-iterable NoneType object [audio.mp3.json](https://github.com/harry0703/MoneyPrinterTurbo/files/14956044/audio.mp3.json) [script.json](https://github.com/harry0703/MoneyPrinterTurbo/files/14956045/script.json) [subtitle.srt.json](https://github.com/harry0703/MoneyPrinterTurbo/files/14956046/subtitle.srt.json)
1medium
Title: Pack rule operator exists / nexists should not have criteria pattern mandatory Body: ## SUMMARY When using exists / nexists rule operator, we need to provide a criteria pattern which is not correct, as the value of this pattern needs to be null / should not be there. ### STACKSTORM VERSION st2 3.7.0, on Python 3.6.8 ##### OS, environment, install method Running on Docker in Mackbook Pro (x86) ## Steps to reproduce the problem Use exists / nexists rule operator in criteria section. Omit the value of criteria pattern (as it is not needed for this operator). ``` criteria: trigger.org_id: type: exists ``` ## Expected Results The pack should get installed correctly as all information is available. ## Actual Results Pack installation fails and we need to add a pattern field unnecessarily. ``` criteria: trigger.org_id: type: exists pattern: "placeholder" ``` Making sure to follow these steps will guarantee the quickest resolution possible. Thanks!
1medium
Title: Continuous Driver download problem Body: Every time I run it, it downloads the driver as shown in the image. Does it have to do this every time? <img width="1017" alt="image" src="https://github.com/seleniumbase/SeleniumBase/assets/88004617/99ee1baa-440f-41be-a930-fe71a8f2d1c8">
1medium
Title: Access to application Body: **Describe the bug** A clear and concise description of what the bug is. Please include timestamps and HTTP status codes. If possible include the [httpie](https://httpie.org/) or `curl` request and response. Please include the verbose flag. `-v` **To Reproduce** `httpie/curl` request to reproduce the behavior: 1. Getting Italy data at `v2/locations/IT` gives a 422. 2. Expected to same data as `/v2/locations?country_code=IT` 2. See httpie request & response below **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots or Requests** If applicable, add screenshots or `httpie/curl`requests to help explain your problem. ```sh $ http GET https://coronavirus-tracker-api.herokuapp.com/v2/locations/IT -v GET /v2/locations/IT HTTP/1.1 Accept: */* Accept-Encoding: gzip, deflate Connection: keep-alive Host: coronavirus-tracker-api.herokuapp.com User-Agent: HTTPie/2.0.0 HTTP/1.1 422 Unprocessable Entity Connection: keep-alive Content-Length: 99 Content-Type: application/json Date: Sat, 18 Apr 2020 12:50:29 GMT Server: uvicorn Via: 1.1 vegur { "detail": [ { "loc": [ "path", "id" ], "msg": "value is not a valid integer", "type": "type_error.integer" } ] } ``` **Additional context** Add any other context about the problem here. Does the other instance at https://covid-tracker-us.herokuapp.com/ produce the same result?
1medium
Title: Configure OAuthClient with OpenID Discovery configuration Body: https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata
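For context, the provider metadata in that spec lives at a fixed well-known path; a rough sketch of the discovery step (the surrounding OAuth client wiring is hypothetical):

```python
# Rough sketch: fetch the OpenID Provider metadata and read the endpoints the
# client needs. Field names come from the discovery spec; the issuer URL is a placeholder.
import requests


def discover(issuer: str) -> dict:
    url = issuer.rstrip("/") + "/.well-known/openid-configuration"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()


metadata = discover("https://accounts.example.com")
authorization_endpoint = metadata["authorization_endpoint"]
token_endpoint = metadata["token_endpoint"]
jwks_uri = metadata["jwks_uri"]
```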
1medium
Title: On the clang-format check of the project Body: I noticed that Horovod uses [clang-format](https://clang.llvm.org/docs/ClangFormat.html) to format C++ code. But when I check the project with clang-format-12, I still get many errors. The command I used is below: ```bash #!/usr/bin/env bash for src in $(find ./horovod -name "*.h" -or -name "*.cc") do clang-format-12 -style=file ${src} done ``` The output is attached as [hvd_cf12.txt](https://github.com/horovod/horovod/files/8220236/hvd_cf12.txt). May I know whether I used the wrong clang-format version, or whether the command is not right?
1medium
Title: Possible to pass an already-JSONified string through without re-escaping? Body: I am working with pandas objects that have a very fast to_json() method to serialise pandas.Series. I need to serialize a dict with some elements which are pandas objects, e.g.: `{ "data": pandas.Series(...) }` I would like to be able to transform the dict into `{ "data": SerializedJSON("[1,3,4, ...]") }` with ``` class SerializedJSON(str): """An object of type SerializedJSON is a string that will be injected as such in the final JSON string""" pass ``` so that orjson would not escape this string again. Does this make sense? Would it be useful to add this SerializedJSON?
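If I remember the API correctly, newer orjson 3.x releases already ship something close to this under the name `orjson.Fragment`, which embeds an already-serialized str/bytes verbatim instead of escaping it; a sketch:

```python
# Sketch using orjson.Fragment (check your orjson version; treat the exact
# behaviour as an assumption): the to_json() output is embedded as-is.
import orjson
import pandas as pd

series = pd.Series([1, 3, 4])
payload = {"data": orjson.Fragment(series.to_json())}
print(orjson.dumps(payload).decode())  # {"data":{"0":1,"1":3,"2":4}}
```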
1medium
Title: How to change a response timezone based on a query parameter? Body: Hi, I want to know how to change a response timezone based on a query parameter, like https://example.com?timezone=utc, https://example.com?timezone=Asia/Tokyo, or https://example.com?timezone=Asia/Kolkata. I think the answer of https://github.com/vitalik/django-ninja/issues/786 is related to this problem, but I can't understand how to change the timezone dynamically. ``` import datetime from ninja import NinjaAPI from ninja.renderers import JSONRenderer from ninja.responses import NinjaJSONEncoder class MyJsonEncoder(NinjaJSONEncoder): def default(self, o): if isinstance(o, datetime.datetime): ############################### #####how to set timezone dynamically? ################################## return o.astimezone().isoformat() return super().default(o) class MyJsonRenderer(JSONRenderer): encoder_class = MyJsonEncoder api = NinjaAPI(renderer=MyJsonRenderer()) ```
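One way to wire this up (my own sketch, not a django-ninja built-in) is to stash the requested zone in a contextvar from the operation and read it inside the custom encoder:

```python
# Sketch: per-request timezone via a contextvar. The parameter name "timezone"
# and the zoneinfo lookup are assumptions; error handling for bad zone names is omitted.
import datetime
from contextvars import ContextVar
from zoneinfo import ZoneInfo

from ninja import NinjaAPI
from ninja.renderers import JSONRenderer
from ninja.responses import NinjaJSONEncoder

current_tz: ContextVar[str] = ContextVar("current_tz", default="UTC")


class MyJsonEncoder(NinjaJSONEncoder):
    def default(self, o):
        if isinstance(o, datetime.datetime):
            return o.astimezone(ZoneInfo(current_tz.get())).isoformat()
        return super().default(o)


class MyJsonRenderer(JSONRenderer):
    encoder_class = MyJsonEncoder


api = NinjaAPI(renderer=MyJsonRenderer())


@api.get("/now")
def now(request, timezone: str = "UTC"):
    current_tz.set(timezone)  # e.g. ?timezone=Asia/Tokyo
    return {"now": datetime.datetime.now(datetime.timezone.utc)}
```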
1medium
Title: Resolve incorrect handling of configuration overrides in pydantic Body: https://github.com/pydantic/pydantic-settings/issues/180 - we probably want to make the sources in [`_settings_build_values`](https://github.com/pydantic/pydantic-settings/blob/8c5a45e43cca4e88a6d65fcb280529499fc6200a/pydantic_settings/main.py#L146) convert the keys to field names from aliases in case `populate_by_name` is set. Note that we'll need to wait for upstream to release this
1medium
Title: Feature request: Support client side keepalives Body: <!-- Enter your issue details below this comment. --> I have been using version 0.20.1. I was able to configure server side TCP keepalives via the server_settings dict object, but I can't seem to find the place where I can set client side keepalives from either a connection pool or a connection object. Does it make sense for an async library like asyncpg to support something like this? Thanks and Regards, Keith
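As far as I know that asyncpg version has no public knob for this, so the sketch below leans on a private attribute (`conn._transport`) purely to illustrate the idea; treat it as fragile and version-dependent.

```python
# Fragile sketch, relies on a private attribute (conn._transport) and may break
# across asyncpg versions. Shown only to illustrate setting TCP keepalive on the
# client socket after connecting; the tuning values are arbitrary examples.
import socket
import asyncpg


async def connect_with_keepalive(dsn: str) -> asyncpg.Connection:
    conn = await asyncpg.connect(dsn)
    sock = conn._transport.get_extra_info("socket")  # private API, assumption
    if sock is not None:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        # Linux-specific knobs, guarded because they do not exist on every platform.
        if hasattr(socket, "TCP_KEEPIDLE"):
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
    return conn
```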
1medium
Title: python custom ops tutorial stopped working in PyTorch 2.7 RC1 Body: Get PyTorch 2.7 RC1. Repro in next comment. Error looks like: ```py Traceback (most recent call last): File "/home/rzou/dev/2.7/pco.py", line 124, in <module> cropped_img = f(img) ^^^^^^ File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/rzou/dev/2.7/pco.py", line 120, in f @torch.compile(fullgraph=True) File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward return compiled_fn(full_args) ^^^^^^^^^^^^^^^^^^^^^^ File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 328, in runtime_wrapper all_outs = call_func_at_runtime_with_args( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in cal l_func_at_runtime_with_args out = normalize_as_list(f(args)) ^^^^^^^ File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 689, in inner_fn outs = compiled_fn(args) ^^^^^^^^^^^^^^^^^ File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 495, in wrapper return compiled_fn(runtime_args) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_inductor/output_code.py", line 460, in __call__ return self.current_callable(inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tmp/torchinductor_rzou/oy/coy5shd4xlyzvhkrwtaiad5zxz7jhd654636vqhwxsyeux5q27d7.py", line 42, in call assert_size_stride(buf1, (3, 40, 40), (1600, 40, 1)) AssertionError: expected size 3==3, stride 1==1600 at dim=0; expected size 40==40, stride 120==40 at dim=1; expected s ize 40==40, stride 3==1 at dim=2 This error most often comes from a incorrect fake (aka meta) kernel for a custom op. Use torch.library.opcheck to test your custom op. See https://pytorch.org/docs/stable/library.html#torch.library.opcheck ``` cc @ezyang @gchanan @kadeng @msaroufim @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @chauhang @aakhundov
2hard
Title: Error Body: ### Expected Behavior Error ### Actual Behavior Trimap did not contain background values ### Steps to Reproduce Trimap did not contain background values ### Debug Logs ```powershell Trimap did not contain background values ``` ### Other _No response_
1medium
Title: TypeError raised from hidden_tag() on Jinja 3.0.0rc1 Body: # Requirements ``` click==8.0.0rc1 Flask==2.0.0rc1 Flask-WTF==0.14.3 itsdangerous==2.0.0rc2 Jinja2==3.0.0rc1 MarkupSafe==2.0.0rc2 Werkzeug==2.0.0rc4 WTForms==2.3.3 ``` # Example ``` import os from flask import Flask, render_template_string from flask_wtf import FlaskForm from wtforms import StringField from wtforms.validators import DataRequired from flask_wtf.csrf import CSRFProtect class MyForm(FlaskForm): name = StringField('name', validators=[DataRequired()]) app = Flask(__name__) app.config['SECRET_KEY'] = os.urandom(24).hex() csrf = CSRFProtect(app) @app.route('/') def hello_world(): form = MyForm() return render_template_string(''' <form method="POST" action="/"> {{ form.hidden_tag() }} {{ form.name.label }} {{ form.name(size=20) }} <input type="submit" value="Go"> </form> ''', form=form) if __name__ == '__main__': app.run() ``` # Traceback ``` Traceback (most recent call last): File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1971, in __call__ return self.wsgi_app(environ, start_response) File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1956, in wsgi_app response = self.handle_exception(e) File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1953, in wsgi_app response = self.full_dispatch_request() File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1454, in full_dispatch_request rv = self.handle_user_exception(e) File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1452, in full_dispatch_request rv = self.dispatch_request() File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1438, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/.../app.py", line 21, in hello_world return render_template_string(''' File "/.../pre-venv/lib/python3.9/site-packages/flask/templating.py", line 145, in render_template_string return _render(ctx.app.jinja_env.from_string(source), context, ctx.app) File "/.../pre-venv/lib/python3.9/site-packages/flask/templating.py", line 110, in _render rv = template.render(context) File "/.../pre-venv/lib/python3.9/site-packages/jinja2/environment.py", line 1127, in render self.environment.handle_exception() File "/.../pre-venv/lib/python3.9/site-packages/jinja2/environment.py", line 814, in handle_exception raise rewrite_traceback_stack(source=source) File "<template>", line 3, in top-level template code File "/.../pre-venv/lib/python3.9/site-packages/flask_wtf/form.py", line 133, in hidden_tag return Markup( File "/.../pre-venv/lib/python3.9/site-packages/jinja2/utils.py", line 843, in __init__ super().__init__(*args, **kwargs) TypeError: object.__init__() takes exactly one argument (the instance to initialize) ``` # Notes Using `form.csrf_token` in the place of `hidden_tag()` in the template raises no exception. Also my assumption is that `hidden_tag()` does not require the existence of additional hidden form elements as I believe this works as expected on Flask 1.1.2. Thank you for the consideration in preparation for Flask 2.0
1medium
Title: pipe.disable_model_cpu_offload Body: **Is your feature request related to a problem? Please describe.** If I enable the following in Gradio interface sana_pipe.enable_model_cpu_offload() and during next generation I want to disable cpu offload, how to do it? I mentioned Gradio specifically as command line inference will not have this problem unless after initializing pipe you generate multiple times with and without cpu offload. I already searched but nothing found https://github.com/search?q=repo%3Ahuggingface%2Fdiffusers%20disable_model_cpu_offload&type=code **Describe the solution you'd like.** Add method to disable for 1. enable_model_cpu_offload() 2. enable_sequential_cpu_offload() **Describe alternatives you've considered.** I will have to delete the pipe completely and load again for each inference in Gradio UI Kindly suggest if any alternative solution. ``` import torch from diffusers import SanaPipeline pipe = SanaPipeline.from_pretrained( "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.float32 ) pipe.to("cuda") pipe.text_encoder.to(torch.bfloat16) pipe.transformer = pipe.transformer.to(torch.bfloat16) pipe.enable_model_cpu_offload() image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana"')[0] image[0].save("output.png") pipe.disable_model_cpu_offload() image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana 1"')[0] image[0].save("output1.png") ``` P.S. How to delete a pipe completely so all models are removed completely and GPU memory is freed I did checked documentation but unable to find find anything relevant https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py https://github.com/huggingface/diffusers/blob/4e44534845d35248436abf87688906f52e71b868/src/diffusers/pipelines/pipeline_utils.py
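Two partial answers, both hedged. For the P.S. (fully reclaiming GPU memory), the usual pattern is delete plus garbage collection plus `torch.cuda.empty_cache()`. For undoing offload, I am not aware of a documented `disable_model_cpu_offload`; offload is implemented with accelerate hooks, and newer diffusers builds appear to expose `remove_all_hooks()`, so the sketch guards on it.

```python
# Sketch, with assumptions noted inline.
import gc
import torch


def free_pipeline(pipe):
    # The caller must drop its own references to the pipe as well,
    # otherwise the weights stay alive and no VRAM is released.
    del pipe
    gc.collect()
    torch.cuda.empty_cache()


def undo_cpu_offload(pipe, device="cuda"):
    # Assumption: remove_all_hooks() exists on newer DiffusionPipeline builds
    # and strips the accelerate offload hooks installed by enable_model_cpu_offload().
    if hasattr(pipe, "remove_all_hooks"):
        pipe.remove_all_hooks()
    pipe.to(device)  # move the weights back onto the GPU for plain inference
    return pipe
```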
1medium
Title: Postgresql Problem Body: Hello, I am trying to connect PostgreSQL to my fastapi-admin project. Is this possible? I'm getting an error about a timeout: File "/usr/local/lib/python3.9/asyncio/tasks.py", line 492, in wait_for raise exceptions.TimeoutError() from exc asyncio.exceptions.TimeoutError On the other hand, I would like to connect the project with a custom PostgreSQL DB. Is there any way to do it using my own tables? ![image](https://user-images.githubusercontent.com/98756380/178712576-ff894acc-764d-4e3d-8e12-3ff68764ca22.png)
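For what it's worth, fastapi-admin sits on Tortoise ORM, so pointing it at PostgreSQL is mostly a Tortoise configuration question; a sketch with placeholder credentials follows (asyncpg must be installed), and existing tables can usually be mapped by setting `table = "..."` in a model's Meta.

```python
# Sketch: Tortoise ORM wired to PostgreSQL for a FastAPI app. Credentials,
# module paths, and the `app` object are placeholders.
from tortoise.contrib.fastapi import register_tortoise

TORTOISE_ORM = {
    "connections": {"default": "postgres://user:password@localhost:5432/mydb"},
    "apps": {
        "models": {
            "models": ["app.models"],  # your own model modules
            "default_connection": "default",
        }
    },
}

# `app` is your FastAPI() instance.
register_tortoise(app, config=TORTOISE_ORM, generate_schemas=False)
```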
1medium
Title: Missing Validation Set Body: Is there a validation set used for choosing the best model before testing the accuracy on the test set? From what I see from the code, the model with Best Rank 1 is chosen based on test set result. Won't this mean that the Best Rank 1 result is overfitting on the test set?
1medium
Title: sample inference code for llama sft 7 model Body: Hello, can you share a sample Python inference code snippet for the llama sft 7 model that shows how to do inference with the llama model? I know from the code that the <assistant> and <prompter> special tokens are used so the model learns the difference between user and assistant prompts. But I want to know, once the user enters a prompt/message, what the inference pipeline is: which EOS tokens are used, which stop sequences are used, and where and how to use the special tokens to frame user and assistant responses. I was looking through the inference code, but it is quite complex to understand for a new user.
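A rough sketch of how inference for these chat-SFT checkpoints is usually wired with transformers. The exact token layout (`<|prompter|>`, `<|assistant|>`, the EOS separator) and the model id are written from memory and should be checked against the model card before trusting them.

```python
# Rough sketch, from memory. Verify the special-token layout against the model card.
# Assumed format: <|prompter|>{user text}{eos}<|assistant|>, generation stops at eos.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OpenAssistant/oasst-sft-7-llama-30b"  # placeholder id, check the exact repo
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

prompt = f"<|prompter|>What is a transformer?{tok.eos_token}<|assistant|>"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, top_p=0.95, temperature=0.7
)
# Decode only the newly generated tokens after the prompt.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```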
1medium
Title: model.load(..., weights_only=True) does not load batch normalization weights Body: When I'm using model.load(..., weights_only=True), the weights corresponding to BN layers are not loaded. That's why, when I use model.predict, the output of the classifier is completely different and incorrect. How can I load the weights of conv layers as well as BN layers, but not the input size and other optimization parameters? Thanks
1medium
Title: Allow applying patterns to single values Body: Context: plotly.express.timeline Looking at https://github.com/plotly/plotly.py/blob/216fca2a7ed14d2e58f5423557b6e731e2a0f575/packages/python/plotly/plotly/express/_core.py#L973 (and neighboring lines which perform analogous checks), there is no way to set `pattern_shape_sequence` to `['']` and set patterns for individual values within `pattern_shape_map`. All I want is to make one specific thing stand out without changing its color. …unless there is another way to achieve that that I'd missed.
1medium
Title: Support for LocalAI Body: Hey :wave: LocalAI (https://github.com/mudler/LocalAI) author here - nice project! **Is your feature request related to a problem? Please describe.** I'd like to run this locally with LocalAI - only Ollama seems to be supported. **Describe the solution you'd like** LocalAI provides a drop-in API compatible with OpenAI - the only requirement is to be able to specify a base API url endpoint for the client to hit. If Scrapegraph could let the user specify a base url for OpenAI would be enough to make it compatible with LocalAI **Describe alternatives you've considered** N/A **Additional context** n/a
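For reference, the client side of that request is just the OpenAI SDK pointed at a different base URL; whatever the scraper does internally, the shape would be something like this (port and model name depend on the LocalAI setup):

```python
# Sketch of the client side only: the OpenAI python SDK pointed at a LocalAI
# endpoint. Port and model name must match what your LocalAI instance exposes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="gpt-4",  # must match a model name configured in LocalAI
    messages=[{"role": "user", "content": "Summarise this page: ..."}],
)
print(resp.choices[0].message.content)
```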
1medium
Title: Darkflow - YOLOv1 - Loss function Body: Hi, Guys Can someone explain how the loss function is working? Because from what I can tell it's being called cli.py builds the model, creates a framework and does a single pass to build the network. Then Train is called and calls the loss function and the training begins. Is this correct chain of events? Also flow.py contains: Please see the comment below. ``` def train(self): loss_ph = self.framework.placeholders loss_mva = None; profile = list() batches = self.framework.shuffle() # Function Pointer to loss function loss_op = self.framework.loss print("LOSS FUNCTION POINTER: ",loss_op) for i, (x_batch, datum) in enumerate(batches): if not i: self.say(train_stats.format( self.FLAGS.lr, self.FLAGS.batch, self.FLAGS.epoch, self.FLAGS.save )) feed_dict = { loss_ph[key]: datum[key] for key in loss_ph } feed_dict[self.inp] = x_batch feed_dict.update(self.feed) fetches = [self.train_op, loss_op] if self.FLAGS.summary: fetches.append(self.summary_op) fetched = self.sess.run(fetches, feed_dict) print(fetched) loss = fetched[1] print(loss) # WHAT DOES THIS DO BELOW?????????? if loss_mva is None: loss_mva = loss loss_mva = .9 * loss_mva + .1 * loss step_now = self.FLAGS.load + i + 1 if self.FLAGS.summary: self.writer.add_summary(fetched[2], step_now) form = 'step {} - loss {} - moving ave loss {}' self.say(form.format(step_now, loss, loss_mva)) profile += [(loss, loss_mva)] ckpt = (i+1) % (self.FLAGS.save // self.FLAGS.batch) args = [step_now, profile] if not ckpt: _save_ckpt(self, *args) if ckpt: _save_ckpt(self, *args) ```
1medium
Title: Fix failing builds: https://travis-ci.org/github/man-group/arctic/jobs/762598165 Body: On a quick glance, some failing tests seem to be returning None, both VersionStore and ChunkStore. Need to take a proper look at this, as it's blocking quite a few PRs
2hard
Title: A question about loss backward Body: Thank you for sharing the well organized code. I learned a lot from the code. Now I would like to ask you a question about loss.backward. For the classification problem, the final output of the model is a classification vector, such as 1 x M vector, your code is also targeted at the classification task, for a certain category, one of the categories corresponding to the classification score can be propagated backward. My question is, if the output of the model is not a 1 x M vector, but a matrix like N x M, and I want to take one of N and propagate backwards, how do I do that? Your code: 1 x M -> value I want: N x M -> a vector Looking forward to your reply.
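A short sketch of the two standard options: reduce the selected row to a scalar, or hand `backward()` an explicit gradient vector for that row.

```python
# Sketch: back-propagating from one row of an N x M output.
import torch

x = torch.randn(4, 8, requires_grad=True)
w = torch.randn(8, 5, requires_grad=True)
out = x @ w            # shape (N=4, M=5)

row = out[2]           # the single 1 x M output we care about

# Option 1: reduce the row to a scalar (sum, mean, or a weighted sum) and call backward.
row.sum().backward(retain_graph=True)

# Option 2: keep it as a vector and supply the gradient explicitly.
# Here a one-hot picks out a single element, but any 1 x M weighting works.
grad = torch.zeros_like(row)
grad[0] = 1.0
row.backward(gradient=grad)
```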
1medium
Title: djcelery error. Body: Request Method: GET | URL: https://csds.nkhdkj.com/admin/djcelery/periodictask/ | Version: 2.1.8 | TypeError: Object of type '__proxy__' is not JSON serializable
1medium
Title: Visual glitches with thick line(>1) on version 0.12.4 Body: <!-- In the following, please describe your issue in detail! --> <!-- If some of the sections do not apply, just remove them. --> ### Short description When setting line width thicker than 1, zooming in causes some line to fill up a large area https://user-images.githubusercontent.com/66480156/156883464-b96a4b43-101e-46aa-afed-2ff300aae082.mp4 ### Code to reproduce That light green line was created this way ```python import pyqtgraph as pyqtgraph plot_item = pyqtgraph.PlotItem() line = plot_item.plot( pen=pyqtgraph.mkPen("#E2F200", width=2.5), connect="finite", ), ``` ### Expected behavior Zooming in normally ### Real behavior Light green line fills up a large space. No error occured. ### Tested environment(s) * PyQtGraph version: 0.12.4 * Qt Python binding: Pyside6 6.2.3 Qt 6.2.3 * Python version: 3.10.2 * NumPy version: 1.22.2 * Operating system: Windows 11 * Installation method: pip ### Additional context None
1medium
Title: [BUG] File not found in autotuner cache in multi-node setting on SLURM Body: **Describe the bug** I am training an LLM using DeepSpeed and 12 nodes a 8 V100s per node. My training is generally working well (thanks DeepSpeed), but when I run multiple training runs in parallel, I run into trouble. I am getting these kinds of errors ``` Traceback (most recent call last): File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 473, in matmul_ext_update_autotune_table fp16_matmul._update_autotune_table() fp16_matmul._update_autotune_table() File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel) File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel) File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table cache_manager.put(autotune_table) File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put cache_manager.put(autotune_table) os.rename(self.file_path + ".tmp", self.file_path) FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle' File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put os.rename(self.file_path + ".tmp", self.file_path) FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/.cache/Fp16Matmul_2d_kernel.pickle' ``` I thought that this is because maybe the directories are shared between the multiple runs, which can create race conditions. My `TMPDIR`, `TRITON_CACHE_DIR`, and `TORCH_EXTENSIONS_DIR` are set as follows ``` export TMPDIR=$HOME/scratch/.cache export TRITON_CACHE_DIR=$HOME/scratch/.cache export TORCH_EXTENSIONS_DIR=$HOME/scratch/.cache/torch-extensions ``` To fix this, I tried to allocate one cache folder per run like so ``` export TMPDIR=$HOME/scratch/.cache export TRITON_CACHE_DIR=$HOME/scratch/$SLURM_JOBID/.cache export TORCH_EXTENSIONS_DIR=$HOME/scratch/$SLURM_JOBID/.cache/torch-extensions mkdir -p $TRITON_CACHE_DIR mkdir -p $TORCH_EXTENSIONS_DIR ``` but that also didn't work. 
Now I am getting this error ``` Traceback (most recent call last): File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 473, in matmul_ext_update_autotune_table fp16_matmul._update_autotune_table() fp16_matmul._update_autotune_table() File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel) File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel) File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table cache_manager.put(autotune_table) File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put cache_manager.put(autotune_table) os.rename(self.file_path + ".tmp", self.file_path) FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle' File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put os.rename(self.file_path + ".tmp", self.file_path) FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle' ``` **ds_report output** ``` [2024-06-12 03:08:15,154] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2024-06-12 03:08:15,765] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect) [WARNING] async_io requires the dev libaio .so object and headers but these were not found. [WARNING] async_io: please install the libaio-devel package with yum [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found. [WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH No ROCm runtime is found, using ROCM_HOME='/opt/rocm-4.3.0' [WARNING] NVIDIA Inference is only supported on Ampere and newer architectures [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3 [WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible -------------------------------------------------- DeepSpeed C++/CUDA extension op report -------------------------------------------------- NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op. -------------------------------------------------- JIT compiled ops requires ninja ninja .................. [OKAY] -------------------------------------------------- op name ................ installed .. compatible -------------------------------------------------- [WARNING] async_io requires the dev libaio .so object and headers but these were not found. 
[WARNING] async_io: please install the libaio-devel package with yum [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found. async_io ............... [NO] ....... [NO] fused_adam ............. [NO] ....... [OKAY] cpu_adam ............... [NO] ....... [OKAY] cpu_adagrad ............ [NO] ....... [OKAY] cpu_lion ............... [NO] ....... [OKAY] [WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH evoformer_attn ......... [NO] ....... [NO] [WARNING] NVIDIA Inference is only supported on Ampere and newer architectures fp_quantizer ........... [NO] ....... [NO] fused_lamb ............. [NO] ....... [OKAY] fused_lion ............. [NO] ....... [OKAY] inference_core_ops ..... [NO] ....... [OKAY] cutlass_ops ............ [NO] ....... [OKAY] transformer_inference .. [NO] ....... [OKAY] quantizer .............. [NO] ....... [OKAY] ragged_device_ops ...... [NO] ....... [OKAY] ragged_ops ............. [NO] ....... [OKAY] random_ltd ............. [NO] ....... [OKAY] [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3 [WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible sparse_attn ............ [NO] ....... [NO] spatial_inference ...... [NO] ....... [OKAY] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] -------------------------------------------------- DeepSpeed general environment info: torch install path ............... ['/gpfs/u/home/ANFM/ANFMbchl/scratch/miniconda3/envs/torch-nightly/lib/python3.10/site-packages/torch'] torch version .................... 2.3.0+cu121 deepspeed install path ........... ['/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed'] deepspeed info ................... 0.14.3+488a823, 488a823, master torch cuda version ............... 12.1 torch hip version ................ None nvcc version ..................... 12.1 deepspeed wheel compiled w. ...... torch 2.4, cuda 12.1 shared memory (/dev/shm) size .... 377.69 GB ```
2hard
Title: [Feature request] Using pre-extracted SE files (se.pt) Body: Can I use XTTS with pre-extracted SE files (se.pt)? I want to apply them to do a voice clone.
1medium
Title: [Question] How could I hide the "Finding best initial lr" message from pytorch_lightning when using Darts' Torch Forecasting Models? Body: I am unable to hide the `Finding best initial lr` message when calling the [`lr_find` method](https://github.com/unit8co/darts/blob/c3a611236690f0704ced6078982adf20b0a33886/darts/models/forecasting/torch_forecasting_model.py#L1111) associated with Darts' Torch `Forecasting Models`, such as `BlockRNNModel`: ![image](https://github.com/unit8co/darts/assets/469246/1e9a8603-72ec-44ef-b31c-0be04251b94d) Based on my understanding, this message is generated by `pytorch-lightning`. In particular, by the [`on_train_batch_start` method](https://github.com/Lightning-AI/pytorch-lightning/blob/58ad56afece3ea7faec2f1b7f68d90195f316d78/src/lightning/pytorch/tuner/lr_finder.py#L384) from the [`_LRCallback` class](https://github.com/Lightning-AI/pytorch-lightning/blob/58ad56afece3ea7faec2f1b7f68d90195f316d78/src/lightning/pytorch/tuner/lr_finder.py#L349). At least in this specific case :thinking: I have tried the following: 1. Including `verbose=False` when calling the `lr_find` method. 2. Passing a `TQDMProgressBar(refresh_rate=0)` instance through the `callbacks` list in the `pl_trainer_kwargs` dict passed to the `BlockRNNModel` constructor. 3. Including an `"enable_progress_bar": False` in the `pl_trainer_kwargs` dict passed to the `BlockRNNModel` constructor. So far, no luck :disappointed: I don't know if I have misunderstood something or I am missing some critical bit of information :grimacing: Could you help me solve this issue? :pray: Any feedback would be much appreciated :relaxed: *** I am using: - python 3.11.0 - darts 0.28.0 - pytorch_lightning 1.9.5 Let me know if there are any missing but relevant and clarifying details I should mention.
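One blunt workaround (an assumption on my part: the message appears to come from the tuner's internal progress bar writing to stdout/stderr, so muting both streams around the call hides it):

```python
# Workaround sketch: silence stdout and stderr only for the duration of lr_find.
import contextlib
import io


def quiet_lr_find(model, *args, **kwargs):
    sink = io.StringIO()
    with contextlib.redirect_stdout(sink), contextlib.redirect_stderr(sink):
        return model.lr_find(*args, **kwargs)
```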
1medium
Title: The DataFrame serialisation is slower than in v1 Body: Using python pandas. Version 1 i used this: ```python def dbpop_influx(data, dbname, measurement, columns): ## constants: dbclient = DataFrameClient(host='localhost', port=8086, username='root', password='root', database=dbname) n_import_chunks = math.ceil(len(data) / 10000) data_chunks = np.array_split(data, n_import_chunks) for d in data_chunks: dbclient.write_points(d, measurement, tag_columns = columns, protocol = 'line') ``` Takes 29 seconds (was looking to improve that speed with multiprocessing) Version 2 i used this: ```python _client = InfluxDBClient(url="http://localhost:9999", token=token, org="org") _write_client = _client.write_api(write_options=WriteOptions(batch_size=10000, flush_interval=10_000, jitter_interval=0, retry_interval=5_000)) start = time.time() _write_client.write('data', record=imp_dat[0], data_frame_measurement_name='coinmarketcap_ohlcv', data_frame_tag_columns=['quote_asset','base_asset']) print(time.time() - start) ``` this takes 118 seconds... data looks like: ![image](https://user-images.githubusercontent.com/32384270/81547780-3a30fd80-9374-11ea-9df4-f0d030fb08c9.png) @bednar
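Until the DataFrame serializer itself gets faster, one mitigation that mirrors the v1 code is to hand the client smaller chunks; also note that the batching writer flushes asynchronously, so any timing should include closing or flushing the write API. A sketch:

```python
# Sketch: chunk the DataFrame the same way the v1 code did, so each
# serialization call works on a smaller frame. Chunk size is a guess to tune.
import math
import numpy as np


def write_df_chunked(write_client, bucket, df, measurement, tags, chunk_rows=10_000):
    n_chunks = max(1, math.ceil(len(df) / chunk_rows))
    for chunk in np.array_split(df, n_chunks):
        write_client.write(
            bucket,
            record=chunk,
            data_frame_measurement_name=measurement,
            data_frame_tag_columns=tags,
        )
```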
1medium
Title: Civit AI flux model razor-8step-rapid-real not working with diffusers single file Body: ### Describe the bug We have this civit AI model: https://civitai.com/models/849864/razor-8step-rapid-real which we want to run using `from_single_file`, but it errors out ### Reproduction 1) First create your CivitAI API key by logging into civit ai and navigating to https://civitai.com/user/account Then go to "API Keys" section in the bottom and create your key. 2) Run the following command on terminal: `wget --show-progress -O model.safetensors "https://api.civitai.com/download/models/950841?token=YOUR_TOKEN"` 3) Try the code: ``` import torch from diffusers import FluxPipeline #wget --show-progress -O model.safetensors "https://api.civitai.com/download/models/950841?token=" pipe = FluxPipeline.from_single_file( "model.safetensors", torch_dtype=torch.bfloat16, ) pipe.to("cuda") prompt = "A cat holding a sign that says hello world" image = pipe( prompt, height=1024, width=1024, guidance_scale=3.5, num_inference_steps=50, max_sequence_length=512, generator=torch.Generator("cuda").manual_seed(0) ).images[0] image.save("flux.png") ``` ### Logs ```shell (3.7) user@c6dbd33b-904f-4d4e-bc4e-f68f78a80315:~/runware/Ali/sd-base-api$ python ali.py Fetching 16 files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 16/16 [00:00<00:00, 24745.16it/s] Loading pipeline components...: 57%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ– | 4/7 [00:16<00:12, 4.16s/it]You set `add_prefix_space`. 
The tokenizer needs to be converted from the slow tokenizers Loading pipeline components...: 71%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ– | 5/7 [00:16<00:06, 3.37s/it] Traceback (most recent call last): File "/home/user/runware/Ali/sd-base-api/ali.py", line 7, in <module> pipe = FluxPipeline.from_single_file( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/diffusers/loaders/single_file.py", line 509, in from_single_file loaded_sub_model = load_single_file_sub_model( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/diffusers/loaders/single_file.py", line 104, in load_single_file_sub_model loaded_sub_model = load_method( ^^^^^^^^^^^^ File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/diffusers/loaders/single_file_model.py", line 343, in from_single_file diffusers_format_checkpoint = checkpoint_mapping_fn( ^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/diffusers/loaders/single_file_utils.py", line 2255, in convert_flux_transformer_checkpoint_to_diffusers q, k, v, mlp = torch.split(checkpoint.pop(f"single_blocks.{i}.linear1.weight"), split_size, dim=0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/torch/functional.py", line 207, in split return tensor.split(split_size_or_sections, dim) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/torch/_tensor.py", line 983, in split return torch._VF.split_with_sizes(self, split_size, dim) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: split_with_sizes expects split_sizes to sum exactly to 33030144 (input tensor's size at dimension 0), but got split_sizes=[3072, 3072, 3072, 12288] ``` ### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - ๐Ÿค— Diffusers version: 0.33.0.dev0 - Platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35 - Running on Google Colab?: No - Python version: 3.12.9 - PyTorch version (GPU?): 2.5.1+cu124 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.27.1 - Transformers version: 4.47.1 - Accelerate version: 1.2.1 - PEFT version: 0.14.0 - Bitsandbytes version: not installed - Safetensors version: 0.5.0 - xFormers version: not installed - Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sayakpaul
2hard
Title: Current version of chromedriver breaks pinned version of selenium Body: Running the tests on my Ubuntu 20.4 server against tag `5d` resulted in ``` test_admin_home_page (test_selenium.SeleniumTestCase) ... skipped 'Web browser not available' ``` So I hacked `tests/test_selenium.py` to add code to re-raise the exception caught in `SeleniumTestCase.setUpClass()` which got me ``` ERROR: setUpClass (test_selenium.SeleniumTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/bkline/repos/flasky/tests/test_selenium.py", line 19, in setUpClass cls.client = webdriver.Chrome(chrome_options=options) File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/chrome/webdriver.py", line 65, in __init__ RemoteWebDriver.__init__( File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 98, in __init__ self.start_session(desired_capabilities, browser_profile) File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 188, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 256, in execute self.error_handler.check_response(response) File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: invalid argument: unrecognized capability: chromeOptions ``` A little more digging found some [advice](https://stackoverflow.com/questions/57995521/selenium-common-exceptions-webdriverexception-message-invalid-argument-unreco) which suggested upgrading `selenium` to accommodate changes to the behavior of the launcher in `chromedriver`. Sure enough, running `pip install -U selenium` got the selenium test back in the game. So it would seem that the requirements documents need to be updated to bump the version of `selenium`. I'm at 3.141.0 which is what PyPI is currently serving up. I haven't done the testing to determine the minimum version needed to avoid the breakage I ran into, but I'm not seeing any adverse side effects from this version. ```bash (venv) $ grep selenium requirements/dev.txt selenium==3.4.3 (venv) $ pip freeze | grep selenium selenium==3.141.0 (venv) $ dpkg -l | grep chromium ii chromium-browser 1:85.0.4183.83-0ubuntu0.20.04.2 amd64 Transitional package - chromium-browser -> chromium snap ii chromium-chromedriver 1:85.0.4183.83-0ubuntu0.20.04.2 amd64 Transitional package - chromium-chromedriver -> chromium snap ```
1medium
Title: [BUG] For an nrms model, run_fast_eval does not return the correct prediction scores Body: ### Description I believe that for the nrms model, the output of `run_fast_eval` is incorrect. (See code [here](https://github.com/microsoft/recommenders/blob/98d661edc6a9965c7f42b76dc5317af3ae74d5e0/recommenders/models/newsrec/models/base_model.py#L399).) ### In which platform does it happen? Jupyter Lab running in Ubuntu 20.04.4 LTS (Focal Fossa). Using Python version 3.8.10 with tensorflow version 2.8.0. ### How do we replicate the issue? Run the following code and verify that the outputs are different (I believe they should be the same):
```
group_impr_indexes, group_labels, group_preds = model.run_slow_eval(valid_news_file, valid_behaviors_file)
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
```
Also, notice that the group_preds output of `run_slow_eval` is in the range [0,1] (as expected) while the output group_preds of `run_fast_eval` is not. ### Expected behavior (i.e. solution) As stated in the [NRMS paper](https://wuch15.github.io/paper/EMNLP2019-NRMS.pdf) (equation 11), the output should be click probabilities (in the range [0,1]). I think this can be fixed by adding a sigmoid function after the computation of the dot product on [line 416](https://github.com/microsoft/recommenders/blob/98d661edc6a9965c7f42b76dc5317af3ae74d5e0/recommenders/models/newsrec/models/base_model.py#L416). ### Other Comments
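To make the proposed fix concrete, a small self-contained illustration of the intended change — the variable names are mine, not the exact ones used in `base_model.py`:
```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

news_vec = np.random.rand(10, 400)   # candidate news vectors for one impression
user_vec = np.random.rand(400)       # user vector for the same impression

scores = np.dot(news_vec, user_vec)  # raw inner-product scores (unbounded), as returned today
probs = expit(scores)                # click probabilities in [0, 1], matching run_slow_eval
```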
1medium
Title: Troubles regarding removing the progress bar after loop Body: Hi, I want to use tqdm in a nested loop, and it would help if the progress bar for the inner loop could be removed after the inner loop finishes. I tried passing the parameter "leave=False" as the docs say. It did remove the progress bar at the end, but left a blank line, which means the next progress bar starts on a new line. ![image](https://github.com/tqdm/tqdm/assets/53689465/31f654c4-7592-4f2b-af55-2aaa7779433d) The minimal example is listed below.
```
from tqdm.auto import tqdm, trange
import time

for i in trange(4, leave=True):
    for j in trange(5, total=5, leave=False):
        time.sleep(0.5)
```
I wonder whether it may relate to my working environment (2 setups):
1. JupyterLab 3.4.4, RHEL 8.8 (Linux), jupyterlab 3.4.4, python 3.10.11, tqdm 4.65.0, ipywidgets 7.6.5
2. PyCharm using jupyter notebook 6.5.2, python 3.10.12, tqdm 4.65.0, ipywidgets 8.0.4
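One workaround that may avoid the leftover line is to create the inner bar once and `reset()` it on each outer iteration, so nothing has to be torn down between iterations — a sketch; I have not verified it in JupyterLab specifically:
```python
from tqdm.auto import tqdm, trange
import time

inner = tqdm(total=5, leave=False)
for i in trange(4):
    inner.reset()          # reuse the same bar instead of recreating it
    for j in range(5):
        time.sleep(0.5)
        inner.update(1)
inner.close()
```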
1medium
Title: Generate lists of pages based on tag metadata Body: It would be useful if users could attach **tags** to certain pages, and then use these tags to generate lists of pages that fall under that tag. This is a common thing to do in blogging platforms, and may be useful here as well. [ABlog kind-of supports this](https://ablog.readthedocs.io/en/latest/), but it uses a directive rather than page-level metadata. Ideally, people would be able to include metadata at the top of their pages like:
```
tags: tag1, tag2
```
and then include a list of pages with those tags via something like:
````
# To generate a list of pages that have `tag1` in their metadata
```{tag-list} tag1
```
````
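To sketch how this could be wired up as a Sphinx extension — purely illustrative: the `tag-list` name and the `tags` metadata key are assumptions, and a real implementation would emit proper cross-references rather than plain titles:
```python
from docutils import nodes
from sphinx.util.docutils import SphinxDirective


class TagList(SphinxDirective):
    """List every document whose file-level metadata contains the given tag."""

    required_arguments = 1

    def run(self):
        wanted = self.arguments[0]
        items = []
        for docname, meta in self.env.metadata.items():
            tags = [t.strip() for t in str(meta.get("tags", "")).split(",")]
            if wanted in tags:
                title = (self.env.titles[docname].astext()
                         if docname in self.env.titles else docname)
                items.append(nodes.list_item("", nodes.paragraph(text=title)))
        return [nodes.bullet_list("", *items)]


def setup(app):
    app.add_directive("tag-list", TagList)
    return {"parallel_read_safe": True}
```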
1medium
Title: Not able to index when using List[str] in custom document class Body: System: `macOS 13.3.1` Python version: `3.11.0` IPython version: `8.10.0` In the latest docarray version, when building hnsw index from the following simple document class: ```python from docarray import BaseDoc from docarray.index import HnswDocumentIndex class MyDoc(BaseDoc): test: List[str] index = HnswDocumentIndex[MyDoc]() ``` The following error will pop up: ``` File ~/miniconda3/envs/test/lib/python3.11/site-packages/docarray/index/backends/hnswlib.py:76, in HnswDocumentIndex.__init__(self, db_config, **kwargs) 73 if db_config is not None and getattr(db_config, 'index_name'): 74 db_config.work_dir = db_config.index_name.replace("__", "/") ---> 76 super().__init__(db_config=db_config, **kwargs) 77 self._db_config = cast(HnswDocumentIndex.DBConfig, self._db_config) 78 self._work_dir = self._db_config.work_dir File ~/miniconda3/envs/test/lib/python3.11/site-packages/docarray/index/abstract.py:111, in BaseDocIndex.__init__(self, db_config, subindex, **kwargs) 109 self._runtime_config = self.RuntimeConfig() 110 self._logger.info('Runtime config created') --> 111 self._column_infos: Dict[str, _ColumnInfo] = self._create_column_infos( 112 self._schema 113 ) 114 self._is_subindex = subindex 115 self._subindices: Dict[str, BaseDocIndex] = {} File ~/miniconda3/envs/test/lib/python3.11/site-packages/docarray/index/abstract.py:880, in BaseDocIndex._create_column_infos(self, schema) 873 """Collects information about every column that is implied by a given schema. 874 875 :param schema: The schema (subclass of BaseDoc) to analyze and parse 876 columns from 877 :returns: A dictionary mapping from column names to column information. 878 """ 879 column_infos: Dict[str, _ColumnInfo] = dict() --> 880 for field_name, type_, field_ in self._flatten_schema(schema): 881 # Union types are handle in _flatten_schema 882 if issubclass(type_, AnyDocArray): 883 column_infos[field_name] = _ColumnInfo( 884 docarray_type=type_, db_type=None, config=dict(), n_dim=None 885 ) File ~/miniconda3/envs/test/lib/python3.11/site-packages/docarray/index/abstract.py:860, in BaseDocIndex._flatten_schema(cls, schema, name_prefix) 856 else: 857 raise ValueError( 858 f'Union type {t_} is not supported. Only Union of subclasses of AbstractTensor or Union[type, None] are supported.' 859 ) --> 860 elif issubclass(t_, BaseDoc): 861 names_types_fields.extend( 862 cls._flatten_schema(t_, name_prefix=inner_prefix) 863 ) 864 elif issubclass(t_, AbstractTensor): File <frozen abc>:123, in __subclasscheck__(cls, subclass) TypeError: issubclass() arg 1 must be a class ``` This also happens in other typings like Sequence, Iterable and Tuple.
2hard
Title: Saving Early Stopping Patience Value in last.pt Checkpoint Body: ### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hello, I have a question regarding the checkpointing mechanism in YOLOv5, specifically related to saving and resuming the training process. When training a YOLOv5 model, the last.pt checkpoint saves the model's weights and optimizer state. However, it appears that training-process parameters, such as the early-stopping patience state, are not included in this checkpoint. **If my training is interrupted and I restart from the last.pt checkpoint, does the patience counter reset to zero, or does it continue from the previously recorded value?** ### Additional _No response_
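To make the question concrete, this is roughly what I imagine "saving the patience state" would look like — illustrative only: these keys are not what YOLOv5 writes today, and the stand-in class is hypothetical, not the real `EarlyStopping` implementation.
```python
import torch

class EarlyStopState:
    """Stand-in for the early-stopping bookkeeping (names are hypothetical)."""
    def __init__(self, patience=100):
        self.patience = patience
        self.best_epoch = 0
        self.best_fitness = 0.0

stopper = EarlyStopState(patience=100)
stopper.best_epoch, stopper.best_fitness = 37, 0.61

# Saving: include the stopper bookkeeping alongside whatever else goes into last.pt.
torch.save({"epoch": 42,
            "stopper_best_epoch": stopper.best_epoch,
            "stopper_best_fitness": stopper.best_fitness}, "last_with_stopper.pt")

# Resuming: restore it so the patience countdown continues from epoch 37, not from 0.
ckpt = torch.load("last_with_stopper.pt", map_location="cpu")
stopper.best_epoch = ckpt.get("stopper_best_epoch", 0)
stopper.best_fitness = ckpt.get("stopper_best_fitness", 0.0)
```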
1medium
Title: How to use pandarallel_apply with multiple arguments? Body: I have code like the below:
```
def similarity(txt1, txt2):
    return xxxxxxx

vSimilarity = np.vectorize(similarity)
vSimilarity(df['var1'], df['var2'])
```
How do I convert it to use parallel_apply? The following does not work: `(df['var1'], df['var2']).parallel_apply(similarity)`
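For reference, here is the row-wise form I would expect to work — a sketch, assuming `parallel_apply` with `axis=1` mirrors pandas' `DataFrame.apply`; the toy similarity function is a placeholder:
```python
import pandas as pd
from pandarallel import pandarallel

pandarallel.initialize()

def similarity(txt1, txt2):
    return float(txt1 == txt2)  # placeholder for the real similarity function

df = pd.DataFrame({"var1": ["a", "b"], "var2": ["a", "c"]})

# Row-wise apply: each row is passed to the lambda, which unpacks the two columns.
df["sim"] = df.parallel_apply(lambda row: similarity(row["var1"], row["var2"]), axis=1)
```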
1medium
Title: Alpine Linux Body: Have you considered using Alpine Linux?
3misc
Title: Connection Error When Using By-pass Proxies Body: ### Describe the bug I'm currently using Clash for Windows as my proxy tunnel, after exporting HTTP_PROXY and HTTPS_PROXY to the port that clash provides๐Ÿค”, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))" I have already read the documentation provided on the hugginface, but I think I didn't see the detailed instruction on how to set up proxies for this library. ### Steps to reproduce the bug 1. Turn on any proxy software like Clash / ShadosocksR etc. 2. export system varibles to the port provided by your proxy software in wsl (It's ok for other applications to use proxy expect dataset-library) 3. load any dataset from hugginface online ### Expected behavior --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) Cell In[33], [line 3](vscode-notebook-cell:?execution_count=33&line=3) [1](vscode-notebook-cell:?execution_count=33&line=1) from datasets import load_metric ----> [3](vscode-notebook-cell:?execution_count=33&line=3) metric = load_metric("seqeval") File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs) [44](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:44) warnings.warn(warning_msg, category=FutureWarning, stacklevel=2) [45](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:45) _emitted_deprecation_warnings.add(func_hash) ---> [46](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46) return deprecated_function(*args, **kwargs) File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs) [2101](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2101) warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning) [2103](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2103) download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS) -> [2104](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2104) metric_module = 
metric_module_factory( [2105](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2105) path, [2106](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2106) revision=revision, [2107](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2107) download_config=download_config, [2108](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2108) download_mode=download_mode, [2109](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2109) trust_remote_code=trust_remote_code, [2110](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2110) ).module_path [2111](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2111) metric_cls = import_main_class(metric_module, dataset=False) [2112](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2112) metric = metric_cls( [2113](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2113) config_name=config_name, [2114](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2114) process_id=process_id, ... 
--> [633](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:633) raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") [634](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:634) elif response is not None: [635](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:635) raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))"))) ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
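For reference, the workaround I would try is passing the proxy explicitly through a `DownloadConfig` — a sketch only: port 7890 is just Clash's common default and is an assumption, and I have not verified this path end to end.
```python
from datasets import load_metric, DownloadConfig

proxies = {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890",
}
download_config = DownloadConfig(proxies=proxies)

# load_metric accepts download_config per the signature shown in the traceback above.
metric = load_metric("seqeval", download_config=download_config)
```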
1medium
Title: AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'LocallyConnected1D' Body: AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'LocallyConnected1D' ![Image](https://github.com/user-attachments/assets/d8f3e1e3-efaa-4b87-9197-a7df446ac37f)
2hard
Title: After resolving python circular import error, docs not working. Body: Using odmantic 0.3.5 with this model: person.py ``` from __future__ import annotations from typing import Optional from odmantic import Field , Model # from pydantic import Field, BaseModel as Model <-- docs work fine but odmantic raise error class Person(Model): name: Optional[str] = Field(None, <-- some extra options --> ) rooms: "Optional[Room]" = Field(None, <-- some extra options --> ) from ..models.room import Room # nopep8 Person.update_forward_refs() ``` room.py ``` from __future__ import annotations from typing import Optional from odmantic import Field , Model # from pydantic import Field, BaseModel as Model <-- docs work fine but odmantic raise error class Room(Model): name: Optional[str] = Field(None, <-- some extra options --> ) persons: "Optional[Person]" = Field(None, <-- some extra options --> ) from ..models.person import Person # nopep8 Room.update_forward_refs() ``` ### Current Behavior this [project](https://github.com/SSaeedHoseini/odmantic-crash-docs-example) reproduce the bug, you can execute ./run.sh and open the http://127.0.0.1:8000/docs#/ ### Expected behavior show docs. ![image](https://user-images.githubusercontent.com/47109202/132976341-968a3bdc-6181-4f14-97a1-e549ee87638d.png) ### Environment - ODMantic version: 0.3.5 - MongoDB version: 4.4.8 - Pydantic infos (output of `python -c "import pydantic.utils; print(pydantic.utils.version_info())`): ``` pydantic version: 1.8.2 pydantic compiled: True install path: /xxx/lib/python3.9/site-packages/pydantic python version: 3.9.5 (default, May 11 2021, 08:20:37) [GCC 10.3.0] platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.33 optional deps. installed: ['dotenv', 'typing-extensions'] ``` **Additional context** error raised from when open the docs: ``` Traceback (most recent call last): File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 375, in run_asgi result = await app(self.scope, self.receive, self.send) File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__ return await self.app(scope, receive, send) File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/applications.py", line 208, in __call__ await super().__call__(scope, receive, send) File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ await self.middleware_stack(scope, receive, send) File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__ raise exc from None File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ await self.app(scope, receive, _send) File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ raise exc from None File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/routing.py", line 580, in __call__ await route.handle(scope, receive, send) File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/routing.py", line 241, in handle await self.app(scope, receive, send) File 
"/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/routing.py", line 52, in app response = await func(request) File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/applications.py", line 161, in openapi return JSONResponse(self.openapi()) File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/applications.py", line 136, in openapi self.openapi_schema = get_openapi( File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/openapi/utils.py", line 387, in get_openapi definitions = get_model_definitions( File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/utils.py", line 24, in get_model_definitions m_schema, m_definitions, m_nested_models = model_process_schema( File "pydantic/schema.py", line 548, in pydantic.schema.model_process_schema File "pydantic/schema.py", line 589, in pydantic.schema.model_type_schema File "pydantic/schema.py", line 241, in pydantic.schema.field_schema File "pydantic/schema.py", line 495, in pydantic.schema.field_type_schema File "pydantic/schema.py", line 839, in pydantic.schema.field_singleton_schema File "/usr/lib/python3.9/abc.py", line 102, in __subclasscheck__ return _abc_subclasscheck(cls, subclass) TypeError: issubclass() arg 1 must be a class ```
2hard
Title: HTTP Request Interception Body: I'm trying the code below and it keeps giving me the following error: `No module named 'blinker._saferef'` How can I fix it? Also, it would be great if I could change the request's parameters/headers — for example, changing a certain header before continuing the request, as in Playwright/Puppeteer.
```
from seleniumbase import Driver

def intercept_response(request, response):
    print(request.headers)

driver = Driver(wire=True)
try:
    driver.response_interceptor = intercept_response
    driver.get("https://wikipedia.org")
finally:
    driver.quit()
```
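For what it's worth, the direction I'd like to go looks like the sketch below. Treat it as a guess: the `blinker` pin reflects a reported incompatibility between newer blinker releases and selenium-wire but is unverified here, and the header name is only an example.
```python
# Possible prerequisite for the import error (unverified):
#   pip install "blinker<1.8"
from seleniumbase import Driver

def modify_request(request):
    # selenium-wire headers should be deleted before being re-set, otherwise
    # the new value is appended alongside the old one.
    del request.headers["User-Agent"]
    request.headers["User-Agent"] = "my-custom-agent"

driver = Driver(wire=True)
try:
    driver.request_interceptor = modify_request
    driver.get("https://wikipedia.org")
finally:
    driver.quit()
```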
1medium
Title: Documentation / help: where is the config stored Body: Running `http --help` should state where the config file is stored. The man page should also state that, and it should also describe the format of the config file.
0easy