text (string, lengths 20 – 57.3k)
labels (class label, 4 classes)
Title: [BUG] The MIND small dataset utils cannot be downloaded. Body: ### Description ### In which platform does it happen? Local Host: Ubuntu 18.04 ### How do we replicate the issue? Use example/00_quick_start/lstur_MIND.ipynb and change the MIND_TYPE to 'small'. ### Expected behavior (i.e. solution) Run this demo. ### Other Comments Could you please check whether this file name is correct? https://github.com/microsoft/recommenders/blob/837b8081a4421e144f2bc05ba949c5ac6c52320f/reco_utils/recommender/newsrec/newsrec_utils.py#L358 In this code, the file name "MINDsma_utils.zip" may not be correct. I also tested "MINDsmall_utils.zip" but it can't be downloaded either.
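A quick sanity-check sketch for the file-name question above; `BASE_URL` is a placeholder, not the real blob-storage address used by `newsrec_utils.py`:

```python
import requests

BASE_URL = "https://<mind-utils-storage-host>/"  # placeholder, not the actual address
for name in ("MINDsma_utils.zip", "MINDsmall_utils.zip"):
    r = requests.head(BASE_URL + name, allow_redirects=True)
    print(name, r.status_code)  # a 200 would indicate which file name actually exists
```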
1medium
Title: Flaky test: `tests/chainer_tests/links_tests/loss_tests/test_crf1d.py::TestCRF1d` Body: https://jenkins.preferred.jp/job/chainer/job/chainer_pr/1816/TEST=chainer-py2,label=mn1-p100/console >`FAIL tests/chainer_tests/links_tests/loss_tests/test_crf1d.py::TestCRF1d_param_1_{initial_cost='random', dtype=float16, transpose=False}::test_forward_cpu` ``` 00:10:59 _ TestCRF1d_param_1_{initial_cost='random', dtype=float16, transpose=False}.test_forward_cpu _ 00:10:59 00:10:59 self = <chainer.testing._bundle.TestCRF1d_param_1_{initial_cost='random', dtype=float16, transpose=False} testMethod=test_forward_cpu> 00:10:59 args = (), kwargs = {} 00:10:59 e = AssertionError('\nNot equal to tolerance rtol=0.0001, atol=0.005\n\n(mismatch ...5\n total tolerance: 0.00544704666138\nx: 4.4765625\ny: 4.470466613769531\n',) 00:10:59 s = <StringIO.StringIO instance at 0x7fe9161c9b00>, k = 'transpose', v = False 00:10:59 00:10:59 @functools.wraps(base_method) 00:10:59 def new_method(self, *args, **kwargs): 00:10:59 try: 00:10:59 return base_method(self, *args, **kwargs) 00:10:59 except unittest.SkipTest: 00:10:59 raise 00:10:59 except Exception as e: 00:10:59 s = six.StringIO() 00:10:59 s.write('Parameterized test failed.\n\n') 00:10:59 s.write('Base test method: {}.{}\n'.format( 00:10:59 base.__name__, base_method.__name__)) 00:10:59 s.write('Test parameters:\n') 00:10:59 for k, v in six.iteritems(param2): 00:10:59 s.write(' {}: {}\n'.format(k, v)) 00:10:59 > utils._raise_from(e.__class__, s.getvalue(), e) 00:10:59 00:10:59 args = () 00:10:59 base = <class 'chainer_tests.links_tests.loss_tests.test_crf1d.TestCRF1d'> 00:10:59 base_method = <unbound method TestCRF1d.test_forward_cpu> 00:10:59 e = AssertionError('\nNot equal to tolerance rtol=0.0001, atol=0.005\n\n(mismatch ...5\n total tolerance: 0.00544704666138\nx: 4.4765625\ny: 4.470466613769531\n',) 00:10:59 k = 'transpose' 00:10:59 kwargs = {} 00:10:59 param2 = {'dtype': <type 'numpy.float16'>, 'initial_cost': 'random', 'transpose': False} 00:10:59 s = <StringIO.StringIO instance at 0x7fe9161c9b00> 00:10:59 self = <chainer.testing._bundle.TestCRF1d_param_1_{initial_cost='random', dtype=float16, transpose=False} testMethod=test_forward_cpu> 00:10:59 v = False 00:10:59 00:10:59 chainer/testing/parameterized.py:89: 00:10:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 00:10:59 chainer/utils/__init__.py:104: in _raise_from 00:10:59 six.reraise(exc_type, new_exc, sys.exc_info()[2]) 00:10:59 chainer/testing/parameterized.py:78: in new_method 00:10:59 return base_method(self, *args, **kwargs) 00:10:59 tests/chainer_tests/links_tests/loss_tests/test_crf1d.py:86: in test_forward_cpu 00:10:59 self.check_forward(self.xs, self.ys) 00:10:59 tests/chainer_tests/links_tests/loss_tests/test_crf1d.py:83: in check_forward 00:10:59 **self.check_forward_options) 00:10:59 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 00:10:59 00:10:59 x = array(4.4765625, dtype=float16), y = array(4.470466613769531), atol = 0.005 00:10:59 rtol = 0.0001, verbose = True 00:10:59 00:10:59 def assert_allclose(x, y, atol=1e-5, rtol=1e-4, verbose=True): 00:10:59 """Asserts if some corresponding element of x and y differs too much. 00:10:59 00:10:59 This function can handle both CPU and GPU arrays simultaneously. 00:10:59 00:10:59 Args: 00:10:59 x: Left-hand-side array. 00:10:59 y: Right-hand-side array. 00:10:59 atol (float): Absolute tolerance. 00:10:59 rtol (float): Relative tolerance. 
00:10:59 verbose (bool): If ``True``, it outputs verbose messages on error. 00:10:59 00:10:59 """ 00:10:59 x = backend.CpuDevice().send(utils.force_array(x)) 00:10:59 y = backend.CpuDevice().send(utils.force_array(y)) 00:10:59 try: 00:10:59 numpy.testing.assert_allclose( 00:10:59 x, y, atol=atol, rtol=rtol, verbose=verbose) 00:10:59 except AssertionError as e: 00:10:59 f = six.StringIO() 00:10:59 f.write(str(e) + '\n\n') 00:10:59 f.write( 00:10:59 'assert_allclose failed: \n' + 00:10:59 ' shape: {} {}\n'.format(x.shape, y.shape) + 00:10:59 ' dtype: {} {}\n'.format(x.dtype, y.dtype)) 00:10:59 if x.shape == y.shape: 00:10:59 xx = numpy.atleast_1d(x) 00:10:59 yy = numpy.atleast_1d(y) 00:10:59 err = numpy.abs(xx - yy) 00:10:59 00:10:59 tol_rtol = rtol * numpy.abs(yy).astype(numpy.float64) 00:10:59 tol_err = atol + tol_rtol 00:10:59 00:10:59 i = numpy.unravel_index( 00:10:59 numpy.argmax(err.astype(numpy.float64) - tol_err), err.shape) 00:10:59 00:10:59 if yy[i] == 0: 00:10:59 rel_err = 'inf' 00:10:59 else: 00:10:59 rel_err = err[i] / numpy.abs(yy[i]) 00:10:59 00:10:59 f.write( 00:10:59 ' i: {}\n'.format(i) + 00:10:59 ' x[i]: {}\n'.format(xx[i]) + 00:10:59 ' y[i]: {}\n'.format(yy[i]) + 00:10:59 ' relative error[i]: {}\n'.format(rel_err) + 00:10:59 ' absolute error[i]: {}\n'.format(err[i]) + 00:10:59 ' relative tolerance * |y[i]|: {}\n'.format(tol_rtol[i]) + 00:10:59 ' absolute tolerance: {}\n'.format(atol) + 00:10:59 ' total tolerance: {}\n'.format(tol_err[i])) 00:10:59 00:10:59 opts = numpy.get_printoptions() 00:10:59 try: 00:10:59 numpy.set_printoptions(threshold=10000) 00:10:59 f.write('x: ' + numpy.array2string(x, prefix='x: ') + '\n') 00:10:59 f.write('y: ' + numpy.array2string(y, prefix='y: ') + '\n') 00:10:59 finally: 00:10:59 numpy.set_printoptions(**opts) 00:10:59 > raise AssertionError(f.getvalue()) 00:10:59 E AssertionError: Parameterized test failed. 
00:10:59 E 00:10:59 E Base test method: TestCRF1d.test_forward_cpu 00:10:59 E Test parameters: 00:10:59 E initial_cost: random 00:10:59 E dtype: <type 'numpy.float16'> 00:10:59 E transpose: False 00:10:59 E 00:10:59 E 00:10:59 E (caused by) 00:10:59 E AssertionError: 00:10:59 E Not equal to tolerance rtol=0.0001, atol=0.005 00:10:59 E 00:10:59 E (mismatch 100.0%) 00:10:59 E x: array(4.4765625, dtype=float16) 00:10:59 E y: array(4.470466613769531) 00:10:59 E 00:10:59 E assert_allclose failed: 00:10:59 E shape: () () 00:10:59 E dtype: float16 float64 00:10:59 E i: (0,) 00:10:59 E x[i]: 4.4765625 00:10:59 E y[i]: 4.47046661377 00:10:59 E relative error[i]: 0.00136359059515 00:10:59 E absolute error[i]: 0.00609588623047 00:10:59 E relative tolerance * |y[i]|: 0.000447046661377 00:10:59 E absolute tolerance: 0.005 00:10:59 E total tolerance: 0.00544704666138 00:10:59 E x: 4.4765625 00:10:59 E y: 4.470466613769531 00:10:59 00:10:59 atol = 0.005 00:10:59 e = AssertionError('\nNot equal to tolerance rtol=0.0001, atol=0.005\n\n(mismatch 100.0%)\n x: array(4.4765625, dtype=float16)\n y: array(4.470466613769531)',) 00:10:59 err = array([ 0.00609589]) 00:10:59 f = <StringIO.StringIO instance at 0x7fe9161c9a28> 00:10:59 i = (0,) 00:10:59 opts = {'edgeitems': 3, 'formatter': None, 'infstr': 'inf', 'linewidth': 75, ...} 00:10:59 rel_err = 0.001363590595150123 00:10:59 rtol = 0.0001 00:10:59 tol_err = array([ 0.00544705]) 00:10:59 tol_rtol = array([ 0.00044705]) 00:10:59 verbose = True 00:10:59 x = array(4.4765625, dtype=float16) 00:10:59 xx = array([ 4.4765625], dtype=float16) 00:10:59 y = array(4.470466613769531) 00:10:59 yy = array([ 4.47046661]) 00:10:59 00:10:59 chainer/testing/array.py:68: AssertionError ```
1medium
Title: Formatted Strings Body: I noticed that there is no documentation on how to use formatted strings, which are a great help for strings that require **multiple variables**, something one often encounters in real-life situations.
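For reference, a short example of the kind of usage such documentation could cover; this is plain Python f-string behaviour, not anything project-specific:

```python
name, count = "Ada", 3
# Multiple variables and an expression interpolated into one string
print(f"{name} has {count} unread messages ({count * 2} total notifications)")
# -> Ada has 3 unread messages (6 total notifications)
```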
0easy
Title: Is the 2D CNN's description correct? Body: > This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs. If `use_bias` is True, a bias vector is created and added to the outputs. Finally, if `activation` is not `None`, it is applied to the outputs as well. [keras docs](https://github.com/keras-team/keras/blob/v3.4.1/keras/src/layers/convolutional/conv2d.py#L5) Isn't it convolved over *2 spatial dimensions*?
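A tiny check that the layer does convolve over two spatial dimensions (a sketch against the public Keras API; the shapes are illustrative):

```python
import numpy as np
from keras import layers

x = np.random.rand(1, 28, 28, 3).astype("float32")  # (batch, height, width, channels)
y = layers.Conv2D(filters=8, kernel_size=3)(x)
print(y.shape)  # (1, 26, 26, 8): both height and width shrink, i.e. 2 spatial dimensions
```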
0easy
Title: Error when running demo_cli.py. Please Help! Body: When I run demo_cli.py I get this error: Traceback (most recent call last): File ".\demo_cli.py", line 96, in <module> mels = synthesizer.synthesize_spectrograms(texts, embeds) File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\inference.py", line 77, in synthesize_spectrograms self.load() File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\inference.py", line 58, in load self._model = Tacotron2(self.checkpoint_fpath, hparams) File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\tacotron2.py", line 28, in __init__ split_infos=split_infos) File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\models\tacotron.py", line 146, in initialize zoneout=hp.tacotron_zoneout_rate, scope="encoder_LSTM")) File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\models\modules.py", line 221, in __init__ name="encoder_fw_LSTM") File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\models\modules.py", line 114, in __init__ self._cell = tf.contrib.cudnn_rnn.CudnnLSTM(num_units, name=name) TypeError: __init__() missing 1 required positional argument: 'num_units' Can somone help fix it?
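A hedged guess at the cause, not a confirmed patch: in later TF 1.x releases `tf.contrib.cudnn_rnn.CudnnLSTM` takes `num_layers` before `num_units`, so the single positional argument in `modules.py` would be consumed as `num_layers`. A sketch of the adapted call (the value `1` for `num_layers` is an assumption):

```python
import tensorflow as tf

# Assumes the installed TF 1.x contrib signature CudnnLSTM(num_layers, num_units, ...),
# so both arguments are named explicitly instead of passing num_units positionally.
lstm_fw = tf.contrib.cudnn_rnn.CudnnLSTM(num_layers=1, num_units=256, name="encoder_fw_LSTM")
```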
1medium
Title: For small data, disable stacking and allow 10-fold CV Body:
1medium
Title: Option to allow one-time password login for old users when using SOCIALACCOUNT_ONLY Body: One of my sites would like to migrate to allowing login only using social accounts, but has existing users with passwords. I think it would be good to have a configuration with the concept of a "legacy password login" option which allows users to authenticate with their password, then migrate to a social account. I'd be happy to look at doing the work for this, but wanted to raise an issue to see if there was appetite to accept this before I do.
1medium
Title: Wrong results with yolo running inference multiple times with Qt backend Body: I'm getting wrong results when running this simple code (essentially the YOLO demo example) using the Qt backend for matplotlib. Both on CPU and GPU. The first inference gives correct results, the following ones return nonsense. This doesn't happen with other models (tried with `ssd_512_mobilenet1.0_coco`). or using other backends (e.g. with `TkAgg`). ```python from gluoncv import model_zoo, data, utils import matplotlib matplotlib.use('Qt5Agg') from matplotlib import pyplot as plt import mxnet as mx for i in range(5): print("########", i) net = model_zoo.get_model('yolo3_darknet53_coco', pretrained=True, ctx=mx.gpu(0)) im_fname = utils.download('ht//raw.githubusercontent.com/zhreshold/' + 'mxnet-ssd/master/data/demo/dog.jpg', path='dog.jpg') x, img = data.transforms.presets.yolo.load_test(im_fname, short=512) x = x.as_in_context(mx.gpu(0)) print('Shape of pre-processed im', x.shape) class_IDs, scores, bounding_boxs = net(x) plt.figure() print(scores[0,:10]) ``` plt.figure() is useless but calling pyplot is what causes things to go haywire. The output is: ``` ######## 0 Shape of pre-processed im (1, 3, 512, 683) [11:26:09] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable) [[ 0.9919528 ] [ 0.9600397 ] [ 0.62269807] [ 0.29241946] [ 0.01795176] [ 0.01141726] [-1. ] [-1. ] [-1. ] [-1. ]] <NDArray 10x1 @gpu(0)> ######## 1 Shape of pre-processed im (1, 3, 512, 683) [[ 2.00238937e-04] [ 1.32904228e-04] [ 1.25731094e-04] [ 1.21111851e-04] [ 1.12316993e-04] [ 1.06516934e-04] [ 9.43234991e-05] [ 5.61804554e-05] [-1.00000000e+00] [-1.00000000e+00]] <NDArray 10x1 @gpu(0)> ######## 2 Shape of pre-processed im (1, 3, 512, 683) [[ 2.00238937e-04] [ 1.32904228e-04] [ 1.25731094e-04] [ 1.21111851e-04] [ 1.12316993e-04] [ 1.06516934e-04] [ 9.43234991e-05] [ 5.61804554e-05] [-1.00000000e+00] [-1.00000000e+00]] <NDArray 10x1 @gpu(0)> ######## 3 Shape of pre-processed im (1, 3, 512, 683) [[ 2.00238937e-04] [ 1.32904228e-04] [ 1.25731094e-04] [ 1.21111851e-04] [ 1.12316993e-04] [ 1.06516934e-04] [ 9.43234991e-05] [ 5.61804554e-05] [-1.00000000e+00] [-1.00000000e+00]] <NDArray 10x1 @gpu(0)> ######## 4 Shape of pre-processed im (1, 3, 512, 683) [[ 2.00238937e-04] [ 1.32904228e-04] [ 1.25731094e-04] [ 1.21111851e-04] [ 1.12316993e-04] [ 1.06516934e-04] [ 9.43234991e-05] [ 5.61804554e-05] [-1.00000000e+00] [-1.00000000e+00]] <NDArray 10x1 @gpu(0)> ``` Of course plotting in a loop has small practical reason, but this clearly happens also when you run yolo multiple times in a shell.
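Since the report above notes that other backends (e.g. TkAgg) behave correctly, a minimal workaround sketch is simply to pin one of those backends before pyplot is imported:

```python
import matplotlib
matplotlib.use("TkAgg")  # or "Agg" for headless runs; avoids the Qt5Agg backend that triggers the issue
from matplotlib import pyplot as plt
```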
1medium
Title: `adrf` (Async DRF) Support Body: It was recently decided that the "official" async support for DRF would be the [`adrf`](https://github.com/em1208/adrf) package: - https://github.com/encode/django-rest-framework/discussions/7774#discussioncomment-5063336 - https://github.com/encode/django-rest-framework/issues/8496#issuecomment-1438345677 Does `drf-spectacular` support this package? I didn't see it listed in the third party packages: - https://drf-spectacular.readthedocs.io/en/latest/readme.html# It provides a [new Async class-based and functional views](https://github.com/em1208/adrf#async-views). These allow end users to use `async` functionalities to better scale their DRF applications. Somewhat related to - #931
1medium
Title: Ability to poll db to pull current alembic version Body: ### **Desired Behavior** When running a new build from Jenkins it creates a new build environment 'flask db init' migration folder and has the ability to poll the database to pull the current alembic version and then perform migration, upgrade, downgrade as needed as the migrations folder is not saved between build environments or between builds. ### **Current Behavior** I need to remove the current alembic table from the database OR save the migrations folder and move it between builds and build environments to then perform an upgrade otherwise I receive the following complaint. (venv) root@build1:~/manual_build_dirs/balance_service# flask db migrate INFO [alembic.runtime.migration] Context impl PostgresqlImpl. INFO [alembic.runtime.migration] Will assume transactional DDL. ERROR [root] Error: Can't locate revision identified by '42f262e6ba92' **### Environment** * Python version: 3.6.9 * Flask-SQLAlchemy version: 2.4.4 * Flask-Migrate version: 2.5.3 * SQLAlchemy version: 1.3.19
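A small sketch of the "poll the database for the current alembic version" idea, reading the `alembic_version` table directly with SQLAlchemy; the connection URL is a placeholder:

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:password@host/dbname")  # placeholder URL
with engine.connect() as conn:
    # Alembic stores the current revision in a single-row table named alembic_version
    row = conn.execute(text("SELECT version_num FROM alembic_version")).fetchone()
    print(row[0] if row else "no revision recorded")
```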
1medium
Title: Error after loading a JS file Body: Hello, I have created a folder called "1" in _/var/globaleaks/scripts_ and uploaded the file _custom_script.js_ (an example I saw in the forum, from _https://github.com/AjuntamentdeBarcelona/bustia-etica-bcn/tree/master/theme_, changing GLClient to GL). I restarted the application with _service globaleaks restart_, but I don't see any changes on the home page, and the console shows an error like this: **Refused to execute script from 'http://127.0.0.1:8082/script' because its MIME type ('application/json') is not executable, and strict MIME type checking is enabled.** I am using a laptop with Debian 11 and the latest version of GlobaLeaks (4.10.18). Could you help me, please? Thanks
1medium
Title: Jumpy scroll in a Dash DataTable Body: When scrolling the DataTable on a mobile phone, the DataTable is jumpy. Issue first reported on the [Forum](https://community.plotly.com/t/jumpy-scroll-in-a-dash-datatable/67940). The community member created [this sample app](https://datatable-scroll-issue.onrender.com/) to replicate the problem. They also shared the [code for the app](https://github.com/timofeymukha/datatable_scroll_issue) on GitHub.
1medium
Title: BT discovery crashes on MacOS 12.2, "attempted to access privacy-sensitive data without a usage description" Body: * bleak version: 0.14.2 * Python version: 3.10.2 * Operating System: MacOS 12.2 * Hardware: Apple Silicon/arm64 ### Description Bluetooth discovery crashes (segfaults). ### What I Did ``` % cat x.py import asyncio from bleak import BleakScanner hub_uuid = "00001623-1212-EFDE-1623-785FEABCD123" async def main(): devices = await BleakScanner.discover(service_uuids=[hub_uuid]) for d in devices: print(d) asyncio.run(main()) % BLEAK_LOGGING=1 python x.py zsh: abort BLEAK_LOGGING=1 python x.py ``` Traceback: ``` Crashed Thread: 1 Dispatch queue: com.apple.root.default-qos Exception Type: EXC_CRASH (SIGABRT) Exception Codes: 0x0000000000000000, 0x0000000000000000 Exception Note: EXC_CORPSE_NOTIFY Termination Reason: Namespace TCC, Code 0 This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSBluetoothAlwaysUsageDescription key with a string value explaining to the user how the app uses this data. Thread 0:: Dispatch queue: com.apple.main-thread 0 libsystem_kernel.dylib 0x1b424d0c0 __psynch_cvwait + 8 1 libsystem_pthread.dylib 0x1b4285808 _pthread_cond_wait + 1228 2 Python 0x100b5d5d4 PyThread_acquire_lock_timed + 396 3 Python 0x100bba2ac acquire_timed + 256 4 Python 0x100bba4e8 lock_PyThread_acquire_lock + 56 5 Python 0x100a12a34 method_vectorcall_VARARGS_KEYWORDS + 156 6 Python 0x100b01854 call_function + 128 7 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708 8 Python 0x100af30ec _PyEval_Vector + 328 9 Python 0x100b01854 call_function + 128 10 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708 11 Python 0x100af30ec _PyEval_Vector + 328 12 Python 0x100b01854 call_function + 128 13 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708 14 Python 0x100af30ec _PyEval_Vector + 328 15 _objc.cpython-310-darwin.so 0x1015f5054 _PyObject_VectorcallTstate + 120 16 _objc.cpython-310-darwin.so 0x1015f4fd0 PyObject_Vectorcall + 60 17 _objc.cpython-310-darwin.so 0x1015f2f90 pysel_vectorcall + 440 18 Python 0x100b01854 call_function + 128 19 Python 0x100afc404 _PyEval_EvalFrameDefault + 32836 20 Python 0x100af30ec _PyEval_Vector + 328 21 Python 0x100a05c24 _PyObject_FastCallDictTstate + 208 22 Python 0x100a7c630 slot_tp_init + 196 23 Python 0x100a74968 type_call + 288 24 Python 0x100a0636c _PyObject_Call + 128 25 Python 0x100afc5ec _PyEval_EvalFrameDefault + 33324 26 Python 0x100a1d5e0 gen_send_ex2 + 224 27 Python 0x100af7888 _PyEval_EvalFrameDefault + 13512 28 Python 0x100a1d5e0 gen_send_ex2 + 224 29 _asyncio.cpython-310-darwin.so 0x100ee0a74 task_step_impl + 440 30 _asyncio.cpython-310-darwin.so 0x100ee0848 task_step + 52 31 Python 0x100a0594c _PyObject_MakeTpCall + 136 32 Python 0x100b17d50 context_run + 92 33 Python 0x100a58de8 cfunction_vectorcall_FASTCALL_KEYWORDS + 84 34 Python 0x100afc5ec _PyEval_EvalFrameDefault + 33324 35 Python 0x100af30ec _PyEval_Vector + 328 36 Python 0x100b01854 call_function + 128 37 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708 38 Python 0x100af30ec _PyEval_Vector + 328 39 Python 0x100b01854 call_function + 128 40 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708 41 Python 0x100af30ec _PyEval_Vector + 328 42 Python 0x100b01854 call_function + 128 43 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708 44 Python 0x100af30ec _PyEval_Vector + 328 45 Python 0x100b01854 call_function + 128 46 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708 47 Python 0x100af30ec 
_PyEval_Vector + 328 48 Python 0x100b01854 call_function + 128 49 Python 0x100afc404 _PyEval_EvalFrameDefault + 32836 50 Python 0x100af30ec _PyEval_Vector + 328 51 Python 0x100af2f90 PyEval_EvalCode + 104 52 Python 0x100b4c6cc run_eval_code_obj + 84 53 Python 0x100b4c614 run_mod + 112 54 Python 0x100b4c280 pyrun_file + 148 55 Python 0x100b4bb94 _PyRun_SimpleFileObject + 268 56 Python 0x100b4b1d4 _PyRun_AnyFileObject + 232 57 Python 0x100b6d40c pymain_run_file_obj + 220 58 Python 0x100b6cb5c pymain_run_file + 72 59 Python 0x100b6c3cc Py_RunMain + 868 60 Python 0x100b6d578 pymain_main + 36 61 Python 0x100b6d7ec Py_BytesMain + 40 62 dyld 0x1005650f4 start + 520 Thread 1 Crashed:: Dispatch queue: com.apple.root.default-qos 0 libsystem_kernel.dylib 0x1b4273eb8 __abort_with_payload + 8 1 libsystem_kernel.dylib 0x1b4276864 abort_with_payload_wrapper_internal + 104 2 libsystem_kernel.dylib 0x1b4276898 abort_with_payload + 16 3 TCC 0x1b943a874 __TCC_CRASHING_DUE_TO_PRIVACY_VIOLATION__ + 172 4 TCC 0x1b943b19c __TCCAccessRequest_block_invoke.194 + 600 5 TCC 0x1b9438794 __tccd_send_message_block_invoke + 632 6 libxpc.dylib 0x1b3fd99e8 _xpc_connection_reply_callout + 116 7 libxpc.dylib 0x1b3fd98e0 _xpc_connection_call_reply_async + 88 8 libdispatch.dylib 0x1b40c6c2c _dispatch_client_callout3 + 20 9 libdispatch.dylib 0x1b40e4698 _dispatch_mach_msg_async_reply_invoke + 348 10 libdispatch.dylib 0x1b40d90c0 _dispatch_kevent_worker_thread + 1316 11 libsystem_pthread.dylib 0x1b428133c _pthread_wqthread + 344 12 libsystem_pthread.dylib 0x1b4280018 start_wqthread + 8 Thread 2: 0 libsystem_pthread.dylib 0x1b4280010 start_wqthread + 0 ```
1medium
Title: When trying to count relationship instances, it returns wrong result Body: **Describe the bug** When following the README, I tried to insert one author with some books, and when I tried to call len(author.books) it returned a wrong number of result in one case and a good number in another. **To Reproduce** Full code, you can just copy paste it ```python from typing import Optional import databases import ormar import sqlalchemy DATABASE_URL = "sqlite:///db.sqlite" database = databases.Database(DATABASE_URL) metadata = sqlalchemy.MetaData() class BaseMeta(ormar.ModelMeta): metadata = metadata database = database class Author(ormar.Model): class Meta(BaseMeta): tablename = "authors" id: int = ormar.Integer(primary_key=True) name: str = ormar.String(max_length=100) class Book(ormar.Model): class Meta(BaseMeta): tablename = "books" id: int = ormar.Integer(primary_key=True) author: Optional[Author] = ormar.ForeignKey(Author) title: str = ormar.String(max_length=100) year: int = ormar.Integer(nullable=True) engine = sqlalchemy.create_engine(DATABASE_URL) metadata.drop_all(engine) metadata.create_all(engine) async def with_connect(function): async with database: await function() async def create(): tolkien = await Author.objects.create(name="J.R.R. Tolkien") await Book.objects.create(author=tolkien, title="The Hobbit", year=1937) await Book.objects.create(author=tolkien, title="The Lord of the Rings", year=1955) await Book.objects.create(author=tolkien, title="The Silmarillion", year=1977) # returns 2 ---> WEIRD print(f"Tolkien books : {len(tolkien.books)}") another_author = await Author.objects.create(name="Another author") book1 = Book(title="Book1", year=1999) book2 = Book(title="Book2", year=1999) book3 = Book(title="Book3", year=1999) another_author.books.append(book1) another_author.books.append(book2) another_author.books.append(book3) await another_author.update() # returns 3 ---> GOOD print(f"Another author books : {len(another_author.books)}") import asyncio asyncio.run(with_connect(create)) ``` **Expected behavior** It should return 3 in both cases. **Versions (please complete the following information):** - Database backend used : Sqlite - Python version : 3.9.2 - `ormar` version : ormar==0.10.22 - `pydantic` version : pydantic==1.8.2
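One way to double-check the expected count is to ask the database instead of the in-memory relation list. This is a sketch using the models from the snippet above and is meant to run inside the async `create()` function:

```python
# Count the books directly from the database rather than via tolkien.books
books_in_db = await Book.objects.filter(author=tolkien).count()
print(f"Tolkien books in DB: {books_in_db}")  # expected: 3

# Or reload the author together with its related books
tolkien = await Author.objects.select_related("books").get(name="J.R.R. Tolkien")
print(len(tolkien.books))
```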
1medium
Title: api_key error shown Body: **Describe the bug** The game requires a ChatGPT API key, but after entering the API key obtained from this project (free tier), it shows an API key error. **To Reproduce** 1. Game: Yandere AI Girlfriend Simulator (世界尽头与可爱猫娘 ~ 病娇AI女友); download page: [link](https://helixngc7293.itch.io/yandere-ai-girlfriend-simulator) 2. Open the settings and enter the API key 3. Start playing; the game shows "Incorrect API key provided" **Screenshots** ![image17216383947321](https://github.com/user-attachments/assets/4a854ac4-bb7e-4c04-bac1-593593dbc11c) ![image17216383947542](https://github.com/user-attachments/assets/8cbbd595-d646-44a8-a90b-95ecf67bbd2f)
1medium
Title: Freqtrade CI pipeline failing on ERROR: Failed to build installable wheels for some pyproject.toml based projects (blosc2) Body: <!-- Have you searched for similar issues before posting it? If you have discovered a bug in the bot, please [search the issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue). If it hasn't been reported, please create a new issue. Please do not use bug reports to request new features. --> ## Describe the problem: The step in Freqtrade CI pipeline which builds and pushes to the docker registries is failing due to `blosc2`. It seems to be failing from 3 days ago as docker images on atleast dockerhub haven't been updated since then. ```sh #14 240.6 Building wheels for collected packages: blosc2, MarkupSafe #14 240.6 Building wheel for blosc2 (pyproject.toml): started #14 257.8 Building wheel for blosc2 (pyproject.toml): finished with status 'error' #14 257.8 error: subprocess-exited-with-error #14 257.8 #14 257.8 × Building wheel for blosc2 (pyproject.toml) did not run successfully. #14 257.8 │ exit code: 1 #14 257.8 ╰─> [39 lines of output] #14 257.8 *** scikit-build-core 0.10.7 using CMake 3.25.1 (wheel) #14 257.8 *** Configuring CMake... #14 257.8 loading initial cache file /tmp/tmp4ezbjt6w/build/CMakeInit.txt #14 257.8 -- The C compiler identification is GNU 12.2.0 #14 257.8 -- The CXX compiler identification is GNU 12.2.0 #14 257.8 -- Detecting C compiler ABI info #14 257.8 -- Detecting C compiler ABI info - done #14 257.8 -- Check for working C compiler: /usr/bin/gcc - skipped #14 257.8 -- Detecting C compile features #14 257.8 -- Detecting C compile features - done #14 257.8 -- Detecting CXX compiler ABI info #14 257.8 -- Detecting CXX compiler ABI info - done #14 257.8 -- Check for working CXX compiler: /usr/bin/g++ - skipped #14 257.8 -- Detecting CXX compile features #14 257.8 -- Detecting CXX compile features - done #14 257.8 -- Found Python: /usr/local/bin/python3.11 (found version "3.11.10") found components: Interpreter NumPy Development.Module #14 257.8 CMake Error at /usr/share/cmake-3.25/Modules/ExternalProject.cmake:2790 (message): #14 257.8 error: could not find git for clone of blosc2-populate #14 257.8 Call Stack (most recent call first): #14 257.8 /usr/share/cmake-3.25/Modules/ExternalProject.cmake:4185 (_ep_add_download_command) #14 257.8 CMakeLists.txt:23 (ExternalProject_Add) #14 257.8 #14 257.8 #14 257.8 -- Configuring incomplete, errors occurred! #14 257.8 See also "/tmp/tmp4ezbjt6w/build/_deps/blosc2-subbuild/CMakeFiles/CMakeOutput.log". #14 257.8 #14 257.8 CMake Error at /usr/share/cmake-3.25/Modules/FetchContent.cmake:1604 (message): #14 257.8 CMake step for blosc2 failed: 1 #14 257.8 Call Stack (most recent call first): #14 257.8 /usr/share/cmake-3.25/Modules/FetchContent.cmake:1756:EVAL:2 (__FetchContent_directPopulate) #14 257.8 /usr/share/cmake-3.25/Modules/FetchContent.cmake:1756 (cmake_language) #14 257.8 /usr/share/cmake-3.25/Modules/FetchContent.cmake:1970 (FetchContent_Populate) #14 257.8 CMakeLists.txt:55 (FetchContent_MakeAvailable) #14 257.8 #14 257.8 #14 257.8 -- Configuring incomplete, errors occurred! #14 257.8 See also "/tmp/tmp4ezbjt6w/build/CMakeFiles/CMakeOutput.log". #14 257.8 #14 257.8 *** CMake configuration failed #14 257.8 [end of output] #14 257.8 #14 257.8 note: This error originates from a subprocess, and is likely not a problem with pip. 
#14 257.8 ERROR: Failed building wheel for blosc2 #14 257.8 Building wheel for MarkupSafe (pyproject.toml): started #14 265.6 Building wheel for MarkupSafe (pyproject.toml): finished with status 'done' #14 265.6 Created wheel for MarkupSafe: filename=MarkupSafe-3.0.2-cp311-cp311-linux_armv7l.whl size=21971 sha256=92522a143bfa45c45bc2786b11887211221ad6b6ee58be2f0656f739465fe57c #14 265.6 Stored in directory: /tmp/pip-ephem-wheel-cache-ssr909f5/wheels/9d/38/99/1f61f3b0dd7ab4898edfa9fcf6feb13644d4d49a44b3bed19d #14 265.6 Successfully built MarkupSafe #14 265.6 Failed to build blosc2 #14 267.9 ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (blosc2) #14 ERROR: process "/bin/sh -c pip install --user --no-cache-dir numpy && pip install --user --no-index --find-links /tmp/ pyarrow TA-Lib && pip install --user --no-cache-dir -r requirements.txt" did not complete successfully: exit code: 1 ------ > [python-deps 4/4] RUN pip install --user --no-cache-dir numpy && pip install --user --no-index --find-links /tmp/ pyarrow TA-Lib && pip install --user --no-cache-dir -r requirements.txt: 257.8 257.8 note: This error originates from a subprocess, and is likely not a problem with pip. 257.8 ERROR: Failed building wheel for blosc2 257.8 Building wheel for MarkupSafe (pyproject.toml): started 265.6 Building wheel for MarkupSafe (pyproject.toml): finished with status 'done' 265.6 Created wheel for MarkupSafe: filename=MarkupSafe-3.0.2-cp311-cp311-linux_armv7l.whl size=21971 sha256=92522a143bfa45c45bc2786b11887211221ad6b6ee58be2f0656f739465fe57c 265.6 Stored in directory: /tmp/pip-ephem-wheel-cache-ssr909f5/wheels/9d/38/99/1f61f3b0dd7ab4898edfa9fcf6feb13644d4d49a44b3bed19d 265.6 Successfully built MarkupSafe 265.6 Failed to build blosc2 267.9 ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (blosc2) ------ Dockerfile.armhf:37 -------------------- 36 | USER ftuser 37 | >>> RUN pip install --user --no-cache-dir numpy \ 38 | >>> && pip install --user --no-index --find-links /tmp/ pyarrow TA-Lib \ 39 | >>> && pip install --user --no-cache-dir -r requirements.txt 40 | -------------------- ERROR: failed to solve: process "/bin/sh -c pip install --user --no-cache-dir numpy && pip install --user --no-index --find-links /tmp/ pyarrow TA-Lib && pip install --user --no-cache-dir -r requirements.txt" did not complete successfully: exit code: 1 failed building multiarch image ``` ### Steps to reproduce: 1. commit to `develop` branch
1medium
Title: Multiple Profiles Body: ## Summary I would love it if Tableau authentication behaved more [like AWS](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials). I can obviously build wrappers for this, but it seems like pretty common functionality in Python API packages (Databricks, DBT, AWS, etc.). ## Request Type I don't believe this would affect or be affected by the REST API at all, as it is just some defaulting and setup before the API call. ## Description Specifically, allow profiles to be specified, or default back to environment variables. I imagine a profile would hold the same information as the PAT auth call: * token_name * personal_access_token * site_id
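A rough sketch of the requested behaviour layered on top of `tableauserverclient`; the profile file location, its format, and the environment variable names are assumptions modelled on the AWS credentials file, not an existing TSC feature:

```python
import configparser
import os

import tableauserverclient as TSC

def auth_from_profile(profile="default", path="~/.tableau/credentials"):
    # Read the named profile if the file/section exists, otherwise fall back to env vars.
    cfg = configparser.ConfigParser()
    cfg.read(os.path.expanduser(path))
    section = cfg[profile] if cfg.has_section(profile) else {}
    token_name = section.get("token_name", os.getenv("TABLEAU_TOKEN_NAME"))
    token = section.get("personal_access_token", os.getenv("TABLEAU_PERSONAL_ACCESS_TOKEN"))
    site_id = section.get("site_id", os.getenv("TABLEAU_SITE_ID", ""))
    return TSC.PersonalAccessTokenAuth(token_name, token, site_id=site_id)
```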
1medium
Title: Object has no attribute 'set_camera_orientation' Body: First off, I'm not sure whether this error is related more to Manim or Python so don't kill me for that. So I've tried to run this program from a YouTube video ([here it is](https://www.youtube.com/watch?v=oqDQwEvHGfE&ab_channel=Visualization101)), which is supposed to be an animation of the Lorenz Attractor. **Code**: ```python from manimlib.imports import * class Lorenz_Attractor(ThreeDScene): def construct(self): axes = ThreeDAxes(x_min=-3.5,x_max=3.5,y_min=-3.5,y_max=3.5,z_min=0,z_max=6,axis_config={"include_tip": True,"include_ticks":True,"stroke_width":1}) dot = Sphere(radius=0.05,fill_color=BLUE).move_to(0*RIGHT + 0.1*UP + 0.105*OUT) self.set_camera_orientation(phi=65 * DEGREES,theta=30*DEGREES,gamma = 90*DEGREES) self.begin_ambient_camera_rotation(rate=0.05) #Start move camera dtime = 0.01 numsteps = 30 self.add(axes,dot) def lorenz(x, y, z, s=10, r=28, b=2.667): x_dot = s*(y - x) y_dot = r*x - y - x*z z_dot = x*y - b*z return x_dot, y_dot, z_dot def update_trajectory(self, dt): new_point = dot.get_center() if get_norm(new_point - self.points[-1]) > 0.01: self.add_smooth_curve_to(new_point) traj = VMobject() traj.start_new_path(dot.get_center()) traj.set_stroke(BLUE, 1.5, opacity=0.8) traj.add_updater(update_trajectory) self.add(traj) def update_position(self,dt): x_dot, y_dot, z_dot = lorenz(dot.get_center()[0]*10, dot.get_center()[1]*10, dot.get_center()[2]*10) x = x_dot * dt/10 y = y_dot * dt/10 z = z_dot * dt/10 self.shift(x/10*RIGHT + y/10*UP + z/10*OUT) dot.add_updater(update_position) self.wait(420) ``` <!-- The code you run --> When I run this code, I get this: **Error**: ``` Traceback (most recent call last): File "C:\Users\Azelide\manim\manim.py", line 5, in <module> manimlib.main() File "C:\Users\Azelide\manim\manimlib\__init__.py", line 12, in main scene.run() File "C:\Users\Azelide\manim\manimlib\scene\scene.py", line 76, in run self.construct() File "lorenztrick.py", line 8, in construct self.set_camera_orientation(phi=65 * DEGREES,theta=30*DEGREES,gamma = 90*DEGREES) AttributeError: 'Lorenz_Attractor' object has no attribute 'set_camera_orientation' ``` This looks to me quite inexplicable, since ``` set_camera_orientation ``` does exist in the Manim library and I assume the code should work fine otherwise, you can see the video for the demonstration. 
Also, this is already a different issue but when I've tried running the 8 scenes from the example_scenes.py, they worked quite ok apart from the first scene (OpeningManimExample) giving me this traceback after I see the transformation of z to z^2: ``` C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-79 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-114 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-116 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-104 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-105 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-110 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-107 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-103 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-111 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-102 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-101 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-112 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-108 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-97 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-115 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-44 not recognized warnings.warn(f"{ref} not recognized") C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-109 not recognized warnings.warn(f"{ref} not recognized") ``` I'm running on Python 3.9.1, Windows 10 64-bit, in virtualenv. I was getting exactly the same errors both when not running via virtualenv and running via virtualenv. Reinstalled everything a few times, routed everything necessary to PATH in environment variable settings, nothing has improved.
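For what it's worth, a minimal sketch that does run when the community edition (`manim`) is installed instead of `manimlib`, since its `ThreeDScene` does expose `set_camera_orientation`; whether that edition is what the tutorial targeted is an assumption:

```python
from manim import DEGREES, ThreeDAxes, ThreeDScene

class CameraCheck(ThreeDScene):
    def construct(self):
        # ThreeDScene in the community edition provides this method
        self.set_camera_orientation(phi=65 * DEGREES, theta=30 * DEGREES)
        self.add(ThreeDAxes())
        self.wait()
```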
1medium
Title: add a tooltip option on icon, button, ... component Body: Components that have an `ss-click` event should have a setting to configure a tooltip on mouseover. ![image](https://github.com/streamsync-cloud/streamsync/assets/159559/bdb24d46-d044-435d-a2b2-00a67c8a71b1) If the tooltip field is empty, nothing appears. The components work as they do today. #### Components * Button * Text * Image * Icon (even if it doesn't have the `ss-click`) #### Design proposal A simple tooltip above the button as in material ui. ![Peek 2024-03-11 11-31](https://github.com/streamsync-cloud/streamsync/assets/159559/46af5294-d4be-4baa-988e-64696823dfa4)
1medium
Title: --test-filter works but not with a normal execution Body: I just set up urlwatch with 4 urls. My first one went great, and also the second one, here is my urls.yaml: ``` name: "Ransomfeed" kind: url url: "https://www.ransomfeed.it/?country=ITA" filter: - xpath: "//div[@class='table-responsive']//tbody/tr[1]" - html2text - strip --- name: "Hacker News" kind: url url: "https://thehackernews.com/" filter: - xpath: "//div[@class='body-post clear'][1]" - html2text - shellpipe: "tr -cd '\\11\\12\\15\\40-\\176'" - strip --- name: "Red Hot Cyber" kind: url url: "https://www.redhotcyber.com/" filter: - xpath: "(//article[contains(@class, 'elementor-post')])[1]" - html2text - strip --- name: "Commissariato di PS Notizie" kind: url url: "https://www.commissariatodips.it/notizie/index.html" filter: - xpath: "//div[@class='media article articletype-0 topnews dotted-h'][1]" - html2text - strip I just want to use urlwatch for when specific sites publuish a new article. ``` The thing is, that this 4 urls that I set up they all pass the test: ``` user@ubuntu:~$ urlwatch --test-filter 1 16078 2024-06-24 15:33:25 Compagnia Trasporti Integrati S.R.L monti Italy user@ubuntu:~$ urlwatch --test-filter 2 New MOVEit Transfer Vulnerability Under Active Exploitation - Patch ASAP! Jun 26, 2024 Vulnerability / Data Protection A newly disclosed critical security flaw impacting Progress Software MOVEit Transfer is already seeing exploitation attempts in the wild shortly after details of the bug were publicly disclosed. The vulnerability, tracked as CVE-2024-5806 (CVSS score: 9.1), concerns an authentication bypass that impacts the following versions - From 2023.0.0 before 2023.0.11 From 2023.1.0 before 2023.1.6, and From 2024.0.0 before 2024.0.2 "Improper authentication vulnerability in Progress MOVEit Transfer (SFTP module) can lead to Authentication Bypass," the company said in an advisory released Tuesday. Progress has also addressed another critical SFTP-associated authentication bypass vulnerability (CVE-2024-5805, CVSS score: 9.1) affecting MOVEit Gateway version 2024.0.0. Successful exploitation of the flaws could allow attackers to bypass SFTP authentication and gain access to MOVEit Transfer and Gateway systems. watchTowr Labs has since published additional technical specifi user@ubuntu:~$ urlwatch --test-filter 3 Cybercrime e Dark Web 150.000 dollari. Il costo di uno 0-Day UAF nel Kernel Linux sul Dark Web Recentemente è emerso un allarme nel mondo della sicurezza informatica: un attore malintenzionato ha annunciato la vendita di una vulnerabilità 0-Day di tipo Use After Free (UAF) nel kernel Linux su un noto forum del dark web. Questa vulnerabilità, se sfruttata, permetterebbe l’esecuzione di codice con privilegi elevati, rappresentando una RHC Dark Lab 26/06/2024 16:20 user@ubuntu:~$ urlwatch --test-filter 4 25.06.2024 POLIZIA DI STATO E ANCI PIEMONTE: PATTO PER LA CYBER SICUREZZA È stato siglato presso la Questura di Torino il Protocollo d’Intesa tra il Centro Operativo Sicurezza Cibernetica della Polizia Postale Piemonte e... ``` But only the first and the third one actually send me something to discord(I used a discord webhook for reporting) And also the console shows this: ``` user@ubuntu:~$ urlwatch =========================================================================== 01. 
NEW: Commissariato di PS Notizie =========================================================================== --------------------------------------------------------------------------- NEW: Commissariato di PS Notizie ( https://www.commissariatodips.it/notizie/index.html ) --------------------------------------------------------------------------- -- ``` And this is what I got from Discord: ``` =========================================================================== 01. NEW: Commissariato di PS Notizie =========================================================================== --------------------------------------------------------------------------- NEW: Commissariato di PS Notizie ( https://www.commissariatodips.it/notizie/index.html ) --------------------------------------------------------------------------- -- urlwatch 2.28, Copyright 2008-2023 Thomas Perl Website: https://thp.io/2008/urlwatch/ Support urlwatch development: https://github.com/sponsors/thp watched 4 URLs in 0 seconds ``` I don't know if I am missing out on something important but I can't seem to figure out the issue by looking at the wiki
1medium
Title: Prepare universal http interceptor for both static and browser crawlers tests Body: Currently in our tests sometimes respx is used to mock http traffic for static crawlers and for PlaywrightCrawler mostly real requests are done. It would be convenient, faster and more robust to create a fixture that can mock both. For Playwright related browser requests it can be done using custom browser **BrowserPool**, **page.route** for example: ```python class _StaticRedirectBrowserPool(BrowserPool): """BrowserPool for redirecting browser requests to static content.""" async def new_page( self, *, page_id: str | None = None, browser_plugin: BaseBrowserPlugin | None = None, proxy_info: ProxyInfo | None = None, ) -> CrawleePage: crawlee_page = await super().new_page(page_id=page_id, browser_plugin=browser_plugin, proxy_info=proxy_info) await crawlee_page.page.route( '**/*', lambda route: route.fulfill( status=200, content_type='text/plain', body='<!DOCTYPE html><html><body>What a body!</body></html>' ), ) return crawlee_page ```
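For the static-crawler half, a minimal respx sketch (assuming the HTTP client is httpx-based) that serves the same static body as the Playwright route above:

```python
import httpx
import respx

@respx.mock
def test_static_page_is_mocked() -> None:
    # Route a known URL to the same static content used by _StaticRedirectBrowserPool
    respx.get("https://test.io/").mock(
        return_value=httpx.Response(
            200, text="<!DOCTYPE html><html><body>What a body!</body></html>"
        )
    )
    assert "What a body!" in httpx.get("https://test.io/").text
```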
1medium
Title: docs: explain idempotent requests Body: Idempotent requests are a convention of REST POST and PUT APIs whereby duplicate or retried requests have a single effect and outcome. For example, user account creation is unique, and retries return an error response. Similarly, asynchronous jobs that share the same identifier are only scheduled to run once. This concept should be documented in Jina so that authors of custom Executors know that retried requests with side effects must be handled carefully.
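A generic illustration of the contract being described (not Jina's API): the same request identifier applied twice produces exactly one effect, so a retry is a no-op:

```python
_processed = set()

def handle(request_id: str) -> str:
    # Idempotent handler: duplicates of the same request_id have a single effect.
    if request_id in _processed:
        return "already applied"
    _processed.add(request_id)
    return "applied"

assert handle("job-42") == "applied"
assert handle("job-42") == "already applied"  # the retry has no additional effect
```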
1medium
Title: The answer of statsmodels.sandbox.stats.runs is not the same with R. Body: My code of statsmodels: ``` '''Step 1: Importing libraries''' from statsmodels.sandbox.stats.runs import runstest_1samp import numpy as np '''Step 2: Loading (Importing) the Data''' seq = np.array([1,0,1,1,0,1,1,0,1,0,0,1,1,0,0 ,0,1,0,1,0,1,0,0,0,0,1,1,1]) '''Step 3: Runs Test''' res = runstest_1samp(seq) print('Z-statistic value:', np.round(res[0], 3)) print('\nP-value:', np.round(res[1], 3)) ``` result: ``` Z-statistic value: 0.578 P-value: 0.563 ``` while using R: ``` # Step 1: Load necessary libraries library(tseries) # Step 2: Load the data seq <- c(1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1) # Step 3: Runs Test res <- runs.test(as.factor(seq)) cat("Z-statistic value:", round(res$statistic, 3), "\n") cat("P-value:", round(res$p.value, 3), "\n") ``` result: ``` Z-statistic value: 0.77 P-value: 0.441 ```
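One plausible source of the gap (an assumption, not a confirmed root cause) is the small-sample continuity correction that `runstest_1samp` applies by default; disabling it should be closer to R's `tseries::runs.test`:

```python
import numpy as np
from statsmodels.sandbox.stats.runs import runstest_1samp

seq = np.array([1,0,1,1,0,1,1,0,1,0,0,1,1,0,0,0,1,0,1,0,1,0,0,0,0,1,1,1])
res = runstest_1samp(seq, correction=False)  # disable the small-sample continuity correction
print('Z-statistic value:', np.round(res[0], 3))
print('P-value:', np.round(res[1], 3))
```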
1medium
Title: [Bug]: Possible defect in backend selection with Python 3.13 on Windows Body: ### Bug summary I've ran into this while upgrading some CI to Python 3.13 Apparently a test as simple as `plt.subplots()` will crash specifically on Python 3.13 + Windows I created a minimal reprod repo, where I run the same test successfully with the following combos: - Python 3.13 + Ubuntu - Python 3.12 + Windows see https://github.com/neutrinoceros/reprod-mpl-win-python-3.13-bug/actions/runs/11257049452 From my non-expert perspective, there are multiple suspects: - Python 3.13 itself (note that in my reprod I'm using a uv-managed binary from https://github.com/indygreg/python-build-standalone, but I also obtain the same failure with a binary from Github Actions) - matplotlib - my own test configuration: I'm not explicitly configuring a matplotlib backend, but never needed to until now, so I'm not sure this would be a solution or a workaround. However, I have verified that doing so *works*. Please feel free to close this issue if this is the recommended solution. ### Code for reproduction ```Python import matplotlib.pyplot as plt def test_subplots_simple(): fig, ax = plt.subplots() ``` ### Actual outcome ``` ============================= test session starts ============================= platform win32 -- Python 3.13.0, pytest-8.3.3, pluggy-1.5.0 rootdir: D:\a\reprod-mpl-win-python-3.13-bug\reprod-mpl-win-python-3.13-bug configfile: pyproject.toml collected 1 item tests\test_1.py F [100%] ================================== FAILURES =================================== ____________________________ test_subplots_simple _____________________________ def test_subplots_simple(): > fig, ax = plt.subplots() tests\test_1.py:4: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .venv\Lib\site-packages\matplotlib\pyplot.py:1759: in subplots fig = figure(**fig_kw) .venv\Lib\site-packages\matplotlib\pyplot.py:1027: in figure manager = new_figure_manager( .venv\Lib\site-packages\matplotlib\pyplot.py:550: in new_figure_manager return _get_backend_mod().new_figure_manager(*args, **kwargs) .venv\Lib\site-packages\matplotlib\backend_bases.py:3507: in new_figure_manager return cls.new_figure_manager_given_figure(num, fig) .venv\Lib\site-packages\matplotlib\backend_bases.py:3512: in new_figure_manager_given_figure return cls.FigureCanvas.new_manager(figure, num) .venv\Lib\site-packages\matplotlib\backend_bases.py:1797: in new_manager return cls.manager_class.create_with_canvas(cls, figure, num) .venv\Lib\site-packages\matplotlib\backends\_backend_tk.py:483: in create_with_canvas window = tk.Tk(className="matplotlib") _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <tkinter.Tk object .>, screenName = None, baseName = 'pytest' className = 'matplotlib', useTk = True, sync = False, use = None def __init__(self, screenName=None, baseName=None, className='Tk', useTk=True, sync=False, use=None): """Return a new top level widget on screen SCREENNAME. A new Tcl interpreter will be created. BASENAME will be used for the identification of the profile file (see readprofile). It is constructed from sys.argv[0] without extensions if None is given. CLASSNAME is the name of the widget class.""" self.master = None self.children = {} self._tkloaded = False # to avoid recursions in the getattr code in case of failure, we # ensure that self.tk is always _something_. 
self.tk = None if baseName is None: import os baseName = os.path.basename(sys.argv[0]) baseName, ext = os.path.splitext(baseName) if ext not in ('.py', '.pyc'): baseName = baseName + ext interactive = False > self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) E _tkinter.TclError: Can't find a usable init.tcl in the following directories: E C:/Users/runneradmin/AppData/Roaming/uv/python/cpython-3.13.0-windows-x86_64-none/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/python/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/python/library C:/Users/runneradmin/AppData/Roaming/uv/library C:/Users/runneradmin/AppData/Roaming/uv/tcl8.6.12/library C:/Users/runneradmin/AppData/Roaming/tcl8.6.12/library E E E E This probably means that Tcl wasn't installed properly. C:\Users\runneradmin\AppData\Roaming\uv\python\cpython-3.13.0-windows-x86_64-none\Lib\tkinter\__init__.py:2459: TclError =========================== short test summary info =========================== FAILED tests/test_1.py::test_subplots_simple - _tkinter.TclError: Can't find a usable init.tcl in the following directories: C:/Users/runneradmin/AppData/Roaming/uv/python/cpython-3.13.0-windows-x86_64-none/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/python/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/python/library C:/Users/runneradmin/AppData/Roaming/uv/library C:/Users/runneradmin/AppData/Roaming/uv/tcl8.6.12/library C:/Users/runneradmin/AppData/Roaming/tcl8.6.12/library This probably means that Tcl wasn't installed properly. ============================= 1 failed in 12.18s ============================== ``` ### Expected outcome no failure ### Additional information _No response_ ### Operating system Windows ### Matplotlib Version 3.9.2 ### Matplotlib Backend ? ### Python version 3.13.0 ### Jupyter version _No response_ ### Installation None
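Since the report already confirms that explicitly configuring a backend works, a minimal workaround sketch for CI is to pin a non-GUI backend before pyplot is imported (e.g. in `conftest.py`), which avoids the Tcl/Tk lookup entirely:

```python
# conftest.py
import matplotlib

matplotlib.use("Agg")  # non-GUI backend; plt.subplots() no longer needs a working Tcl/Tk install
```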
1medium
Title: Ray Tune AimLoggerCallback does not support a repo server? Body: ### What happened + What you expected to happen ![Image](https://github.com/user-attachments/assets/f6ffcc1c-5ae9-4c6c-ae73-4036cb5b4b85) The Aim config is shown above, and the result in the Aim UI is below: only one trial is tracked in Aim, the experiments of the other trials do not match the first one (my Ray Tune run name is hpo), their status stays "running", and nothing useful is tracked when the trial completes. When I set the Aim repo to a local path, it works well. Does the AimLoggerCallback not support a repo server? ![Image](https://github.com/user-attachments/assets/44a3f495-95ee-4649-9b42-ed9eba00f095) ### Versions / Dependencies ray 2.40.0 aim 3.19.0 ### Reproduction script
```python
def run_config(args):
    aim_server = "aim://172.20.32.185:30058"
    print("aim server is :", aim_server)
    return RunConfig(
        name=args.name,
        callbacks=[AimLoggerCallback(
            repo=aim_server,
            system_tracking_interval=None
        )],
        storage_path=args.storage_path,
        log_to_file=True
    )
```
### Issue Severity High: It blocks me from completing my task.
2hard
Title: CryptoCurrencies.get_digital_currency_exchange_rate doesn't work Body: This function sends the API call: https://www.alphavantage.co/query?function=CURRENCY_EXCHANGE_RATE&symbol=BTC&market=EUR&apikey=XXXX&datatype=json The right one would be: https://www.alphavantage.co/query?function=CURRENCY_EXCHANGE_RATE&from_currency=BTC&to_currency=EUR&apikey=XXXX&datatype=json To fix this, the keyword arguments of CryptoCurrencies.get_digital_currency_exchange_rate should be from_currency and to_currency (line 63 in cryptocurrencys.py). Then the return value should be changed to: FUNCTION_KEY, 'Realtime Currency Exchange Rate', None. But I'm not sure about the Meta_data = None; I just couldn't figure out a second entry for the dict. With these changes it works for the default output format but not for output_format = 'pandas'. Maybe someone can fix this completely.
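For reference, a small sketch of the corrected request described above, issued directly with `requests` (placeholder API key); this is not the library's internal implementation:

```python
import requests

params = {
    "function": "CURRENCY_EXCHANGE_RATE",
    "from_currency": "BTC",
    "to_currency": "EUR",
    "apikey": "XXXX",      # placeholder
    "datatype": "json",
}
resp = requests.get("https://www.alphavantage.co/query", params=params)
print(resp.json().get("Realtime Currency Exchange Rate"))
```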
1medium
Title: Composite (multi-column) features Body: ### Feature request Structured data types (graphs etc.) might often be most efficiently stored as multiple columns, which then need to be combined during feature decoding. Although it is currently possible to nest features as structs, my impression is that, in particular when dealing with e.g. a feature composed of multiple numpy arrays / ArrayXD's, it would be more efficient to store each ArrayXD as a separate column (though I'm not sure by how much). Perhaps specification / implementation could be supported by something like: ``` features = Features(**{("feature0", "feature1"): Features(feature0=Array2D((None, 10), dtype="float32"), feature1=Array2D((None, 10), dtype="float32"))}) ``` ### Motivation Defining efficient composite feature types based on numpy arrays for representing data such as graphs with multiple node and edge attributes is currently challenging. ### Your contribution Possibly able to contribute
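For comparison, a sketch of what already works with the existing API: keeping each array as its own top-level column, which is exactly what the proposed composite feature would bundle together at decoding time:

```python
from datasets import Array2D, Features

# Two independent columns today; the request above would group them into one feature
features = Features({
    "feature0": Array2D(shape=(None, 10), dtype="float32"),
    "feature1": Array2D(shape=(None, 10), dtype="float32"),
})
```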
1medium
Title: [Bug] Tacotron2-DDC denial of service + bizarre behavior when input ends with "?!?!" Body: ### Describe the bug Before I start, I just want to say this is funniest bug I've come across in my 20+ years of software development. To keep the issue a bit more readable, I've put the audio uploads in detail tags. Click on the arrow by each sample to hear it. --- Adding on `?!?!` to the end of a prompt using Tacotron2-DDC causes the decoder to trail off (hence the "DOS" aspect of this bug). After `max_decoder_steps` is exceeded, the audio gets dumped to disk and the results are... well, somehow both nightmare fuel and the most hilarious sounds at the same time. After the original prompt is finished speaking, it trails off into repeating bizarre remnants of the prompt over and over, akin to a baby speaking, or someone having a mental break down. In some cases it sounds much more, uh... explicit, depending on what was in the prompt. Note how it says the prompt correctly before trailing off. <details><summary><code>squibbidy bop boop doop bomp pewonkus dinkus womp womp womp deebop scoop top lop begomp?!?!?!</code></summary> <p> [boopdoop.webm](https://github.com/coqui-ai/TTS/assets/885648/233570c2-0b5c-48bb-8a84-a92605a127de) </p> </details> It appears the question marks / bangs must come at the end of the input text; being present in the middle of the prompt seems to work fine. <details><summary><code>before the question marks?!?!?!? after them</code></summary> <p> [middle.webm](https://github.com/coqui-ai/TTS/assets/885648/409915f4-e49f-4e63-98c4-e7de2fbd99f9) </p> </details> Conversely, removing ` after them` from the prompt causes the bug, but it completes before `max_decoder_steps` is exceeded, suggesting that the decoder doesn't go off into infinity but has _some_ point of termination, albeit exponentially beyond the input text length. <details><summary><code>before the question marks?!?!?!?!</code></summary> <p> [just-before.webm](https://github.com/coqui-ai/TTS/assets/885648/5220c870-a83d-4e21-a962-0258d3aa8029) </p> </details> Further, it seems as little as `?!?!` causes the bug. `?!` and `?!?` do not. <details><summary><code>what are you doing today?!</code></summary> <p> [wayd_1.webm](https://github.com/coqui-ai/TTS/assets/885648/33223b5e-68ca-488f-a52c-458940c90e1c) </p> </details> <details><summary><code>what are you doing today?!?</code></summary> <p> [wayd_2.webm](https://github.com/coqui-ai/TTS/assets/885648/6210adf5-a62b-4fb5-a3aa-c8fb9786d9ac) </p> </details> <details><summary><code>what are you doing today?!?!</code></summary> <p> [wayd_3.webm](https://github.com/coqui-ai/TTS/assets/885648/763ccb7a-af24-4984-aed0-9dd6d79e3094) </p> </details> Some inputs, however, are completely unaffected. <details><summary><code>woohoo I'm too cool for school weehee you're too cool for me?!?!?!</code></summary> <p> [in-situ-bug.webm](https://github.com/coqui-ai/TTS/assets/885648/31171d73-abcf-4e73-9d15-cfe8e8edcef0) </p> </details> ### Examples Here are more examples, just because... well, why not. 
<details><summary><code>blahblahblahblahblah?!?!?!</code></summary> <p> [blahblahblah.webm](https://github.com/coqui-ai/TTS/assets/885648/22c467c0-26b7-4d7c-8da6-1e96e03b11a7) </p> </details> <details><summary><code>ah ah ah let's count to ten AH AH AH LET'S COUNT TO TEN?!?!?!</code></summary> <p> [counttoten.webm](https://github.com/coqui-ai/TTS/assets/885648/442c72b9-f16e-4457-b6c5-d54ee15e2a28) </p> </details> <details><summary><code>holy smokes it's an artichoke gone broke woah ho ho?!?!?!</code></summary> <p> [artichoke.webm](https://github.com/coqui-ai/TTS/assets/885648/d700acf8-4b68-448d-8311-a88d9185fe40) </p> </details> <details><summary><code>hahahahaha reeeeeeeeeeeee maaaaaaaaaaaaa?!?!?!</code></summary> <p> [hahahaha.webm](https://github.com/coqui-ai/TTS/assets/885648/ae7ec6ff-7d0e-4f29-9bab-31e495a5c28b) </p> </details> <details><summary><code>scooby dooby doo where are you we've got some work to do now?!?!?!?!?!</code></summary> <p> [scoobydoo.webm](https://github.com/coqui-ai/TTS/assets/885648/a6131b66-0cdf-4068-bb5b-25816f2b1335) </p> </details> <details><summary><code>ayyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy ah ah ah le-meow u r so dang funny amirite bros?!?!?!</code></summary> <p> [ayyy_bugged.webm](https://github.com/coqui-ai/TTS/assets/885648/e5095500-6063-4d79-a8c4-bdae2e135547) </p> </details> ### To Reproduce Generate some speech with the tacotron2-ddc model with `?!?!?!` at the end. ```shell tts \ --out_path output/hello.wav \ --model_name "tts_models/en/ljspeech/tacotron2-DDC" \ --text "holy smokes it's an artichoke gone broke woah ho ho?!?!?!" ``` ### Expected behavior Just speaking the input prompt and ending, not... whatever it's doing now. ### Logs ```console $ tts --out_path output/hello.wav --model_name "tts_models/en/ljspeech/tacotron2-DDC" --text "holy smokes it's an artichoke gone broke woah ho ho?!?!?!" > tts_models/en/ljspeech/tacotron2-DDC is already downloaded. > vocoder_models/en/ljspeech/hifigan_v2 is already downloaded. > Using model: Tacotron2 > Setting up Audio Processor... | > sample_rate:22050 | > resample:False | > num_mels:80 | > log_func:np.log | > min_level_db:-100 | > frame_shift_ms:None | > frame_length_ms:None | > ref_level_db:20 | > fft_size:1024 | > power:1.5 | > preemphasis:0.0 | > griffin_lim_iters:60 | > signal_norm:False | > symmetric_norm:True | > mel_fmin:0 | > mel_fmax:8000.0 | > pitch_fmin:1.0 | > pitch_fmax:640.0 | > spec_gain:1.0 | > stft_pad_mode:reflect | > max_norm:4.0 | > clip_norm:True | > do_trim_silence:True | > trim_db:60 | > do_sound_norm:False | > do_amp_to_db_linear:True | > do_amp_to_db_mel:True | > do_rms_norm:False | > db_level:None | > stats_path:None | > base:2.718281828459045 | > hop_length:256 | > win_length:1024 > Model's reduction rate `r` is set to: 1 > Vocoder Model: hifigan > Setting up Audio Processor... 
| > sample_rate:22050 | > resample:False | > num_mels:80 | > log_func:np.log | > min_level_db:-100 | > frame_shift_ms:None | > frame_length_ms:None | > ref_level_db:20 | > fft_size:1024 | > power:1.5 | > preemphasis:0.0 | > griffin_lim_iters:60 | > signal_norm:False | > symmetric_norm:True | > mel_fmin:0 | > mel_fmax:8000.0 | > pitch_fmin:1.0 | > pitch_fmax:640.0 | > spec_gain:1.0 | > stft_pad_mode:reflect | > max_norm:4.0 | > clip_norm:True | > do_trim_silence:False | > trim_db:60 | > do_sound_norm:False | > do_amp_to_db_linear:True | > do_amp_to_db_mel:True | > do_rms_norm:False | > db_level:None | > stats_path:None | > base:2.718281828459045 | > hop_length:256 | > win_length:1024 > Generator Model: hifigan_generator > Discriminator Model: hifigan_discriminator Removing weight norm... > Text: holy smokes it's an artichoke gone broke woah ho ho?!?!?! > Text splitted to sentences. ["holy smokes it's an artichoke gone broke woah ho ho?!?!?!"] > Decoder stopped with `max_decoder_steps` 10000 > Processing time: 77.33241438865662 > Real-time factor: 0.662833806507867 > Saving output to output/hello.wav ``` ### Environment ```shell { "CUDA": { "GPU": [], "available": false, "version": "11.7" }, "Packages": { "PyTorch_debug": false, "PyTorch_version": "2.0.1+cu117", "TTS": "0.14.3", "numpy": "1.23.5" }, "System": { "OS": "Linux", "architecture": [ "64bit", "ELF" ], "processor": "x86_64", "python": "3.10.6", "version": "#2311-Microsoft Tue Nov 08 17:09:00 PST 2022" } } ``` To be clear, this is on WSL1 on Windows, so things are running under "Ubuntu". ### Additional context I'm unsure if other models are affected, I haven't tried.
2hard
Title: Remove pandas and plotly from dependencies Body: Congratulations on the release! Is it possible to remove pandas and plotly from the dependencies?
1medium
Title: Uvicorn 0.3.30 only accepts keyword arguments. Body: Looks like the argument `app` no longer exists in uvicorn's run function and only kwargs can be passed. https://github.com/kennethreitz/responder/blob/be56e92d65ca59a7d532016955127328ab38cdd8/responder/api.py#L656 ``` def run(**kwargs): ``` https://github.com/encode/uvicorn/blob/9f0ef8a9a90173fc39da34b0f56a633f40434b7d/uvicorn/main.py#L173
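A minimal sketch of how a caller could adapt, assuming the newer `run(**kwargs)` signature still accepts the application under an `app` keyword (that keyword name, and the host/port values, are assumptions to verify against the uvicorn version in use):

```python
import uvicorn

# Minimal ASGI app used only for illustration.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"ok"})

# Instead of the old positional call uvicorn.run(app, host=..., port=...),
# pass everything as keyword arguments to match run(**kwargs).
uvicorn.run(app=app, host="127.0.0.1", port=5042)
```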
1medium
Title: [BUG] SAR needs to be modified due to a breaking change in scipy Body: ### Description <!--- Describe your issue/bug/request in detail --> With scipy 1.10.1, the item similarity matrix is a dense matrix ``` print(type(model.item_similarity)) print(type(model.user_affinity)) print(type(model.item_similarity) == np.ndarray) print(type(model.item_similarity) == scipy.sparse._csr.csr_matrix) print(model.item_similarity.shape) print(model.item_similarity) <class 'numpy.ndarray'> <class 'scipy.sparse._csr.csr_matrix'> True False (1646, 1646) [[1. 0.10650888 0.03076923 ... 0. 0. 0. ] [0.10650888 1. 0.15104167 ... 0. 0.00729927 0.00729927] [0.03076923 0.15104167 1. ... 0. 0. 0.01190476] ... [0. 0. 0. ... 1. 0. 0. ] [0. 0.00729927 0. ... 0. 1. 0. ] [0. 0.00729927 0.01190476 ... 0. 0. 1. ]] ``` but with scipy 1.11.1 the item similarity matrix is sparse ``` print(type(model.item_similarity)) print(type(model.user_affinity)) type(model.item_similarity) == np.ndarray type(model.item_similarity) == scipy.sparse._csr.csr_matrix print(model.item_similarity.shape) <class 'numpy.ndarray'> <class 'scipy.sparse._csr.csr_matrix'> () ``` ### In which platform does it happen? <!--- Describe the platform where the issue is happening (use a list if needed) --> <!--- For example: --> <!--- * Azure Data Science Virtual Machine. --> <!--- * Azure Databricks. --> <!--- * Other platforms. --> Related to https://github.com/microsoft/recommenders/issues/1951 ### How do we replicate the issue? <!--- Please be specific as possible (use a list if needed). --> <!--- For example: --> <!--- * Create a conda environment for pyspark --> <!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` --> <!--- * ... --> ### Expected behavior (i.e. solution) <!--- For example: --> <!--- * The tests for SAR PySpark should pass successfully. --> ### Other Comments We found that the issue was that during a division in Jaccard, scipy change the type. We talked to the authors of scipy and they told us that they did a breaking change in 1.11.0 https://github.com/scipy/scipy/issues/18796#issuecomment-1619125257
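A small defensive sketch (not the fix adopted in the repo) that normalizes the similarity matrix to a plain `ndarray` regardless of what type the SciPy division produced:

```python
import numpy as np
from scipy import sparse

def ensure_dense(matrix):
    """Return a plain numpy.ndarray whether `matrix` is sparse, np.matrix, or ndarray."""
    if sparse.issparse(matrix):
        return np.asarray(matrix.todense())
    return np.asarray(matrix)

# item_similarity = ensure_dense(model.item_similarity)
```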
2hard
Title: question for hootnot 2 Body: #### CONTEXT 1. imagine a situation where _multiple_ trades are open in the _same subaccount_, _same instrument_, and _unit quantity_... Like this: ![image](https://user-images.githubusercontent.com/93844523/215911596-a906972f-a152-4fec-ba27-b790ad30fa23.png) 2. attempting to close all but one of these open trades will result in order being cancelled for FIFO reasons. #### QUESTION Do you know any tricks to test which trade is "allowed" to be closed per US FIFO law? Or stated differently, given a list of trades, can we determine which ones are NOT eligible to be closed? I understand I could query for the lowest trade_id among such a group, and that should, by definition, work. But I thought _maybe_ you might know a better way. This is all the info I see when I query for a trade... nothing lends itself to "can be closed", or im just blind ``` { "trade": { "id": "148", "instrument": "EUR_USD", "price": "1.09190", "openTime": "2023-01-26T01:48:04.744487039Z", "initialUnits": "-1", "initialMarginRequired": "0.0218", "state": "OPEN", "currentUnits": "-1", "realizedPL": "0.0000", "financing": "0.0001", "dividendAdjustment": "0.0000", "unrealizedPL": "0.0057", "marginUsed": "0.0217" }, "lastTransactionID": "266" } ```
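I am not aware of a field in the trade details that marks FIFO eligibility; a sketch that just formalizes the lowest-trade-id idea over the list returned by the open-trades endpoint (a client-side heuristic, not an official broker-side check):

```python
def fifo_closable_trade(trades, instrument):
    """Pick the open trade that FIFO should allow to close first:
    the oldest (lowest id) trade for the given instrument."""
    candidates = [t for t in trades if t["instrument"] == instrument]
    if not candidates:
        return None
    return min(candidates, key=lambda t: int(t["id"]))

# closable = fifo_closable_trade(open_trades_response["trades"], "EUR_USD")
```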
3misc
Title: Improve "Link existing booking" feature Body: When you click on "Link existing booking", you are redirected to the bookings page: ![image](https://github.com/user-attachments/assets/86e072ac-ac22-4d91-8cca-ffd40481bda2) There are two things we could improve here: - Currently, the bookings page uses the current date. It'd be better if it used the date of the event (or the first day in case of multi-day events) - The bookings page currently shows all bookings which makes it hard to find a booking which can be linked. We should filter the bookings using the linked event to only show relevant bookings. cc @Moliholy @OmeGak
1medium
Title: Re-Export apis like we do models Body: **Is your feature request related to a problem? Please describe.** Sometimes there are a lot of apis to import to even do a simple thing **Describe the solution you'd like** Models are re-exported, allowing us to ```python import my-openapi-lib.models as m from my-openapi-lib.api.do_the_things import create_thing_to_do create_thing_to_do.sync(client=c, json_body=m.ThingModel(...)) # I'd like to be able to do from my-openapi-lib import api api.do_the_things.create_thing_to_do.sync(client=c, json_body=m.ThingModel(...)) # and from my-openapi-lib.api import do_the_things do_the_things.create_thing_to_do.sync(client=c, json_body=m.ThingModel(...)) ```
1medium
Title: locking on to a single person in a multi-person frame Body: ### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question Hi, I am using yolo11-pose to assess power lifts (squats) against standards and decide whether or not the lift passes or fails based on keypoints in the critical frame. In dummy videos with only one person (the lifter) in the frame, the model and application perform perfectly. However the application is intended for competition, and in competition, while there is only one lifter on the platform at a time, there are multiple other people (spotters) visible in the frame. This creates undesirable model behavior. The desired behavior is that the model should only focus on the lifter and assess the quality of his lift. While the result derived is correct, the skeleton overlay is unstable: some times it correctly overlays the skeleton on the lifter, at other times during the lift the skeleton may be temporarily overlaid against a spotter or other person in frame. This is a problem. I have attached images to illustrate. I have tried to overcome this by specifying the lifters person id number: ``` results = model.track( source=video_file, device=device, show=False, # conf=0.7, save=True, max_det=1 ) ``` I have also tried to exclude ids which are erroneously annotated, reduce the ROI, experimented with increasing euclidean distance, and confidence weights. ``` lifter_selector: expected_center: [0.5, 0.5] roi: [0.3, 0.3, 0.7, 0.7] distance_weight: 2.0 confidence_weight: 0.8 lifter_id: 4 excluded_ids: [1, 7, 10] ``` I am having no success, and i hope that someone can help me to find a way to "fix" the bounding box and skeleton overlay to the lifter and prevent those annotations on non-litfters on the platform. thank you correct ![Image](https://github.com/user-attachments/assets/04b80fcd-6e47-435a-9c62-77ee73f0ec39) incorrect ![Image](https://github.com/user-attachments/assets/13f1f6b5-0e82-42e2-88bb-cd9a69c14898) incorrect ![Image](https://github.com/user-attachments/assets/eed88dd6-07df-4fb1-a926-db885b18d9bb) ### Additional Please let me know if you'd like me to share the code via GitHub repo. I am happy to do so. I am really hoping you can help me and i thank you in advance. Please let me know if my explanation is not clear or if you require more information.
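A sketch of one possible approach (not a built-in "lock on" feature): pick a track id once, then filter every frame by that id when drawing and scoring. The video path and the pick-the-box-nearest-the-centre heuristic are placeholders to adapt:

```python
from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")
locked_id = None  # track id of the lifter once chosen

# stream=True yields one Results object per frame; persist=True keeps ids stable across frames
for result in model.track(source="squat.mp4", stream=True, persist=True):
    if result.boxes.id is None:
        continue
    ids = result.boxes.id.int().tolist()
    if locked_id is None:
        # choose the detection whose box centre is closest to the platform centre
        centers = result.boxes.xywhn[:, :2]
        locked_id = ids[int(((centers - 0.5) ** 2).sum(dim=1).argmin())]
    if locked_id in ids:
        i = ids.index(locked_id)
        lifter_keypoints = result.keypoints.xy[i]  # keypoints of the lifter only
        # ...assess the lift and draw the skeleton for this index only
```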
1medium
Title: [DEV-1122] [feature proposal] Add native support for pandas API Body: For now, pygwalker kernel computation uses sql + duckdb for data queries. Another approach might be using the native pandas API for all those computations. Benefits of this implementation include: + Test and switch to different high-performance dataframe library, like modin, polars. It would be even better for the community if developers could customize their own query engines. <sub>[DEV-1122](https://linear.app/kanaries/issue/DEV-1122/[feature-proposal]-add-native-support-for-pandas-api)</sub>
2hard
Title: How is the a_bogus parameter generated for POST requests? Body: How is the a_bogus parameter generated for POST requests? My guess is that the request's data and param are used to generate it.
2hard
Title: How to modify the generative script to take an input image? Body: The goal is to calculate the latent variable for the input image (by running the encoder), modify that latent variable by adding/subtracting an attribute vector, and then generate a new image by running the decoder.
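A rough sketch of that pipeline, with hypothetical `encoder`/`decoder` handles since the actual function names in the script are not given here:

```python
import torch

def edit_image(encoder, decoder, image, attribute_vector, strength=1.0):
    """Encode an input image, shift its latent code along an attribute direction,
    and decode the result. `encoder(x)` is assumed to return (mu, logvar)."""
    with torch.no_grad():
        mu, logvar = encoder(image.unsqueeze(0))    # run the encoder on the input image
        z = mu                                      # use the posterior mean as the latent code
        z_edited = z + strength * attribute_vector  # add (or subtract, with strength < 0) the attribute vector
        return decoder(z_edited).squeeze(0)         # run the decoder to generate the new image
```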
2hard
Title: Is a rewrite of the documentation tutorials being considered? Body: **Describe the feature** **Motivation** A clear and concise description of the motivation of the feature. Ex1. It is inconvenient when \[....\]. Ex2. There is a recent paper \[....\], which is very helpful for \[....\]. **Related resources** If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful. **Additional context** Add any other context or screenshots about the feature request here. If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
3misc
Title: script_location PATH config is not OS agnostic Body: **Describe the bug** This setting is different depending on whether you are running on Windows or Linux ``` script_location = app/db/migrations ``` **Expected behavior** I am developing on a Windows machine and I set the value of the setting to "app\db\migrations" But then I build the Docker image, deploy, and want to run it on the server, and I have to fix the config file to "app/db/migrations" I find myself changing this quite frequently; is there a way to define this with multi-OS support in mind? Thanks **To Reproduce** Please try to provide a [Minimal, Complete, and Verifiable](http://stackoverflow.com/help/mcve) example, with the migration script and/or the SQLAlchemy tables or models involved. See also [Reporting Bugs](https://www.sqlalchemy.org/participate.html#bugs) on the website. ```py # Insert code here ``` **Error** ``` # Copy error here. Please include the full stack trace. ``` **Versions.** - OS: - Python: - Alembic: - SQLAlchemy: - Database: - DBAPI: **Additional context** <!-- Add any other context about the problem here. --> **Have a nice day!**
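Two hedged workarounds, pending a built-in answer: in practice a forward-slash path in `alembic.ini` is usually accepted on Windows as well (worth verifying for your setup), and the location can also be set programmatically with `pathlib` so it is always correct for the OS that runs it:

```python
from pathlib import Path

from alembic import command
from alembic.config import Config

# Build the Alembic config at runtime so the script location is OS-appropriate.
cfg = Config("alembic.ini")
cfg.set_main_option("script_location", str(Path("app") / "db" / "migrations"))
command.upgrade(cfg, "head")
```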
1medium
Title: Clickable Container Body: ### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [x] I added a descriptive title and summary to this issue. ### Summary Hi, it would be very nice if I could make an st.container clickable (with an on_click event), so I could build clickable boxes with awesome content. :-) Best Regards ### Why? Because I have already tried to build a tile with streamlit, and it is very, very difficult. That would make it much easier. ### How? _No response_ ### Additional Context _No response_
1medium
Title: Dashtable case-insensitive filter causes exception when the column contains null value Body: If a column in the table contains null value at some rows and you try to do a case insensitive filtering like "ine foo', a javascript exception "Cannot read property 'toString' of null" exception" will occur. Apparently it is caused by the line "lhs.toString().toUpperCase()" of fnEval() method in relational.ts failed to check whether lhs (i.e. the cell value) is null or not. A sample app to reproduce the problem. ``` from dash import html from dash import dash_table import pandas as pd from collections import OrderedDict import dash app = dash.Dash(__name__) df = pd.DataFrame(OrderedDict([ ('climate', [None, 'Snowy', 'Sunny', 'Rainy']), ('temperature', [13, 43, 50, 30]), ('city', ['NYC', None, 'Miami', 'NYC']) ])) app.layout = html.Div([ dash_table.DataTable( id='table', data=df.to_dict('records'), columns=[ {'id': 'climate', 'name': 'climate'}, {'id': 'temperature', 'name': 'temperature'}, {'id': 'city', 'name': 'city'}, ], filter_action="native", ), html.Div(id='table-dropdown-container') ]) if __name__ == '__main__': app.run_server(debug=True, port=8051) ``` Run the app, in the 'city' column header of the table, type in 'ine foo' and hit enter, which should reproduce the problem. I had a PR for fixing this bug for an old version of dash-table at https://github.com/plotly/dash-table/pull/935/files Environment: ``` dash 2.17.1 dash-core-components 2.0.0 dash-html-components 2.0.0 dash-table 5.0.0 ``` - OS: Ubuntu 22.04 - Browser: Chrome - Version: 127.0.6533.119 ![image](https://github.com/user-attachments/assets/a49e1332-8cce-4783-ae37-994a8aeb2477)
1medium
Title: Adjusting Blur Mask Modifier Changes The Brightness Body: I've never noticed this before, but I just installed the 2/3/20 nvidia version. When I adjust the blur higher or lower, the face gets significantly brighter or darker depending which way I go. Here's a video I just shared on Google Drive illustrating this. Maybe this has happened in past versions and I never noticed? But I think I would have. Let me know if this is normal. Here's the video.... https://drive.google.com/file/d/1L77hcgAt8zcGM6jpOB53e6TRDslccivJ/view?usp=sharing
1medium
Title: AssertionError: MMCV==1.7.2 is used but incompatible. Please install mmcv>=2.0.0rc4, <2.2.0. Body: Thanks for your error report and we appreciate it a lot. **Checklist** 1. I have searched related issues but cannot get the expected help. 2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help. 3. The bug has not been fixed in the latest version. **Describe the bug** A clear and concise description of what the bug is. **Reproduction** 1. What command or script did you run? ```none A placeholder for the command. ``` 2. Did you make any modifications on the code or config? Did you understand what you have modified? 3. What dataset did you use? **Environment** 1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here. 2. You may add addition that may be helpful for locating the problem, such as - How you installed PyTorch \[e.g., pip, conda, source\] - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.) **Error traceback** If applicable, paste the error trackback here. ```none A placeholder for trackback. ``` **Bug fix** If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
1medium
Title: Hatch Test Environment Dependency Resolution Fails for Python 3.10+ Body: **Describe the bug** Running `hatch run test:lint` in the default Haystack `test` environment fails for Python 3.10+ due to dependency resolution issues with `llvmlite`. The error occurs because `openai-whisper` (included in the default test environment) depends on `numba==0.53.1`, which in turn requires `llvmlite==0.36.0`. The version of `llvmlite` is incompatible with Python versions >= 3.10. **Error message** ``` × Failed to build `llvmlite==0.36.0` ╰─▶ Call to `setuptools.build_meta:__legacy__.build_wheel` failed (exit status: 1) [stderr] RuntimeError: Cannot install on Python version 3.11.11; only versions >=3.6,<3.10 are supported. hint: This usually indicates a problem with the package or the build environment. help: `llvmlite` (v0.36.0) was included because `openai-whisper` (v20240930) depends on `numba` (v0.53.1) which depends on `llvmlite`. ``` **Expected behavior** The test environment should successfully resolve dependencies and execute `hatch run test:lint` on all Python versions supported by Haystack (`>=3.8,<3.13`). **Additional context** - The issue occurs only for Python 3.10+ as `llvmlite==0.36.0` supports Python < 3.10. - Dependencies like `llvmlite` and `numba` are resolved automatically and are not explicitly included in the `extra-dependencies` section of the `test` environment in `pyproject.toml`. **To Reproduce** Steps to reproduce the behavior: 1. Clone the Haystack repository. 2. Set up a `hatch` environment with Python 3.10, 3.11, or 3.12. 3. Run `hatch run test:lint`. 4. Observe the dependency resolution failure caused by `llvmlite`. **FAQ Check** - [x] Have you had a look at [[our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)](https://docs.haystack.deepset.ai/docs/faq)? **System:** - OS: Ubuntu 24.04.1 WSL - GPU/CPU: N/A - Haystack version (commit or version number): Latest (`main` branch) - DocumentStore: N/A - Reader: N/A - Retriever: N/A
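One possible workaround (an assumption, not a maintainer-endorsed fix) is to add a floor on `numba`/`llvmlite` in the test environment's `extra-dependencies` so the resolver cannot fall back to the old `numba==0.53.1` / `llvmlite==0.36.0` pair; whether the resolver accepts this depends on the other pins:

```toml
[tool.hatch.envs.test]
extra-dependencies = [
  # ...existing entries, plus explicit floors so Python 3.10+ compatible wheels are used
  "numba>=0.57",
  "llvmlite>=0.40",
]
```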
1medium
Title: vector_animation_k3d.ipynb Body: Widgets show double objects. Interestingly, the "screenshot" output is correct. For example: OS screenshot: <img width="666" alt="screen shot 2018-06-21 at 16 52 07" src="https://user-images.githubusercontent.com/1192310/41727078-c78f06fc-7573-11e8-9d72-1483bf96f3fc.png"> ![k3d-1529592599276](https://user-images.githubusercontent.com/1192310/41727079-c7ab0640-7573-11e8-9e47-9d79403f7300.png) Correct screenshot: ![k3d-1529592599276](https://user-images.githubusercontent.com/1192310/41727275-3a63dd10-7574-11e8-90e7-cca87e62985b.png)
1medium
Title: Cannot use gensim 3.8.x when `nltk` package is installed Body: #### Problem description > What are you trying to achieve? What is the expected result? What are you seeing instead? In my script i'm trying to `import gensim.models.keyedvectors` and also import another package, that requires `nltk` package internally. Whenever i have NLTK installed in the same virtualenv (i'm not using virtualenv, but a docker image actually) - the gensim model fails to import. #### Steps/code/corpus to reproduce ``` # pip list | grep -E 'gensim|nltk' gensim 3.8.1 # pip install nltk Processing /root/.cache/pip/wheels/96/86/f6/68ab24c23f207c0077381a5e3904b2815136b879538a24b483/nltk-3.4.5-cp36-none-any.whl Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from nltk) (1.13.0) Installing collected packages: nltk Successfully installed nltk-3.4.5 # pip list | grep -E 'gensim|nltk' gensim 3.8.1 nltk 3.4.5 # python Python 3.6.8 (default, Jun 11 2019, 01:16:11) [GCC 6.3.0 20170516] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/site-packages/gensim/__init__.py", line 5, in <module> from gensim import parsing, corpora, matutils, interfaces, models, similarities, summarization, utils # noqa:F401 File "/usr/local/lib/python3.6/site-packages/gensim/corpora/__init__.py", line 14, in <module> from .wikicorpus import WikiCorpus # noqa:F401 File "/usr/local/lib/python3.6/site-packages/gensim/corpora/wikicorpus.py", line 539, in <module> class WikiCorpus(TextCorpus): File "/usr/local/lib/python3.6/site-packages/gensim/corpora/wikicorpus.py", line 577, in WikiCorpus def __init__(self, fname, processes=None, lemmatize=utils.has_pattern(), dictionary=None, File "/usr/local/lib/python3.6/site-packages/gensim/utils.py", line 1614, in has_pattern from pattern.en import parse # noqa:F401 File "/usr/local/lib/python3.6/site-packages/pattern/text/en/__init__.py", line 61, in <module> from pattern.text.en.inflect import ( File "/usr/local/lib/python3.6/site-packages/pattern/text/en/__init__.py", line 80, in <module> from pattern.text.en import wordnet File "/usr/local/lib/python3.6/site-packages/pattern/text/en/wordnet/__init__.py", line 57, in <module> nltk.data.find("corpora/" + token) File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 673, in find return find(modified_name, paths) File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 660, in find return ZipFilePathPointer(p, zipentry) File "/usr/local/lib/python3.6/site-packages/nltk/compat.py", line 228, in _decorator return init_func(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 506, in __init__ zipfile = OpenOnDemandZipFile(os.path.abspath(zipfile)) File "/usr/local/lib/python3.6/site-packages/nltk/compat.py", line 228, in _decorator return init_func(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 1055, in __init__ zipfile.ZipFile.__init__(self, filename) File "/usr/local/lib/python3.6/zipfile.py", line 1131, in __init__ self._RealGetContents() File "/usr/local/lib/python3.6/zipfile.py", line 1198, in _RealGetContents raise BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file ``` #### Versions ```python python Python 3.6.8 (default, Jun 11 2019, 01:16:11) [GCC 6.3.0 20170516] on linux Type "help", "copyright", 
"credits" or "license" for more information. >>> import platform; print(platform.platform()) Linux-5.0.0-050000rc8-generic-x86_64-with-debian-9.11 >>> import sys; print("Python", sys.version) Python 3.6.8 (default, Jun 11 2019, 01:16:11) [GCC 6.3.0 20170516] >>> import numpy; print("NumPy", numpy.__version__) NumPy 1.17.4 >>> import scipy; print("SciPy", scipy.__version__) SciPy 1.3.3 >>> import gensim; print("gensim", gensim.__version__) gensim 3.8.1 >>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION) FAST_VERSION 1 ```
2hard
Title: Feature Request: Support formatting in MQTT response topic Body: I am able to use variables to paramaterize the URL when doing HTTP tests. Additionally, when publishing via MQTT, the publish topic can be parameterized. However, the response topic does not allow parameters. All the examples and references to topics in MQTT land in tavern appear to hard code the topics. * In a production system, a device may make use of wildcards in the topic. A good example is publishing a config value from the cloud to an IoT device, where the topic includes some unique identifier * Another example is the [request/reply](https://www.emqx.com/en/blog/mqtt5-request-response) model. In MQTTv3, a design pattern used has the ACK from the device including a UUID (correlation ID) in the topic name that is different for each MQTT request. My goal is shown below in the minimal example.Topics may need to change from saved values, or perhaps from other areas such as `conftest.py`. ```python # conftest.py import pytest PING_THING = "foo" PONG_THING = "bar" @pytest.fixture def get_ping_topic(): return PING_THING @pytest.fixture def get_pong_topic(): return PONG_THING ``` ```yaml # test_mqtt.tavern.yaml --- test_name: Test mqtt message response paho-mqtt: client: transport: websockets client_id: tavern-tester connect: host: localhost port: 9001 timeout: 3 stages: - name: step 1 - ping/pong mqtt_publish: # Use value from conftest.py topic: /device/123/{get_ping_topic} payload: ping mqtt_response: # Use value from conftest.py topic: /device/123/{get_ping_topic} save: json: first_val: "thing.device_id[0]" timeout: 5 - name: step 2 - using saved data in topic mqtt_publish: # Use saved value from an earlier stage topic: /device/123/ping_{first_val} payload: ping mqtt_response: # Use saved value from an earlier stage topic: /device/123/pong_{first_val} timeout: 5 ``` I am willing to discuss the work needed and doing the contribution. When I get a chance, I will create a branch with an integration test that this is failing on. I have not found any workaround yet.
1medium
Title: Choropleth map behavior Body: Hi, I was looking at the choropleth map function of this module, but the input argument seems a little bit confusing: 1) The first input argument, df, is the "data being plotted". From your example, I deduced that df should be the information obtained from some shapefiles. If this is true, could you update the documentation and variable name, so that it is less confusing? 2) From the documentation (as well as the examples), I cannot figure out how to plot continuously-valued data (for example, population density per state). The more intuitive input structure, in my opinion, is using a Python dictionary, with keys being names of polygons (corresponding to the polygons in the shapefiles), and values being the values to be mapped into colors (which can either be categorical or continuous). In this way, potential users who have their own shapefiles and the corresponding data (stored as a Python dict) can easily plot a choropleth map. 3) The USA map lacks Alaska and Hawaii. Incidentally, I was working on a similar choropleth map plotting problem, for which I submitted a pull request to matplotlib: https://github.com/matplotlib/basemap/pull/366. I added Alaska and Hawaii elegantly into the corner of the USA map, and I also used Python dictionary as my input data structure. Just offering my two cents.
1medium
Title: Low video resolution and abnormal bitrate at the same resolution Body: The creator: ![image](https://github.com/JoeanAmier/TikTokDownloader/assets/32630090/84c7de0e-f265-4b00-a4c8-9bfa81b6b00e) Address ``` https://www.douyin.com/user/MS4wLjABAAAAJ6Lr2yJ-SAFg7GjMu7E2nHZd1nhGhzsygP7_uqQXlI0 ``` --- The video ID in the screenshot below is ```7214153200159558973``` ![image](https://github.com/JoeanAmier/TikTokDownloader/assets/32630090/22b96b43-3291-4dcb-a907-81d8eef899f4) The left side was downloaded by this project and the right side was downloaded by [TikTokDownload](https://github.com/Johnserf-Seed/TikTokDownload); the same applies below. Because the resolution is a bit lower, the file size is also a bit smaller. --- The next video ID is ```7198388067944631610``` At the same resolution the file sizes are different, ![image](https://github.com/JoeanAmier/TikTokDownloader/assets/32630090/d7c686b1-bb04-43a2-b105-e5c534d57f0a) and inspection shows the bitrate is lower. ![image](https://github.com/JoeanAmier/TikTokDownloader/assets/32630090/bb9b3860-316b-4ce0-a313-61bbae1f382f) --- There is also a case where a 0 KB file is downloaded even though the video plays normally; the address is ```https://www.douyin.com/video/7020014643720539429``` ![image](https://github.com/JoeanAmier/TikTokDownloader/assets/32630090/6cd6447d-8820-447d-b0a9-5d13efaf615d) There are no errors in the console. --- The low-resolution problem is not an isolated case: out of a hundred-odd videos, more than half are affected. ![image](https://github.com/JoeanAmier/TikTokDownloader/assets/32630090/f241dbc2-e648-4122-8ee2-bd470ad42b21) Roughly one 0 KB problem appears for every three or four creators downloaded.
1medium
Title: [Open-source self-recommendation] UtilMeta | A minimalist and efficient Python backend meta-framework Body: ## Recommended project - Project URL: https://github.com/utilmeta/utilmeta-py - Category: Python - Project title: UtilMeta | A minimalist and efficient Python backend meta-framework - Project description: UtilMeta is a backend meta-framework for developing API services. Based on the standard Python type annotations, it efficiently builds declarative interfaces and ORM queries, automatically parses request parameters and generates OpenAPI documentation, and makes developing RESTful APIs fast while producing clean, clear code. It supports mainstream Python frameworks (django, flask, fastapi, starlette, sanic, tornado, etc.) as the runtime implementation or for progressive integration. [Framework homepage](https://utilmeta.com/py) [Quick start](https://docs.utilmeta.com/py/zh/) [Case tutorials](https://docs.utilmeta.com/py/zh/tutorials/) - Highlights: * UtilMeta's declarative ORM can efficiently handle most CRUD scenarios: what you define is what you get, and it automatically avoids the N+1 relational query problem while staying concise and efficient * UtilMeta can use mainstream Python frameworks as its runtime implementation; a single parameter switches the entire underlying implementation, and you can also integrate it progressively into an existing project * UtilMeta comes with a full-lifecycle API management platform that gives small teams a one-stop solution for API documentation and debugging, log queries, server monitoring, alert notifications, incident management, and other operations and management needs - Sample code: ```python from utilmeta.core import api, orm from django.db import models from .models import User, Article class UserSchema(orm.Schema[User]): username: str articles_num: int = models.Count('articles') class ArticleSchema(orm.Schema[Article]): id: int author: UserSchema content: str class ArticleAPI(api.API): async def get(self, id: int) -> ArticleSchema: return await ArticleSchema.ainit(id) ``` - Screenshots: ![image](https://github.com/521xueweihan/HelloGitHub/assets/22250415/ae0c533c-40dd-42a5-b4ec-38c06ce67e62) ![image](https://github.com/521xueweihan/HelloGitHub/assets/22250415/6d9222c0-8f42-47e0-91eb-acd98d8f2570) ![image](https://github.com/521xueweihan/HelloGitHub/assets/22250415/0608e49c-e842-4ba3-9530-d5bf0f8eb011) - Planned updates: 1. Support WebSocket API development 2. Support adapting more Python frameworks as runtime implementations 3. Support more ORM model implementations, such as tortoise-orm, peewee, sqlalchemy
3misc
Title: Using Chronos and Chronos-Bolt models on an offline machine Body: ## Description In the `timeseries` module, there is no way to change the configuration to work on an offline machine when I already have the models on that machine. How can I handle that?
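A hedged sketch of one way this might work: point the Chronos model at a local checkpoint directory instead of a Hugging Face hub id. The `"Chronos"` model key and the `model_path` hyperparameter name are assumptions to check against the AutoGluon docs for your version, and the local directory path is a placeholder:

```python
from autogluon.timeseries import TimeSeriesPredictor

# train_data: a TimeSeriesDataFrame prepared beforehand
predictor = TimeSeriesPredictor(prediction_length=48)
predictor.fit(
    train_data,
    hyperparameters={
        # assumption: a local directory containing the downloaded Chronos-Bolt weights
        "Chronos": {"model_path": "/models/chronos-bolt-base"},
    },
)
```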
1medium
Title: latest dbfixtures release breaks unit tests Body: See this commit: https://github.com/ClearcodeHQ/pytest-dbfixtures/commit/572629afc475f446fda09fabc4d33a613dd2af6f Note that passing '?' into the fixtures (see https://github.com/manahl/arctic/blob/master/arctic/fixtures/arctic.py#L15 ) looks to be no longer supported.
1medium
Title: Support GAN based model training with deepspeed which need to setup fabric twice Body: ### Description & Motivation I have same issue like https://github.com/Lightning-AI/pytorch-lightning/issues/17856 when training dcgan with fabric + deepspeed. The official example works fine with deepspeed: https://github.com/microsoft/DeepSpeedExamples/blob/master/training/gan/gan_deepspeed_train.py After adapting it to use fabric, ``` import torch import torch.backends.cudnn as cudnn import torch.nn as nn import torch.utils.data import torchvision.datasets as dset import torchvision.transforms as transforms import torchvision.utils as vutils from torch.utils.tensorboard import SummaryWriter from time import time from lightning.fabric import Fabric from gan_model import Generator, Discriminator, weights_init from utils import get_argument_parser, set_seed, create_folder def get_dataset(args): if torch.cuda.is_available() and not args.cuda: print("WARNING: You have a CUDA device, so you should probably run with --cuda") if args.dataroot is None and str(args.dataset).lower() != 'fake': raise ValueError("`dataroot` parameter is required for dataset \"%s\"" % args.dataset) if args.dataset in ['imagenet', 'folder', 'lfw']: # folder dataset dataset = dset.ImageFolder(root=args.dataroot, transform=transforms.Compose([ transforms.Resize(args.imageSize), transforms.CenterCrop(args.imageSize), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ])) nc=3 elif args.dataset == 'lsun': classes = [ c + '_train' for c in args.classes.split(',')] dataset = dset.LSUN(root=args.dataroot, classes=classes, transform=transforms.Compose([ transforms.Resize(args.imageSize), transforms.CenterCrop(args.imageSize), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ])) nc=3 elif args.dataset == 'cifar10': dataset = dset.CIFAR10(root=args.dataroot, download=True, transform=transforms.Compose([ transforms.Resize(args.imageSize), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ])) nc=3 elif args.dataset == 'mnist': dataset = dset.MNIST(root=args.dataroot, download=True, transform=transforms.Compose([ transforms.Resize(args.imageSize), transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)), ])) nc=1 elif args.dataset == 'fake': dataset = dset.FakeData(image_size=(3, args.imageSize, args.imageSize), transform=transforms.ToTensor()) nc=3 elif args.dataset == 'celeba': dataset = dset.ImageFolder(root=args.dataroot, transform=transforms.Compose([ transforms.Resize(args.imageSize), transforms.CenterCrop(args.imageSize), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ])) nc = 3 assert dataset return dataset, nc def train(args): writer = SummaryWriter(log_dir=args.tensorboard_path) create_folder(args.outf) set_seed(args.manualSeed) cudnn.benchmark = True dataset, nc = get_dataset(args) dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batchSize, shuffle=True, num_workers=int(args.workers)) ngpu = 0 nz = int(args.nz) ngf = int(args.ngf) ndf = int(args.ndf) netG = Generator(ngpu, ngf, nc, nz) netG.apply(weights_init) if args.netG != '': netG.load_state_dict(torch.load(args.netG)) netD = Discriminator(ngpu, ndf, nc) netD.apply(weights_init) if args.netD != '': netD.load_state_dict(torch.load(args.netD)) criterion = nn.BCELoss() real_label = 1 fake_label = 0 fabric = Fabric(accelerator="auto", devices=1, precision='16-mixed', strategy="deepspeed_stage_1") fabric.launch() fixed_noise = 
torch.randn(args.batchSize, nz, 1, 1, device=fabric.device) # setup optimizer optimizerD = torch.optim.Adam(netD.parameters(), lr=args.lr, betas=(args.beta1, 0.999)) optimizerG = torch.optim.Adam(netG.parameters(), lr=args.lr, betas=(args.beta1, 0.999)) netD, optimizerD = fabric.setup(netD, optimizerD) netG, optimizerG = fabric.setup(netG, optimizerG) dataloader = fabric.setup_dataloaders(dataloader) torch.cuda.synchronize() start = time() for epoch in range(args.epochs): for i, data in enumerate(dataloader, 0): ############################ # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z))) ########################### # train with real netD.zero_grad() real = data[0] batch_size = real.size(0) label = torch.full((batch_size,), real_label, dtype=real.dtype, device=fabric.device) output = netD(real) errD_real = criterion(output, label) fabric.backward(errD_real, model=netD) D_x = output.mean().item() # train with fake noise = torch.randn(batch_size, nz, 1, 1, device=fabric.device) fake = netG(noise) label.fill_(fake_label) output = netD(fake.detach()) errD_fake = criterion(output, label) fabric.backward(errD_fake, model=netD) D_G_z1 = output.mean().item() errD = errD_real + errD_fake optimizerD.step() ############################ # (2) Update G network: maximize log(D(G(z))) ########################### netG.zero_grad() label.fill_(real_label) # fake labels are real for generator cost output = netD(fake) errG = criterion(output, label) fabric.backward(errG, model=netG) D_G_z2 = output.mean().item() optimizerG.step() print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f D(x): %.4f D(G(z)): %.4f / %.4f' % (epoch, args.epochs, i, len(dataloader), errD.item(), errG.item(), D_x, D_G_z1, D_G_z2)) writer.add_scalar("Loss_D", errD.item(), epoch*len(dataloader)+i) writer.add_scalar("Loss_G", errG.item(), epoch*len(dataloader)+i) if i % 100 == 0: vutils.save_image(real, '%s/real_samples.png' % args.outf, normalize=True) fake = netG(fixed_noise) vutils.save_image(fake.detach(), '%s/fake_samples_epoch_%03d.png' % (args.outf, epoch), normalize=True) # do checkpointing #torch.save(netG.state_dict(), '%s/netG_epoch_%d.pth' % (args.outf, epoch)) #torch.save(netD.state_dict(), '%s/netD_epoch_%d.pth' % (args.outf, epoch)) torch.cuda.synchronize() stop = time() print(f"total wall clock time for {args.epochs} epochs is {stop-start} secs") def main(): parser = get_argument_parser() args = parser.parse_args() train(args) if __name__ == "__main__": main() ``` the error is like ``` Traceback (most recent call last): File "/home/ichigo/LocalCodes/github/DeepSpeedExamples/training/gan/gan_fabric_train.py", line 183, in <module> main() File "/home/ichigo/LocalCodes/github/DeepSpeedExamples/training/gan/gan_fabric_train.py", line 180, in main train(args) File "/home/ichigo/LocalCodes/github/DeepSpeedExamples/training/gan/gan_fabric_train.py", line 152, in train fabric.backward(errG, model=netG) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/lightning/fabric/fabric.py", line 449, in backward self._strategy.backward(tensor, module, *args, **kwargs) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/lightning/fabric/strategies/strategy.py", line 191, in backward self.precision.backward(tensor, module, *args, **kwargs) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/lightning/fabric/plugins/precision/deepspeed.py", line 91, in backward model.backward(tensor, *args, **kwargs) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", 
line 15, in wrapped_fn ret_val = func(*args, **kwargs) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1976, in backward self.optimizer.backward(loss, retain_graph=retain_graph) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2056, in backward self.loss_scaler.backward(loss.float(), retain_graph=retain_graph) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward scaled_loss.backward(retain_graph=retain_graph) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_tensor.py", line 522, in backward torch.autograd.backward( File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/torch/autograd/__init__.py", line 266, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 903, in reduce_partition_and_remove_grads self.reduce_ready_partitions_and_remove_grads(param, i) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1416, in reduce_ready_partitions_and_remove_grads self.reduce_independent_p_g_buckets_and_remove_grads(param, i) File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 949, in reduce_independent_p_g_buckets_and_remove_grads new_grad_tensor = self.ipg_buffer[self.ipg_index].narrow(0, self.elements_in_ipg_bucket, param.numel()) TypeError: 'NoneType' object is not subscriptable ``` cc @williamFalcon @Borda @carmocca @awaelchli
2hard
Title: No `tensorboardX.__version__` attribute Body: Title says it all... it would be nice to have a `__version__` attribute on this module. I'm running under Python 3.6.6. ```>>> import tensorboardX >>> tensorboardX <module 'tensorboardX' from '/home/dave/.conda/envs/dk_env/lib/python3.6/site-packages/tensorboardX/__init__.py'> >>> tensorboardX.__version__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'tensorboardX' has no attribute '__version__'```
0easy
Title: support `accept` header for /status/{codes} endpoint Body: Currently (v0.9.2) the `/status/{codes}` endpoint always returns a `text/plain` content type. Could you please provide a new feature: if `accept: some-mime-type` is defined in the request, then this mime type should be returned in the response `content-type` header.
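To make the request concrete, a minimal Flask sketch of the desired behaviour (an illustration of the idea only, not httpbin's actual handler):

```python
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/status/<int:code>")
def status(code):
    # echo back the client's preferred mime type when an Accept header is present
    mimetype = request.accept_mimetypes.best or "text/plain"
    return Response(status=code, mimetype=mimetype)
```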
1medium
Title: The results are not the same each time Body: Over the past six months, I have been working with MMDetection, and I would like to seek advice from the community. I have observed that, despite using the same dataset and code, the results vary with each run. To address this, I have explicitly set randomness = dict(seed=0, deterministic=True) in my configuration. I have also experimented with multiple versions of MMDetection, including 3.0.0 and 2.28.2, but the issue persists: the results remain inconsistent across runs with identical datasets and code. Could anyone provide insights or suggestions to resolve this problem?
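For completeness, a generic seeding checklist in plain PyTorch (not an MMDetection API); even with all of this, some CUDA ops have no deterministic implementation, and data-loader worker order or augmentation randomness can still differ between runs:

```python
import os
import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True   # use deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False      # disable autotuning, which picks kernels non-deterministically
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some deterministic CUBLAS code paths
```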
1medium
Title: Support "Let’s Encrypt"? Body: https://letsencrypt.org/pt-br/
1medium
Title: `train` parameter should be explained before `download`, `transform` and `target_transform` parameter Body: ### 📚 The doc issue In [the doc](https://pytorch.org/vision/stable/generated/torchvision.datasets.QMNIST.html) of `QMNIST()`, `train` parameter is located before `**kwargs` which are `download`, `transform` and `target_transform` parameter as shown below: > class torchvision.datasets.QMNIST(root: [Union](https://docs.python.org/3/library/typing.html#typing.Union)[[str](https://docs.python.org/3/library/stdtypes.html#str), [Path](https://docs.python.org/3/library/pathlib.html#pathlib.Path)], what: [Optional](https://docs.python.org/3/library/typing.html#typing.Optional)[[str](https://docs.python.org/3/library/stdtypes.html#str)] = None, compat: [bool](https://docs.python.org/3/library/functions.html#bool) = True, train: [bool](https://docs.python.org/3/library/functions.html#bool) = True, **kwargs: [Any](https://docs.python.org/3/library/typing.html#typing.Any)) But `train` parameter is explained after `download`, `transform` and `target_transform` parameter as shown below: > Parameters: > ... > - compat ([bool](https://docs.python.org/3/library/functions.html#bool),optional) – A boolean that says whether the target for each example is class number (for compatibility with the MNIST dataloader) or a torch vector containing the full qmnist information. Default=True. > - download ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – If True, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again. > - transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g, transforms.RandomCrop > - target_transform (callable, optional) – A function/transform that takes in the target and transforms it. > - train ([bool](https://docs.python.org/3/library/functions.html#bool),optional,compatibility) – When argument ‘what’ is not specified, this boolean decides whether to load the training set or the testing set. Default: True. ### Suggest a potential alternative/fix So, `train` parameter should be explained before `download`, `transform` and `target_transform` parameter as shown below: > Parameters: > ... > - compat ([bool](https://docs.python.org/3/library/functions.html#bool),optional) – A boolean that says whether the target for each example is class number (for compatibility with the MNIST dataloader) or a torch vector containing the full qmnist information. Default=True. > - train ([bool](https://docs.python.org/3/library/functions.html#bool),optional,compatibility) – When argument ‘what’ is not specified, this boolean decides whether to load the training set or the testing set. Default: True. > - download ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – If True, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again. > - transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g, transforms.RandomCrop > - target_transform (callable, optional) – A function/transform that takes in the target and transforms it.
0easy
Title: [Client] Reconnection attemps won't stop even if a connection has successfully established. Body: ### Summary When a connection can't be made, it tries to reconnect multiple times at the same time. When a connection is finally established, the other recconectionn threads don't stop. ### My Code **wss.py** ```python import socketio import threading class WSSocket(threading.Thread): def __init__(self, wss, debug=False): super(WSSocket,self).__init__() self.sio = socketio.Client() if not debug else socketio.Client(engineio_logger=True, logger=True, reconnection_delay=3) self._wss = wss self._debug = debug def __callbacks(self): @self.sio.event def connect(): self.conn_event.set() print("### connect ###") @self.sio.event def disconnect(): print("### disconnect ###") @self.sio.event def message(data): print(f"[MSG] {data}") def loop(self): self.sio.wait() def setup(self): self.__callbacks() self.sio.connect(self._wss) def run(self): self.conn_event = threading.Event() self.setup() self.loop() ``` **main.py** ```python rom wss import WSSocket import threading if __name__ == "__main__": sck = WSSocket("wss://[REDACTED]/socket.io", debug=True) sck.start() if not sck.conn_event.wait(timeout=20): raise Exception("SERVER", "Can't connect") ``` ### Output ```code $ python main.py Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4 Polling connection accepted with {'sid': 'vQaHiton2LzcySvCACa_', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000} Engine.IO connection established Sending packet MESSAGE data 0{} Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4 WebSocket upgrade was successful WebSocket connection was closed, aborting Waiting for write loop task to end Exiting write loop task Engine.IO connection dropped Connection failed, new attempt in 2.65 seconds Exiting read loop task Exception in thread Thread-1: Traceback (most recent call last): File "/data/data/com.termux/files/usr/lib/python3.10/threading.py", line 1009, in _bootstrap_inner self.run() File "/data/data/com.termux/files/home/wss.py", line 40, in run self.setup() File "/data/data/com.termux/files/home/wss.py", line 36, in setup self.sio.connect(self._wss) File "/data/data/com.termux/files/home/env/lib/python3.10/site-packages/socketio/client.py", line 347, in connect raise exceptions.ConnectionError( socketio.exceptions.ConnectionError: One or more namespaces failed to connect Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4 Polling connection accepted with {'sid': 'E7Wv0W99sMf8VtaqACbp', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000} Engine.IO connection established Sending packet MESSAGE data 0{} Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4 WebSocket upgrade was successful WebSocket connection was closed, aborting Waiting for write loop task to end Exiting write loop task Engine.IO connection dropped Connection failed, new attempt in 3.45 seconds Exiting read loop task Connection failed, new attempt in 4.91 seconds Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4 Polling connection accepted with {'sid': '6Z2FO1Vc_-sfzHXkACb7', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000} Engine.IO connection established Sending packet MESSAGE data 0{} Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4 WebSocket upgrade was successful WebSocket 
connection was closed, aborting Waiting for write loop task to end Exiting write loop task Engine.IO connection dropped Connection failed, new attempt in 3.48 seconds Exiting read loop task Connection failed, new attempt in 4.97 seconds Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4 Polling connection accepted with {'sid': 'Bk3YO2X-VuhBf-nOACcR', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000} Engine.IO connection established Sending packet MESSAGE data 0{} Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4 WebSocket upgrade was successful WebSocket connection was closed, aborting Waiting for write loop task to end Exiting write loop task Engine.IO connection dropped Connection failed, new attempt in 3.44 seconds Exiting read loop task Connection failed, new attempt in 5.11 seconds Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4 Polling connection accepted with {'sid': 'D35HRBnfAn-I6EebACcq', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000} Engine.IO connection established Sending packet MESSAGE data 0{} Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4 WebSocket upgrade was successful WebSocket connection was closed, aborting Waiting for write loop task to end Exiting write loop task Engine.IO connection dropped Connection failed, new attempt in 3.05 seconds Exiting read loop task Connection failed, new attempt in 5.46 seconds Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4 Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4 Polling connection accepted with {'sid': 'RYiGKpFL2hG_3ltlACc3', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000} Engine.IO connection established Sending packet MESSAGE data 0{} Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4 Polling connection accepted with {'sid': 'Cly4nTqpc-Ldt-f4ACc8', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000} Engine.IO connection established Sending packet MESSAGE data 0{} Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4 WebSocket upgrade was successful Received packet MESSAGE data 0{"sid":"ZlH_lUWcSY3mLy3hACc9"} Namespace / is connected ### connect ### Bird Bot room code: Received packet MESSAGE data 0{"sid":"-sLjQelqoHWkyXyxACc-"} Reconnection successful WebSocket upgrade was successful Connection failed, new attempt in 4.63 seconds Reconnection successful Connection failed, new attempt in 5.22 seconds ^C Sending packet CLOSE data None Engine.IO connection dropped ### disconnect ### Exiting write loop task Exiting write loop task Connection failed, new attempt in 5.13 seconds Unexpected error decoding packet: "string index out of range", aborting Waiting for write loop task to end Exiting read loop task ```
1medium
Title: HTTPS Production Environment Support Body: ### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [x] I added a descriptive title and summary to this issue. ### Summary Streamlit [docs](https://docs.streamlit.io/develop/api-reference/configuration/config.toml) provide HTTPS support but don't recommend hosting in a prod environment due to lack of testing: "'DO NOT USE THIS OPTION IN A PRODUCTION ENVIRONMENT. It has not gone through security audits or performance tests." I would like to see these tests happen as the alternative would be to use a reverse proxy like nginx but with a SSL connection a lot of the diverse documentation online has recommended to disable CORS and the XSRF functionality in the config.toml file: --server.enableCORS=false --server.enableXsrfProtection=false. This seems like a greater security risk than just using the inhouse HTTPS Support provided [here](https://docs.streamlit.io/develop/concepts/configuration/https-support) and I would like to see this feature fully ready for an enterprise production grade application. ### Why? I've been greatly frustrated by the documentation to use a reverse proxy like nginx since it is all very diverse in how people configure there .conf files, a lot of the ubuntu documentation doesn't pair well with RHEL, and there's no clear explanation of the steps in why something should be set up the way it is in the examples I've seen. The need to disable CORS and Xsrf for SSL to work also worries me about whether it would be more secure to ignore the documentation warning and just use streamlits in house Server SSL features. Security should be of utmost importance and I wish Streamlit would put more eggs in the HTTPS basket. ### How? In the config.toml file of the .streamlit directory I would like to see more support for sslCertFile & sslKeyFile to be production grade ready. ### Additional Context Raising issue here as instructed from my discussion post https://discuss.streamlit.io/t/updates-on-streamlits-in-house-https-hosting/94710?u=oaklius
1medium
Title: Setting UMAP random seed seems to break the model results Body: I have a dataset of scientific abstracts. When I run the following code, I get ~65 topics: ``` sentence_model = SentenceTransformer('allenai/scibert_scivocab_cased') topic_model = BERTopic(embedding_model=sentence_model) topics, probs = topic_model.fit_transform(docs) ``` However, if I try defining a random seed in the UMAP model, I get only 2 topics and the outliers, no matter how I set the random seed: ``` umap_model = UMAP(random_state=1234) sentence_model = SentenceTransformer('allenai/scibert_scivocab_cased') topic_model = BERTopic(embedding_model=sentence_model, umap_model=umap_model) topics, probs = topic_model.fit_transform(docs) ``` This seems very weird to me; I wouldn't expect the random seed to have such a large effect on the model; but even if it does, I would expect different results when I change the value of the random seed, rather than the difference being between no random seed, and any random seed. Is this expected or is something weird going on? Thanks!
1medium
Title: Allow alt_response on MethodView Body: It would be a nice enhancement to allow the usage of the `alt_response`-Decorator on `MethodView`-classes. It could act as a shortcut to decorating every endpoint of the view with `alt_response`. An example use case would be a custom converter that rejects any pet_id not present in the pet database. Since its declared in the `route`-decorator, every endpoint will raise 404 if the pet was not found: ```python3 @blp.route('/<object_id(must_exist=True):pet_id>') @blp.alt_response(404, ErrorSchema) class PetsById(MethodView): @blp.response(200, PetSchema) def get(self, pet_id): """Get pet by ID""" return Pet.get_by_id(pet_id) @blp.response(204) def delete(self, pet_id): """Delete pet""" Pet.delete(pet_id) ```
1medium
Title: @dag imported from airflow.sdk fails with `AttributeError: 'DAG' object has no attribute 'get_run_data_interval'` Body: ### Apache Airflow version from main. 3.0.0b4 ### What happened? (This bug does not exist in beta3, only got it in the nightly from last night, can't get current main to run so could not test there) Tried to add this dag: ```python from airflow.decorators import task from pendulum import datetime from airflow.sdk import dag @dag( start_date=datetime(2025, 1, 1), schedule="@daily", catchup=False, ) def test_dag(): @task def test_task(): pass test_task() test_dag() ``` Getting the import error: ``` [2025-03-20T19:44:44.482+0000] {dag.py:1866} INFO - Sync 1 DAGs Traceback (most recent call last): File "/usr/local/bin/airflow", line 10, in <module> sys.exit(main()) ^^^^^^ File "/usr/local/lib/python3.12/site-packages/airflow/__main__.py", line 58, in main args.func(args) File "/usr/local/lib/python3.12/site-packages/airflow/cli/cli_config.py", line 49, in command return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/airflow/utils/cli.py", line 111, in wrapper return f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/airflow/utils/providers_configuration_loader.py", line 55, in wrapped_function return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/airflow/utils/session.py", line 101, in wrapper return func(*args, session=session, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/airflow/cli/commands/remote_commands/dag_command.py", line 711, in dag_reserialize dag_bag.sync_to_db(bundle.name, bundle_version=bundle.get_current_version(), session=session) File "/usr/local/lib/python3.12/site-packages/airflow/utils/session.py", line 98, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/airflow/models/dagbag.py", line 649, in sync_to_db update_dag_parsing_results_in_db( File "/usr/local/lib/python3.12/site-packages/airflow/dag_processing/collection.py", line 326, in update_dag_parsing_results_in_db for attempt in run_with_db_retries(logger=log): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/tenacity/__init__.py", line 443, in __iter__ do = self.iter(retry_state=retry_state) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/tenacity/__init__.py", line 376, in iter result = action(retry_state) ^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/tenacity/__init__.py", line 398, in <lambda> self._add_action_func(lambda rs: rs.outcome.result()) ^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result raise self._exception File "/usr/local/lib/python3.12/site-packages/airflow/dag_processing/collection.py", line 336, in update_dag_parsing_results_in_db DAG.bulk_write_to_db(bundle_name, bundle_version, dags, session=session) File "/usr/local/lib/python3.12/site-packages/airflow/utils/session.py", line 98, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/airflow/models/dag.py", line 1872, in bulk_write_to_db dag_op.update_dags(orm_dags, session=session) File "/usr/local/lib/python3.12/site-packages/airflow/dag_processing/collection.py", line 471, in 
update_dags last_automated_data_interval = dag.get_run_data_interval(last_automated_run) ^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'DAG' object has no attribute 'get_run_data_interval' ``` ### What you think should happen instead? The same dag works when using `from airflow.decorators import dag`. ### How to reproduce 1. Add the dag above 2. run airflow dags reserialize 3. see the error ### Operating System Mac M1 Pro 15.3.1 (24D70) ### Versions of Apache Airflow Providers None ### Deployment Other ### Deployment details Astro CLI ### Anything else? _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
1medium
Title: Application is_usable not checked when using bearer token in validate_bearer_token Body: **Describe the bug** We extended the provider application model and added enabled and allowed_ips fields to it. ``` class MyApplication(oauth2_provider_models.AbstractApplication): enabled = models.BooleanField(default=True, help_text='False means that user on record has frozen account') allowed_ips = models.TextField('Allowed Ips', blank=True, null=True) def is_usable(self, request): """ Determines whether the application can be used - in this case we check if the record is enabled. :param request: The HTTP request being processed. """ ip = request.META.get('REMOTE_ADDR') return self.enabled and ip in self.allowed_ips.split(" ") ``` When the application is disabled and therefore unusable, you can still use the access token. **To Reproduce** Create an access token. Extend the application and override is_usable. You can still use the access token. You can even hardcode is_usable to return False. validate_bearer_token validates whether the token is_valid, but not whether the token's application is usable: https://github.com/jazzband/django-oauth-toolkit/blob/master/oauth2_provider/oauth2_validators.py#L405 The is_usable method can be overridden using OAUTH2_PROVIDER_APPLICATION_MODEL: https://github.com/jazzband/django-oauth-toolkit/blob/master/oauth2_provider/models.py#L209 **Expected behavior** If the application is not usable, its tokens should not work. **Version** 2.2.0 <!-- Have you tested with the latest version and/or master branch? --> <!-- Replace '[ ]' with '[x]' to indicate that. --> - [X] I have tested with the latest published release and it's still a problem. - [X] I have tested with the master branch and it's still a problem.
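A possible interim workaround, until such a check lands upstream, is to plug a custom validator in through the `OAUTH2_VALIDATOR_CLASS` setting and re-check the application after the stock token validation. This is only a sketch: it assumes `validate_bearer_token` attaches the token to `request.access_token` on success, as in recent django-oauth-toolkit releases, and the module path is illustrative.

```python
# myapp/validators.py -- illustrative location, not part of the library
from oauth2_provider.oauth2_validators import OAuth2Validator

class UsableApplicationValidator(OAuth2Validator):
    def validate_bearer_token(self, token, scopes, request):
        valid = super().validate_bearer_token(token, scopes, request)
        access_token = getattr(request, "access_token", None)
        if valid and access_token is not None and access_token.application_id:
            # Also reject tokens whose application reports itself as unusable.
            return access_token.application.is_usable(request)
        return valid
```

Wired up in settings with `OAUTH2_PROVIDER = {"OAUTH2_VALIDATOR_CLASS": "myapp.validators.UsableApplicationValidator"}`.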
1medium
Title: p2pd test nat traversal Body: ![image](https://user-images.githubusercontent.com/3491902/109007737-89ef7400-76bd-11eb-9320-959a0e20963b.png) Create a simple setup with 3 nodes where - node S1 starts first; it is available publicly; it is a full DHT node - nodes A and B are _bootstrap_-ed from S1 and - all nodes use QUIC with ~secio~ tls/noise TODO: - [x] check that nodes A and B can communicate directly if they are **not** behind NAT (localhost-only) - [x] check that nodes A and B can communicate if they **are** behind NAT (ping me if you need access to a VM for S1)
2hard
Title: Can anyone explain how the bbox is calculated? Body: I have read the YOLO paper, and I just cannot understand how this net can calculate some boxes from a new image!
1medium
Title: Allow to pass **kwargs to optimizers.get Body: https://github.com/keras-team/keras/blob/f6c4ac55692c132cd16211f4877fac6dbeead749/keras/src/optimizers/__init__.py#L72-L97 When dynamically getting an optimizer by using tf.keras.optimizers.get(<OPT_NAME>), it would be extremely useful if one could also pass extra arguments to the function, so that the optimizer gets initialized properly. See below for a test example of the behavior I would like to see: ```python optimizer_name = 'adam' opt_params = {'learning_rate': 3e-3, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': True} import tensorflow as tf opt = tf.keras.optimizers.get(optimizer_name, **opt_params) assert(opt.learning_rate == opt_params['learning_rate']), "Opt learning rate not being correctly initialized" ```
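Until `get()` accepts extra keyword arguments, a small wrapper gives roughly the requested behavior. This is a workaround sketch, not the proposed API change; the parameter values are simply the ones from the example above.

```python
import tensorflow as tf

def get_optimizer(name, **opt_params):
    # get() resolves the string to an optimizer instance built with defaults;
    # re-instantiating its class lets us inject the extra keyword arguments.
    opt_cls = type(tf.keras.optimizers.get(name))
    return opt_cls(**opt_params)

opt = get_optimizer("adam", learning_rate=3e-3, beta_1=0.9, beta_2=0.999,
                    epsilon=1e-07, amsgrad=True)
assert float(opt.learning_rate) == 3e-3
```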
1medium
Title: Auto-generate API integration tests based on Swagger's API doc… Body: It seems like this should be the next logical step from auto-generating the doc with Swagger: auto-generate basic integration tests that check for input-output according to Swagger's schema (potentially with hooks/config to do more sophisticated testing). Is there any official example of such a thing? Maybe using Swagger Codegen or Dredd?
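There is no official example in this project, but one existing route to schema-driven tests is schemathesis, which generates property-based integration tests directly from the published OpenAPI document. A minimal sketch (API shown as of its 3.x releases; the schema URL is a placeholder):

```python
import schemathesis

# Point at the schema the API already serves, e.g. the Swagger JSON endpoint.
schema = schemathesis.from_uri("http://localhost:8000/swagger.json")

@schema.parametrize()
def test_api_conforms_to_schema(case):
    # Sends a generated request and validates the response against the schema.
    case.call_and_validate()
```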
1medium
Title: Users keep getting repeated Activation Emails even though they are already active Body: Hi, I am using Djoser with the SimpleJWT plugin and for some reason my users get constant user Activation emails from my server even after they have activated in the past. If they click on the activation link again it just says they have already activated. Does it have something to do with updating a `User` that triggers this email again? I would think the only time an activation email should be sent out is when a user is created and is not active. Any ideas on why this would be happening? Thanks! Djoser settings: ```python DJOSER = { 'PASSWORD_RESET_CONFIRM_URL': 'users/reset-password/{uid}/{token}', 'USERNAME_RESET_CONFIRM_URL': 'users/reset-username/confirm/{uid}/{token}', 'ACTIVATION_URL': 'users/activate/{uid}/{token}', 'SEND_ACTIVATION_EMAIL': True, 'PASSWORD_CHANGED_EMAIL_CONFIRMATION': True, 'USER_CREATE_PASSWORD_RETYPE': True, 'SET_PASSWORD_RETYPE': True, 'TOKEN_MODEL': None, 'SERIALIZERS': { 'user': 'restapi.serializers.CustomUserSerializer', 'current_user': 'restapi.serializers.CustomUserSerializer', }, } ```
1medium
Title: Lite: Plotly doesn't work when installed along with altair Body: ### Describe the bug In the `outbreak_forecast` demo running on Lite, Plotly throws the following error. `plotly==6.0.0` was released and it depends on `narwhals>=1.15.0` (https://github.com/plotly/plotly.py/blob/v6.0.0/packages/python/plotly/recipe/meta.yaml#L28). However, installing `altair` leads to install `narwhals==1.10.0` **even after `narwhals>=1.15.0` is installed and the older version of `narwhals` overrides the already installed version.** (Pyodide provides `narwhals==1.10.0` [as a native package](https://pyodide.org/en/stable/usage/packages-in-pyodide.html), but `micropip.install("plotly")` installs `narwhals` from PyPI). Then, the error says Plotly calls non-existing API of `narwhals`. This poor dependency resolution is a known bug of micropip, but looks like it's not easy to introduce a fix, so we should add some workaround on our end. (Ref: https://github.com/pyodide/micropip/issues/103 ) ``` webworker.js:368 Python error: Traceback (most recent call last): File "/lib/python3.12/site-packages/gradio/queueing.py", line 625, in process_events response = await route_utils.call_process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/gradio/route_utils.py", line 322, in call_process_api output = await app.get_blocks().process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/gradio/blocks.py", line 2044, in process_api result = await self.call_function( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/gradio/blocks.py", line 1591, in call_function prediction = await anyio.to_thread.run_sync( # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<exec>", line 3, in mocked_anyio_to_thread_run_sync File "/lib/python3.12/site-packages/gradio/utils.py", line 883, in wrapper response = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "app.py", line 33, in outbreak fig = px.line(df, x="day", y=countries) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/plotly/express/_chart_types.py", line 270, in line return make_figure(args=locals(), constructor=go.Scatter) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/plotly/express/_core.py", line 2477, in make_figure args = build_dataframe(args, constructor) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/plotly/express/_core.py", line 1727, in build_dataframe df_output, wide_id_vars = process_args_into_dataframe( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/plotly/express/_core.py", line 1343, in process_args_into_dataframe df_output[col_name] = to_named_series( ^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/plotly/express/_core.py", line 1175, in to_named_series x = nw.from_native(x, series_only=True, pass_through=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: from_native() got an unexpected keyword argument 'pass_through' ``` ### Have you searched existing issues? 🔎 - [x] I have searched and found no existing issues ### Reproduction Run the `outbreak_forecast` demo on Lite. ### Screenshot _No response_ ### Logs ```shell ``` ### System Info ```shell Lite ``` ### Severity I can work around it
1medium
Title: TypeError: Object of type timedelta is not JSON serializable Body: In newer versions of Python the following line produces an error. To fix it, convert the value to a string. https://github.com/anselal/antminer-monitor/blob/6f9803e891296c0c2807125128532f4d52024c0f/antminermonitor/blueprints/asicminer/asic_antminer.py#L115
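The exact value at the referenced line isn't shown here, so the snippet below only illustrates the generic fix: cast the `timedelta` before it reaches `json.dumps()`.

```python
import json
from datetime import timedelta

uptime = timedelta(days=1, hours=2, minutes=3)
# json.dumps({"uptime": uptime}) would raise:
# TypeError: Object of type timedelta is not JSON serializable
print(json.dumps({"uptime": str(uptime)}))  # {"uptime": "1 day, 2:03:00"}
```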
0easy
Title: Geopackage Views seem not to work, or is it the DateTime column? Body: Not sure if I am doing something wrong or if I hit an issue: when trying to create a scatter/line plot from a view in a Geopackage (Time-Value) where Time is a DateTime, nothing shows up in DataPlotly: ![Screenshot-20200508163103-1269x910](https://user-images.githubusercontent.com/731673/81415923-5735b880-9149-11ea-8ae9-5fae96003dec.png) While the underlying data table does this fine: ![Screenshot-20200508162937-1265x906](https://user-images.githubusercontent.com/731673/81415744-22c1fc80-9149-11ea-8999-896a11b375ad.png) **To Reproduce** Steps to reproduce the behavior: 1. From this zipped geopackage: [cloud3.zip](https://github.com/ghtmtt/DataPlotly/files/4599597/cloud3.zip) load both the data table and the view4 2. View4 is actually a view/join from Data and Grid 3. Try to create a scatterplot from Data: works fine; as you see, Value actually contains VERY small values (from 1E-10 till 10 or so...) 4. Then try to do this with the view4 layer: not sure what goes wrong, but the Y-axis goes from 0-4 and nothing is shown 5. Note that I actually wanted (as this is time-related data) to see a plot of one cell (this is air dispersion modelling).. Also probably of interest for @ghtmtt: https://github.com/qgis/QGIS/issues/36291 and https://github.com/qgis/QGIS/issues/26804 I created the view with 'OGC_FID' and then selection works \o/ 6. Note that it also seems that, for the view, the DateTime (Time) column is not recognized? **Desktop (please complete the following information):** - OS: Debian Testing - QGIS master - DataPlotly current
1medium
Title: [LoRA] loading LoRA into a quantized base model Body: Similar issues: 1. https://github.com/huggingface/diffusers/issues/10512 2. https://github.com/huggingface/diffusers/issues/10496 <details> <summary>Reproduction</summary> ```py import torch from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, FluxTransformer2DModel, FluxPipeline from huggingface_hub import hf_hub_download transformer_8bit = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev", subfolder="transformer", quantization_config=DiffusersBitsAndBytesConfig(load_in_8bit=True), torch_dtype=torch.bfloat16, ) pipe = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev", transformer=transformer_8bit, torch_dtype=torch.bfloat16, ).to("cuda") pipe.load_lora_weights( hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"), adapter_name="hyper-sd" ) pipe.set_adapters("hyper-sd", adapter_weights=0.125) prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts." image = pipe( prompt=prompt, height=1024, width=1024, max_sequence_length=512, num_inference_steps=8, guidance_scale=50, generator=torch.Generator().manual_seed(42), ).images[0] image[0].save("out.jpg") ``` </details> Happens on `main` as well as `v0.31.0-release` branch as well. <details> <summary>Error</summary> ```bash Traceback (most recent call last): File "/home/sayak/diffusers/load_loras_flux.py", line 18, in <module> pipe.load_lora_weights( File "/home/sayak/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1846, in load_lora_weights self.load_lora_into_transformer( File "/home/sayak/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1948, in load_lora_into_transformer inject_adapter_in_model(lora_config, transformer, adapter_name=adapter_name, **peft_kwargs) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/mapping.py", line 260, in inject_adapter_in_model peft_model = tuner_cls(model, peft_config, adapter_name=adapter_name, low_cpu_mem_usage=low_cpu_mem_usage) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 141, in __init__ super().__init__(model, config, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 184, in __init__ self.inject_adapter(self.model, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 501, in inject_adapter self._create_and_replace(peft_config, adapter_name, target, target_name, parent, current_key=key) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 239, in _create_and_replace self._replace_module(parent, target_name, new_module, target) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 263, in _replace_module new_module.to(child.weight.device) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1340, in to return self._apply(convert) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 900, in _apply module._apply(fn) File 
"/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 900, in _apply module._apply(fn) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 927, in _apply param_applied = fn(param) File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1333, in convert raise NotImplementedError( NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device. ``` </details> @BenjaminBossan any suggestions here?
2hard
Title: Easy way to build a nested dict from factory? Body: #### The problem My situation is similar to the one raised in https://github.com/FactoryBoy/factory_boy/issues/68#issuecomment-363268477 I have nested factories that use SubFactory. When I want to use factory.build to create a dict, the nested factory comes out as an object rather than as a dict. #### Proposed solution Is there a way to improve this with a `build_nested_dict` function, or is there a workaround?
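One workaround that avoids post-processing entirely is to base the factories on `factory.DictFactory`, so `SubFactory` values come out as plain dicts too. The factories below are illustrative, not the ones from the report.

```python
import factory

class AddressFactory(factory.DictFactory):
    # DictFactory builds a plain dict instead of a model instance.
    city = "Berlin"
    zip_code = "10115"

class UserFactory(factory.DictFactory):
    name = "Alice"
    address = factory.SubFactory(AddressFactory)

print(UserFactory())
# {'name': 'Alice', 'address': {'city': 'Berlin', 'zip_code': '10115'}}
```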
1medium
Title: PingAggregator returns inf for all servers that use relays Body: It seems that something is wrong with calling `rpc_ping()` for such servers (since https://health.petals.dev is able to reach such servers in 5 sec successfully).
1medium
Title: [BUG] print does not work well with DefaultDict Body: - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions. - [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md). **Describe the bug** `rich.print` inspects `__rich__` and `aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf` of object during printing. However, it does not inspect if the `__getattr__` of object will always returns something. `ipython` avoids this behaviour by checking `_ipython_canary_method_should_not_exist_` before inspecting the object. ```python from chanfig import Config from rich import print as pprint if __name__ == '__main__': config = Config(**{'hello': 'world'}) print('print', config) pprint('rich.print', config) print(config.__rich__) print(config.keys()) ``` ```bash rint Config(<class 'chanfig.config.Config'>, ('hello'): 'world' ) rich.print Config(<class 'chanfig.config.Config'>, ('hello'): 'world' ('__rich__'): Config(<class 'chanfig.config.Config'>, ) ('aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf'): Config(<class 'chanfig.config.Config'>, ) ) Config(<class 'chanfig.config.Config'>, ) dict_keys(['hello', '__rich__', 'aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf']) ``` ref: https://github.com/ZhiyuanChen/CHANfiG/issues/6
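A minimal stand-in makes the mechanism easier to see: with a `__getattr__` that always returns something, merely probing for `__rich__` (or the random canary attribute) creates spurious keys. `AutoDict` below is an illustration, not chanfig's actual implementation.

```python
class AutoDict(dict):
    # Any missing attribute access silently creates a nested entry
    # instead of raising AttributeError.
    def __getattr__(self, name):
        return self.setdefault(name, AutoDict())

cfg = AutoDict(hello="world")
print(hasattr(cfg, "__rich__"))  # True -- and the probe just created a '__rich__' key
print(cfg.keys())                # dict_keys(['hello', '__rich__'])
```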
1medium
Title: Docs should use RTD theme for local dev builds Body: This should help avoid problems like #683, where the default theme does not show these issues. https://sphinx-rtd-theme.readthedocs.io/en/stable/installing.html
1medium
Title: Add recursive option to `repository.tree(sha, recursive=False)` Body: # Overview The GitHub Tree API allows getting a tree recursively - https://developer.github.com/v3/git/trees/#get-a-tree-recursively # Ideas It should be pretty simple; for now it even works like this (a hack): ```python repository.tree('sha?recursive=1') ``` <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/39848100-add-recursive-option-to-repository-tree-sha-recursive-false?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github). </bountysource-plugin>
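For reference, the hack above simply smuggles the documented `recursive=1` query parameter into the URL; a direct call to the endpoint the proposed flag would wrap looks roughly like this (owner, repo, sha, and token are placeholders):

```python
import requests

url = "https://api.github.com/repos/octocat/Hello-World/git/trees/master"
resp = requests.get(url, params={"recursive": "1"},
                    headers={"Authorization": "token <YOUR_TOKEN>"})
resp.raise_for_status()
paths = [entry["path"] for entry in resp.json()["tree"]]  # full recursive listing
```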
1medium
Title: Error: no arguments have been passed Body: Hi, why do I get this error after I open the API interface? ![image](https://user-images.githubusercontent.com/45706420/54551243-a91cfb80-49e8-11e9-81bd-4d8ea5e8efc8.png)
1medium
Title: Very strange behavior of two-number ocr Body: Hi, I'm trying to apply OCR to a captcha image and I'm getting some strange results to share. When I use `reader = easyocr.Reader(['en'])`, the original captcha image is not recognized at all, but the `255-image` is recognized just fine. Intuitively, this is very strange behavior. ![original](https://github.com/JaidedAI/EasyOCR/assets/26109705/3876c956-72ec-4780-9725-4ffc56e7fbb8) : original image ![inverted](https://github.com/JaidedAI/EasyOCR/assets/26109705/8bae1dac-2f4b-48c6-8be5-b3616cb506b2) : inverted image `reader.readtext(image, detail = 0, allowlist="0123456789")` returns `['27']`, which is accurate for the inverted image, but `[]` (nothing detected) for the original image. Does anyone have a guess as to why this strange phenomenon occurs?
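For completeness, the inversion described above can be done in memory before calling `readtext`, which accepts NumPy arrays; the filename here is a placeholder.

```python
import cv2
import easyocr

reader = easyocr.Reader(['en'])
img = cv2.imread('captcha.png', cv2.IMREAD_GRAYSCALE)
inverted = 255 - img  # the "255 - image" trick from the report
print(reader.readtext(inverted, detail=0, allowlist="0123456789"))  # e.g. ['27']
```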
1medium
Title: Bridge from R plotly visualization to Python Dash Body: I'm currently using ggplot2 and plotly in R for a project since a specific package that I am using is only available in that language. However, when attempting to create my app in dashR, I'm finding it extremely difficult to do so, specifically because the documentation of dashR seems to be outdated. So I'm trying to find a way to take the plotly graphs from R and use them in a Python environment using Dash. Alternatively, I'd like to know if I can take some dashR components and combine them with Python Dash components. However, I'm unsure if this is even an option for Dash and would like to confirm whether that is the case! Thank you!
1medium
Title: Having an error when executing vocoder_preprocess.py Body: I'm trying to train the vocoder after training the synthesizer, but I get this error when executing vocoder_preprocess.py. ![error](https://user-images.githubusercontent.com/63226383/113960047-cf779300-985e-11eb-991c-b36f6a010fb7.PNG) So I checked tacotron.py and realized that the model returns 4 outputs. ![22](https://user-images.githubusercontent.com/63226383/113960401-69d7d680-985f-11eb-9e7e-9922912b528b.PNG) But "model" in run_synthesis.py is supposed to return 3 outputs (look at the first picture). I guess the error results from that mismatch. How can I solve this problem?
1medium
Title: Python Crashing after starting demo_toolbox.py or demo_cli.py Body: _(Onlyone) C:\Users\Tomas\Documents\Ai For Work\Voice\Real Time Clone>python demo_cli.py C:\Users\Tomas\anaconda3\envs\Onlyone\lib\site-packages\h5py\__init__.py:40: UserWarning: h5py is running against HDF5 1.10.5 when it was built against 1.10.4, this may cause problems '{0}.{1}.{2}'.format(*version.hdf5_built_version_tuple) Warning! ***HDF5 library version mismatched error*** The HDF5 header files used to compile this application do not match the version used by the HDF5 library to which this application is linked. Data corruption or segmentation faults may occur if the application continues. This can happen when an application was compiled by one version of HDF5 but linked with a different version of static or shared HDF5 library. You should recompile the application or check your shared library related settings such as 'LD_LIBRARY_PATH'. You can, at your own risk, disable this warning by setting the environment variable 'HDF5_DISABLE_VERSION_CHECK' to a value of '1'. Setting it to 2 or higher will suppress the warning messages totally. Headers are 1.10.4, library is 1.10.5 SUMMARY OF THE HDF5 CONFIGURATION ================================= General Information: ------------------- HDF5 Version: 1.10.5 Configured on: 2019-03-04 Configured by: Visual Studio 15 2017 Win64 Host system: Windows-10.0.17763 Uname information: Windows Byte sex: little-endian Installation point: C:/Program Files/HDF5 Compiling Options: ------------------ Build Mode: Debugging Symbols: Asserts: Profiling: Optimization Level: Linking Options: ---------------- Libraries: Statically Linked Executables: OFF LDFLAGS: /machine:x64 H5_LDFLAGS: AM_LDFLAGS: Extra libraries: Archiver: Ranlib: Languages: ---------- C: yes C Compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe 19.16.27027.1 CPPFLAGS: H5_CPPFLAGS: AM_CPPFLAGS: CFLAGS: /DWIN32 /D_WINDOWS /W3 H5_CFLAGS: AM_CFLAGS: Shared C Library: YES Static C Library: YES Fortran: OFF Fortran Compiler: Fortran Flags: H5 Fortran Flags: AM Fortran Flags: Shared Fortran Library: YES Static Fortran Library: YES C++: ON C++ Compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe 19.16.27027.1 C++ Flags: /DWIN32 /D_WINDOWS /W3 /GR /EHsc H5 C++ Flags: AM C++ Flags: Shared C++ Library: YES Static C++ Library: YES JAVA: OFF JAVA Compiler: Features: --------- Parallel HDF5: OFF Parallel Filtered Dataset Writes: Large Parallel I/O: High-level library: ON Threadsafety: OFF Default API mapping: v110 With deprecated public symbols: ON I/O filters (external): DEFLATE DECODE ENCODE MPE: Direct VFD: dmalloc: Packages w/ extra debug output: API Tracing: OFF Using memory checker: OFF Memory allocation sanity checks: OFF Function Stack Tracing: OFF Strict File Format Checks: OFF Optimization Instrumentation: Bye..._ Is there a way to fix it?
1medium
Title: Failed to install imagededup on linux using pip install imagededup Body: Hi, I got the following errors: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. daal4py 2021.3.0 requires daal==2021.2.3, which is not installed. scikit-image 0.18.3 requires PyWavelets>=1.1.1, but you have pywavelets 1.0.3 which is incompatible. pyerfa 2.0.0 requires numpy>=1.17, but you have numpy 1.16.6 which is incompatible. pandas 1.3.4 requires numpy>=1.17.3, but you have numpy 1.16.6 which is incompatible. numba 0.54.1 requires numpy<1.21,>=1.17, but you have numpy 1.16.6 which is incompatible. bokeh 2.4.1 requires pillow>=7.1.0, but you have pillow 6.2.2 which is incompatible. astropy 4.3.1 requires numpy>=1.17, but you have numpy 1.16.6 which is incompatible. Any idea, help? Thanks
1medium
Title: Unable to change nginx.conf in the image Body: This may seem a little strange, but I am not able to change nginx.conf file inside /etc/nginx/conf.d/nginx.conf Here is what I did: ## Method1: Change in Dockerfile My Dockerfile looks like this: ``` FROM tiangolo/uwsgi-nginx-flask:flask COPY ./app /app COPY ./changes/nginx.conf /etc/nginx/conf.d/nginx.conf COPY ./changes/nginx.conf /app/ ``` ./changes/nginx.conf looks like this: ``` server { location /app1/ { try_files $uri @app; } location @app { include uwsgi_params; uwsgi_pass unix:///tmp/uwsgi.sock; } location /static { alias /app/static; } } ``` **Note the change in location in above server block from `location /` to `location /app1/`** After the image is built and I run the docker container, I exec into the running container `sudo docker exec -ti CONTAINER_ID /bin/bash` `cat /app/nginx.conf` shows presence of updated nginx.conf file (location changes from `/` to `/app1/` BUT `cat /etc/nginx/conf.d/nginx.conf` still shows the old conf file (location is still `/`) I thought maybe the second COPY line is not getting executed successfully and docker isn't throwing error on console (sudo?). So, I changed the conf file manually and did a docker commit - the second approach mentioned below. ## Method2: Docker commit After the docker container was up and running, I used exec to login into the container using `[vagrant@localhost]$ sudo docker exec -ti CONTAINER_ID /bin/bash` `[root@CONTAINER_ID]# vi /etc/nginx/conf.d/nginx.conf` Changing the file to reflect below: ``` server { location /app1/ { try_files $uri @app; } location @app { include uwsgi_params; uwsgi_pass unix:///tmp/uwsgi.sock; } location /static { alias /app/static; } } ``` Saved the file `wq!` and exit the container. After that I did `sudo docker commit CONTAINER_ID my_new_image` Starting a new container and re-logging into container running on my_new_image still gives below nginx.conf file inside /etc/nginx/conf.d/nginx.conf: ``` server { location / { try_files $uri @app; } location @app { include uwsgi_params; uwsgi_pass unix:///tmp/uwsgi.sock; } location /static { alias /app/static; } } ``` I can tell that the my_new_image has some changes because it is larger in size than tiangolo/uwsgi-nginx-flask-docker because I had installed vim to edit the file. But somehow file changes are not persisting inside /etc/nginx/conf.d/nginx.conf. Am I doing something wrong or is it some bug?
1medium
Title: Optimum Image size Body: * face_recognition version: 1.0 * Python version: 3.5.3 * Operating System: Windows ### Description I wanted to check what the optimum image size is. This should depend on the CNN's initial input nodes, right? Also, how should the number_of_times_to_upsample parameter be tuned? The default is one, but should we use 2, 3, 4, ...? Also, while using dlib.shape_predictor("./shape_predictor_68_face_landmarks.dat") I can detect faces, but when passing the same box to face_locations no boxes were detected. ### What I Did ``` Paste the command(s) you ran and the output. If there was a crash, please include the traceback here. ```
1medium
Title: audio file as input parameter for model.transcribe works well but ndarray-typed parameter captured with sounddevice does not work Body: `model.transcribe` works well when I use an audio file as the input parameter. But when I use sounddevice to record a period of speech, save the result as an ndarray, and pass it directly to `model.transcribe`, it cannot recognize the speech. However, if I save the speech recorded by sounddevice as an audio file and then use this file as the input parameter for `model.transcribe`, the speech is recognized. What is the problem? Is there any specific format requirement for the ndarray parameter?
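When `transcribe()` is given an array instead of a path, Whisper expects a mono, 16 kHz, float32 waveform in [-1, 1], so a recording made at another sample rate or dtype typically yields nothing useful. A sketch of one way to record in a compatible format (the 5-second duration and model size are arbitrary choices):

```python
import numpy as np
import sounddevice as sd
import whisper

model = whisper.load_model("base")

sr = 16000  # the sample rate whisper assumes for raw arrays
audio = sd.rec(int(5 * sr), samplerate=sr, channels=1, dtype="float32")
sd.wait()

# sd.rec returns shape (N, 1); transcribe wants a 1-D float32 array.
result = model.transcribe(audio.flatten().astype(np.float32))
print(result["text"])
```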
1medium
Title: ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'> Body: ### Describe the bug Tried gr.load("models/black-forest-labs/FLUX.1-schnell").launch() and sometimes it throws this error. It is not consistent as sometimes it generates the image and sometimes throws this error. I tried both in a local docker container and huggingface private space. gradio==5.6.0 gradio_client==1.4.3 Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events response = await route_utils.call_process_api( File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api output = await app.get_blocks().process_api( File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2028, in process_api data = await self.postprocess_data(block_fn, result["prediction"], state) File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1834, in postprocess_data prediction_value = block.postprocess(prediction_value) File "/usr/local/lib/python3.10/site-packages/gradio/components/image.py", line 279, in postprocess saved = image_utils.save_image(value, self.GRADIO_CACHE, self.format) File "/usr/local/lib/python3.10/site-packages/gradio/image_utils.py", line 76, in save_image raise ValueError( ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'> ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction ```python import gradio as gr gr.load("models/black-forest-labs/FLUX.1-schnell").launch() ``` ### Screenshot <img width="786" alt="image" src="https://github.com/user-attachments/assets/b59938a0-b666-4975-a1bc-599134a79617"> ### Logs ```shell Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events response = await route_utils.call_process_api( File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api output = await app.get_blocks().process_api( File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2028, in process_api data = await self.postprocess_data(block_fn, result["prediction"], state) File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1834, in postprocess_data prediction_value = block.postprocess(prediction_value) File "/usr/local/lib/python3.10/site-packages/gradio/components/image.py", line 279, in postprocess saved = image_utils.save_image(value, self.GRADIO_CACHE, self.format) File "/usr/local/lib/python3.10/site-packages/gradio/image_utils.py", line 76, in save_image raise ValueError( ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'> ``` ### System Info ```shell gradio==5.6.0 gradio_client==1.4.3 ``` ### Severity Blocking usage of gradio
1medium
Title: SSL configuration in multisite instance Body: **Describe the bug** The first subsite configuration in a multi-site setup works OK. For the second site, after manual configuration of the SSL certs, we get error 400 (bad request) on the subsite. **To Reproduce** Sites created in MODE "Default" (MODE WHISTLEBLOWINGPA throws Internal server error (Unexpected)). After creation: [Sites management] -> Select 2nd subsite, [Network settings] -> Insert Hostname (save) -> Manual configuration, load SSL private key, load public CRT, press [Enable] -> redirection to site -> HTTP 400 error **version** OS DEBIAN 11, GL version 4.10.10 **Additional info** We tried to delete all the sites and also recreate them in a different order. Only the first site created after installing a fresh instance of GL always works, even if it is inserted as the second or third. The others throw error 400 after SSL configuration. There are no sub-site errors in the logs. thx
1medium
Title: Wrong Confusion Matrix Results when Training Yolov5 with a custom Dataset Body: ### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hello, I trained yolov5s on a custom dataset with 50 epochs and a batch size of 16. After training the model I evaluated its performance on the test set and noticed that the mAP was 94%, which seemed a bit odd to me. So I checked the confusion matrix and noticed that the results were wrong. Here are my training specifications: 1. **Dataset:** The dataset was mainly published for pose estimation but I did some preprocessing on RoboFlow to make it suitable for object detection (I am not sure if this can be an issue or not). The data contains a single class and is divided into 25000 images for training, 4600 images for validation and 1198 images for testing. 2. **Model Configuration:** I used the yolov5s model configuration. 3. **Training Parameters:** I used the default training parameters except for the number of classes, which I set to 1, and the activation function. Concerning the weights, I started the training from the pretrained weights of Ultralytics (yolov5s.pt). 4. **Activation Function:** I changed SiLU() to LeakyReLU(0.1015625, inplace=True). The reason I did this is that my model will later be deployed on an FPGA board and the SiLU activation function is not supported by the board. 5. **Training Platform:** I trained the model on my laptop with an RTX 4060 GPU with 8 GB of memory. I hope you can help me fix this issue as I am confused about what I can try to resolve the problem. Thank you very much in advance for your guidance and support. ![confusion_matrix](https://github.com/ultralytics/yolov5/assets/63944119/114b55b3-02b0-4d25-a50f-a8963e8c8962) ### Additional _No response_
1medium
Title: Question: how to download video files from multiple authors directly into one single folder Body: Asking the author and everyone here: how can the mp4 video files of several video authors be downloaded into the same folder, without splitting them into per-author subfolders? For example, with "root": "C:\\Users\\observer\\Desktop\\project2\\download", stop saving into each author's own path and stop appending the UID to the path, i.e. instead of C:\Users\observer\Desktop\project2\download\UID1799271955046775_rv1_发布作品 drop the "UID1799271955046775_xxx_发布作品" part and save directly to C:\Users\observer\Desktop\project2\download\rv1. This would make it convenient to process a whole series of videos in one place. Thanks for any pointers.
1medium
Title: CursorPage returns total always null Body: > from fastapi_pagination.cursor import CursorPage returns total always null. Is it that it can't return a cursor at all?
1medium
Title: TypeError: modelscope.msdatasets.utils.hf_datasets_util.load_dataset_with_ctx() got multiple values for keyword argument 'trust_remote_code' Body: In version 1.17 (the current latest on pip), passing trust_remote_code=True to MsDataset.load() raises this error. It is probably caused by another issue, #962: datasets is not listed in the requires of the modelscope package, so it has to be installed manually with pip install datasets, but that manually installed version is pip's latest release and is not necessarily compatible with the current modelscope. Environment: win10, Python 3.9-3.10. Thanks for your error report and we appreciate it a lot. **Checklist** * I have searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs) * I have searched related issues but cannot get the expected help. * The bug has not been fixed in the latest version. **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** * What command or script did you run? > A placeholder for the command. * Did you make any modifications on the code or config? Did you understand what you have modified? * What dataset did you use? **Your Environments (__required__)** * OS: `uname -a` * CPU: `lscpu` * Commit id (e.g. `a3ffc7d8`) * You may add additional information that may be helpful for locating the problem, such as * How you installed PyTorch [e.g., pip, conda, source] * Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.) Please @ corresponding people according to your problem: Model related: @wenmengzhou @tastelikefeet Model hub related: @liuyhwangyh Dataset related: @wangxingjun778 Finetune related: @tastelikefeet @Jintao-Huang Pipeline related: @Firmament-cyou @wenmengzhou Contribute your model: @zzclynn
1medium
Title: [ Gradio Client ] Handling a local audio file. ReadTimeout: The read operation timed out. Body: Hello, I'm following the tutorial to run the whisper example on the documentation with a local file. However, I'm having a couple of errors. I'm using gradio_client `1.5.4`, but I tried `1.5.0` and `1.4.3`. The same behaviour persists. Probably, this isn't a bug, but rather I'm using badly the API. Sorry if so. **A.** When following the example published [here](https://www.gradio.app/docs/python-client/introduction), it works when the file is hosted. However, when I try to the same with a local file I can't receive the correct output using a local file. ```python from gradio_client import Client, handle_file client = Client("abidlabs/whisper") results = client.predict( #audio=handle_file('https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav') audio=handle_file('test.wav') ) results ``` Output with local file ``` Loaded as API: https://abidlabs-whisper.hf.space/ ✔ ('/tmp/gradio/45665c644e65fc9edcad2a47be86e9a8c33c813652125c7cb039d4461a3c1168/test.wav',) ``` Output with github hosted file ``` Loaded as API: https://abidlabs-whisper.hf.space/ ✔ you ``` **B.** I created a more complex radio app I'm sharing from my PC. When I use it via the GUI on gradio.live, there's no problem. However, if I attempt to use it via `gradio_client`, I obtain the following ReadTimeout error. ``` --------------------------------------------------------------------------- ReadTimeout Traceback (most recent call last) [/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py](https://localhost:8080/#) in map_httpcore_exceptions() 100 try: --> 101 yield 102 except Exception as exc: 29 frames [/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py](https://localhost:8080/#) in handle_request(self, request) 249 with map_httpcore_exceptions(): --> 250 resp = self._pool.handle_request(req) 251 [/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection_pool.py](https://localhost:8080/#) in handle_request(self, request) 255 self._close_connections(closing) --> 256 raise exc from None 257 [/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection_pool.py](https://localhost:8080/#) in handle_request(self, request) 235 # Send the request on the assigned connection. 
--> 236 response = connection.handle_request( 237 pool_request.request [/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection.py](https://localhost:8080/#) in handle_request(self, request) 102 --> 103 return self._connection.handle_request(request) 104 [/usr/local/lib/python3.11/dist-packages/httpcore/_sync/http11.py](https://localhost:8080/#) in handle_request(self, request) 135 self._response_closed() --> 136 raise exc 137 [/usr/local/lib/python3.11/dist-packages/httpcore/_sync/http11.py](https://localhost:8080/#) in handle_request(self, request) 105 trailing_data, --> 106 ) = self._receive_response_headers(**kwargs) 107 trace.return_value = ( [/usr/local/lib/python3.11/dist-packages/httpcore/_sync/http11.py](https://localhost:8080/#) in _receive_response_headers(self, request) 176 while True: --> 177 event = self._receive_event(timeout=timeout) 178 if isinstance(event, h11.Response): [/usr/local/lib/python3.11/dist-packages/httpcore/_sync/http11.py](https://localhost:8080/#) in _receive_event(self, timeout) 216 if event is h11.NEED_DATA: --> 217 data = self._network_stream.read( 218 self.READ_NUM_BYTES, timeout=timeout [/usr/local/lib/python3.11/dist-packages/httpcore/_backends/sync.py](https://localhost:8080/#) in read(self, max_bytes, timeout) 125 exc_map: ExceptionMapping = {socket.timeout: ReadTimeout, OSError: ReadError} --> 126 with map_exceptions(exc_map): 127 self._sock.settimeout(timeout) [/usr/lib/python3.11/contextlib.py](https://localhost:8080/#) in __exit__(self, typ, value, traceback) 157 try: --> 158 self.gen.throw(typ, value, traceback) 159 except StopIteration as exc: [/usr/local/lib/python3.11/dist-packages/httpcore/_exceptions.py](https://localhost:8080/#) in map_exceptions(map) 13 if isinstance(exc, from_exc): ---> 14 raise to_exc(exc) from exc 15 raise # pragma: nocover ReadTimeout: The read operation timed out The above exception was the direct cause of the following exception: ReadTimeout Traceback (most recent call last) [<ipython-input-8-97f4f20faa6a>](https://localhost:8080/#) in <cell line: 0>() ----> 1 job.result() [/usr/local/lib/python3.11/dist-packages/gradio_client/client.py](https://localhost:8080/#) in result(self, timeout) 1512 >> 9 1513 """ -> 1514 return super().result(timeout=timeout) 1515 1516 def outputs(self) -> list[tuple | Any]: [/usr/lib/python3.11/concurrent/futures/_base.py](https://localhost:8080/#) in result(self, timeout) 447 raise CancelledError() 448 elif self._state == FINISHED: --> 449 return self.__get_result() 450 451 self._condition.wait(timeout) [/usr/lib/python3.11/concurrent/futures/_base.py](https://localhost:8080/#) in __get_result(self) 399 if self._exception: 400 try: --> 401 raise self._exception 402 finally: 403 # Break a reference cycle with the exception in self._exception [/usr/lib/python3.11/concurrent/futures/thread.py](https://localhost:8080/#) in run(self) 56 57 try: ---> 58 result = self.fn(*self.args, **self.kwargs) 59 except BaseException as exc: 60 self.future.set_exception(exc) [/usr/local/lib/python3.11/dist-packages/gradio_client/client.py](https://localhost:8080/#) in _inner(*data) 1130 1131 data = self.insert_empty_state(*data) -> 1132 data = self.process_input_files(*data) 1133 predictions = _predict(*data) 1134 predictions = self.process_predictions(*predictions) [/usr/local/lib/python3.11/dist-packages/gradio_client/client.py](https://localhost:8080/#) in process_input_files(self, *data) 1285 data_ = [] 1286 for i, d in enumerate(data): -> 1287 d = utils.traverse( 1288 d, 1289 
partial(self._upload_file, data_index=i), [/usr/local/lib/python3.11/dist-packages/gradio_client/utils.py](https://localhost:8080/#) in traverse(json_obj, func, is_root) 998 """ 999 if is_root(json_obj): -> 1000 return func(json_obj) 1001 elif isinstance(json_obj, dict): 1002 new_obj = {} [/usr/local/lib/python3.11/dist-packages/gradio_client/client.py](https://localhost:8080/#) in _upload_file(self, f, data_index) 1357 with open(file_path, "rb") as f: 1358 files = [("files", (orig_name.name, f))] -> 1359 r = httpx.post( 1360 self.client.upload_url, 1361 headers=self.client.headers, [/usr/local/lib/python3.11/dist-packages/httpx/_api.py](https://localhost:8080/#) in post(url, content, data, files, json, params, headers, cookies, auth, proxy, follow_redirects, verify, timeout, trust_env) 302 **Parameters**: See `httpx.request`. 303 """ --> 304 return request( 305 "POST", 306 url, [/usr/local/lib/python3.11/dist-packages/httpx/_api.py](https://localhost:8080/#) in request(method, url, params, content, data, files, json, headers, cookies, auth, proxy, timeout, follow_redirects, verify, trust_env) 107 trust_env=trust_env, 108 ) as client: --> 109 return client.request( 110 method=method, 111 url=url, [/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in request(self, method, url, content, data, files, json, params, headers, cookies, auth, follow_redirects, timeout, extensions) 823 extensions=extensions, 824 ) --> 825 return self.send(request, auth=auth, follow_redirects=follow_redirects) 826 827 @contextmanager [/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in send(self, request, stream, auth, follow_redirects) 912 auth = self._build_request_auth(request, auth) 913 --> 914 response = self._send_handling_auth( 915 request, 916 auth=auth, [/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in _send_handling_auth(self, request, auth, follow_redirects, history) 940 941 while True: --> 942 response = self._send_handling_redirects( 943 request, 944 follow_redirects=follow_redirects, [/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in _send_handling_redirects(self, request, follow_redirects, history) 977 hook(request) 978 --> 979 response = self._send_single_request(request) 980 try: 981 for hook in self._event_hooks["response"]: [/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in _send_single_request(self, request) 1012 1013 with request_context(request=request): -> 1014 response = transport.handle_request(request) 1015 1016 assert isinstance(response.stream, SyncByteStream) [/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py](https://localhost:8080/#) in handle_request(self, request) 247 extensions=request.extensions, 248 ) --> 249 with map_httpcore_exceptions(): 250 resp = self._pool.handle_request(req) 251 [/usr/lib/python3.11/contextlib.py](https://localhost:8080/#) in __exit__(self, typ, value, traceback) 156 value = typ() 157 try: --> 158 self.gen.throw(typ, value, traceback) 159 except StopIteration as exc: 160 # Suppress StopIteration *unless* it's the same exception that [/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py](https://localhost:8080/#) in map_httpcore_exceptions() 116 117 message = str(exc) --> 118 raise mapped_exc(message) from exc 119 120 ReadTimeout: The read operation timed out ``` Thanks in advance. ### Have you searched existing issues? 
🔎 - [x] I have searched and found no existing issues ### Reproduction This is the code for the first attempt with whisper: ```python from gradio_client import Client, handle_file client = Client("abidlabs/whisper") results = client.predict( audio=handle_file('test.wav') ) results ``` This is the code for my gradio local app. I'll keep it open for some time. ``` from gradio_client import Client, handle_file client = Client("https://bc3c5d74ec992db205.gradio.live") audio_file = handle_file("/content/test.wav") result = client.predict( audio_file=audio_file, # Local file named "audio.wav" model="base", # Whisper model: "tiny", "base", "small", "medium", "large", or "large-v2" task="transcribe", # Task: "transcribe" or "translate" language="auto", # Source language: "auto", "en", "es", etc. api_name="/process_audio" # The Gradio endpoint name ) ``` ### Screenshot _No response_ ### Logs ```shell ``` ### System Info ```shell It says `gradio_client` is not installed. But it's actually. Gradio Environment Information: ------------------------------ Operating System: Linux gradio version: 5.12.0 gradio_client version: 1.5.4 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 3.7.1 audioop-lts is not installed. fastapi: 0.115.6 ffmpy: 0.5.0 gradio-client==1.5.4 is not installed. httpx: 0.28.1 huggingface-hub: 0.27.1 jinja2: 3.1.5 markupsafe: 2.1.5 numpy: 1.26.4 orjson: 3.10.14 packaging: 24.2 pandas: 2.2.2 pillow: 11.1.0 pydantic: 2.10.5 pydub: 0.25.1 python-multipart: 0.0.20 pyyaml: 6.0.2 ruff: 0.9.2 safehttpx: 0.1.6 semantic-version: 2.10.0 starlette: 0.41.3 tomlkit: 0.13.2 typer: 0.15.1 typing-extensions: 4.12.2 urllib3: 2.3.0 uvicorn: 0.34.0 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2024.10.0 httpx: 0.28.1 huggingface-hub: 0.27.1 packaging: 24.2 typing-extensions: 4.12.2 websockets: 14.1 ``` ### Severity Blocking usage of gradio
1medium
Title: python 3.10+ MutableMapping ImportError Body: As of Python 3.10, MutableMapping can no longer be imported from collections (it was moved to collections.abc), which causes an import error in this library. I am currently running 3.11 and get the following error message: ![image](https://user-images.githubusercontent.com/77159128/184784938-54d83baa-9e59-4ed7-b1ff-9249170aa70a.png) When I roll my Python version back to 3.9 this issue goes away.
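The usual fix inside the affected library is a small import shim that works on both old and new Pythons; a sketch:

```python
# Compatibility shim: the alias in collections was removed in Python 3.10,
# while collections.abc has provided MutableMapping since Python 3.3.
try:
    from collections.abc import MutableMapping
except ImportError:  # pragma: no cover - only very old Pythons end up here
    from collections import MutableMapping
```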
1medium
Title: problem install with PostgreSQL database Body: Hello I'm trying using PostgreSQL database to make your example work. I have problem with logs error like this > Traceback (most recent call last): File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context context) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute cursor.execute(statement, parameters) psycopg2.ProgrammingError: column "password" cannot be cast automatically to type bytea HINT: You might need to specify "USING password::bytea". >The above exception was the direct cause of the following exception: > Traceback (most recent call last): File "/home/me/.virtualenvs/flask-noota-api/bin/invoke", line 11, in <module> sys.exit(program.run()) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/program.py", line 293, in run self.execute() File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/program.py", line 408, in execute executor.execute(*self.tasks) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/executor.py", line 114, in execute result = call.task(*args, **call.kwargs) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/tasks.py", line 114, in __call__ result = self.body(*args, **kwargs) File "/home/me/CODER/python/flask-noota-api/tasks/app/_utils.py", line 61, in wrapper return func(*args, **kwargs) File "/home/me/CODER/python/flask-noota-api/tasks/app/db.py", line 277, in init_development_data context.invoke_execute(context, 'app.db.upgrade') File "/home/me/CODER/python/flask-noota-api/tasks/__init__.py", line 73, in invoke_execute results = Executor(namespace, config=context.config).execute((command_name, kwargs)) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/executor.py", line 114, in execute result = call.task(*args, **call.kwargs) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/tasks.py", line 114, in __call__ result = self.body(*args, **kwargs) File "/home/me/CODER/python/flask-noota-api/tasks/app/_utils.py", line 61, in wrapper return func(*args, **kwargs) File "/home/me/CODER/python/flask-noota-api/tasks/app/db.py", line 163, in upgrade command.upgrade(config, revision, sql=sql, tag=tag) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/command.py", line 174, in upgrade script.run_env() File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/script/base.py", line 416, in run_env util.load_python_file(self.dir, 'env.py') File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file module = load_module_py(module_id, path) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/util/compat.py", line 68, in load_module_py module_id, path).load_module(module_id) File "<frozen importlib._bootstrap_external>", line 388, in _check_name_wrapper File "<frozen importlib._bootstrap_external>", line 809, in load_module File "<frozen importlib._bootstrap_external>", line 668, in load_module File "<frozen importlib._bootstrap>", line 268, in _load_module_shim File "<frozen importlib._bootstrap>", line 693, in _load File "<frozen importlib._bootstrap>", line 673, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 665, in exec_module File "<frozen 
importlib._bootstrap>", line 222, in _call_with_frames_removed File "migrations/env.py", line 93, in <module> run_migrations_online() File "migrations/env.py", line 86, in run_migrations_online context.run_migrations() File "<string>", line 8, in run_migrations File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/runtime/environment.py", line 807, in run_migrations self.get_context().run_migrations(**kw) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/runtime/migration.py", line 321, in run_migrations step.migration_fn(**kw) File "/home/me/CODER/python/flask-noota-api/migrations/versions/36954739c63_.py", line 28, in upgrade existing_nullable=False) File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__ next(self.gen) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/operations/base.py", line 299, in batch_alter_table impl.flush() File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/operations/batch.py", line 57, in flush fn(*arg, **kw) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/ddl/postgresql.py", line 91, in alter_column existing_nullable=existing_nullable, File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/ddl/impl.py", line 118, in _exec return conn.execute(construct, *multiparams, **params) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 945, in execute return meth(self, multiparams, params) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection return connection._execute_ddl(self, multiparams, params) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1002, in _execute_ddl compiled File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context context) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception exc_info File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause reraise(type(exception), exception, tb=exc_tb, cause=cause) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 186, in reraise raise value.with_traceback(tb) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context context) File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) column "password" cannot be cast automatically to type bytea HINT: You might need to specify "USING password::bytea". 
[SQL: 'ALTER TABLE "user" ALTER COLUMN password TYPE BYTEA '] ------------------- tested with - Python 3.5.2 - (PostgreSQL) 9.4.10 installed dependencies > alembic==0.8.10 aniso8601==1.2.0 apispec==0.19.0 appdirs==1.4.2 arrow==0.8.0 bcrypt==3.1.3 cffi==1.9.1 click==6.7 colorlog==2.10.0 Flask==0.12 Flask-Cors==3.0.2 Flask-Login==0.4.0 flask-marshmallow==0.7.0 Flask-OAuthlib==0.9.3 flask-restplus==0.10.1 Flask-SQLAlchemy==2.2 invoke==0.15.0 itsdangerous==0.24 Jinja2==2.9.5 jsonschema==2.6.0 lockfile==0.12.2 Mako==1.0.6 MarkupSafe==0.23 marshmallow==2.13.1 marshmallow-sqlalchemy==0.12.0 oauthlib==2.0.1 packaging==16.8 passlib==1.7.1 permission==0.4.1 psycopg2==2.7 pycparser==2.17 pyparsing==2.1.10 python-dateutil==2.6.0 python-editor==1.0.3 pytz==2016.10 PyYAML==3.12 requests==2.13.0 requests-oauthlib==0.8.0 six==1.10.0 SQLAlchemy==1.1.5 SQLAlchemy-Utils==0.32.12 webargs==1.5.3 Werkzeug==0.11.15 any idea ?
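The hint in the error message points at the usual fix: tell PostgreSQL how to cast the existing rows by passing `postgresql_using` to `alter_column` in the failing migration. A sketch of what the edited migration step could look like (the target type is illustrative; the trace only shows the column being altered to BYTEA):

```python
# hypothetical edit to migrations/versions/36954739c63_.py
import sqlalchemy as sa
from alembic import op

def upgrade():
    op.alter_column(
        "user",
        "password",
        type_=sa.LargeBinary(),
        existing_nullable=False,
        postgresql_using="password::bytea",  # explicit cast PostgreSQL asks for
    )
```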
1medium
Title: Local Calendar - Repeating Events on 1st Saturday of Month Body: ### The problem When creating a new event in the Local Calendar integration, using Repeat Event - 1st Saturday (or any day) doesn't create the event on the first Saturday; it creates it on the 1st day of each month. ### What version of Home Assistant Core has the issue? core-2025.3.3 ### What was the last working version of Home Assistant Core? NA ### What type of installation are you running? Home Assistant OS ### Integration causing the issue local_calendar ### Link to integration documentation on our website https://www.home-assistant.io/integrations/local_calendar ### Diagnostics information _No response_ ### Example YAML snippet ```yaml ``` ### Anything in the logs that might be useful for us? ```txt ``` ### Additional information It's not clear if this is a bug or by design. Looking for guidance. Will submit a feature request if this is working as designed.
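For reference, "first Saturday of each month" corresponds to the recurrence rule `FREQ=MONTHLY;BYDAY=+1SA` rather than `BYMONTHDAY=1`; the snippet below only illustrates the expected expansion with dateutil (the start date is an arbitrary choice near the reported version).

```python
from datetime import datetime
from dateutil.rrule import rrule, MONTHLY, SA

# RRULE:FREQ=MONTHLY;BYDAY=+1SA expanded for three months
rule = rrule(MONTHLY, byweekday=SA(+1), dtstart=datetime(2025, 4, 1), count=3)
print(list(rule))  # 2025-04-05, 2025-05-03, 2025-06-07 -- first Saturdays, not the 1st
```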
1medium
Title: The work of versions is not clear! Body: First of all, I would like to thank the authors for this wonderful module...) **Describe the bug** I don't quite understand the behavior of this module specifically in my case. I write my API with the following architecture: ![Снимок экрана 2024-02-29 092940](https://github.com/tfranzel/drf-spectacular/assets/116059713/fc3dad36-1f50-4c47-b9c2-c75c616a8c05) The endpoints are available at the following addresses: - path('api/v1/', include('api.urls_v1')) - path('api/v2/', include('api.urls_v2')) **Main urls file** ``` urlpatterns = [ path('api/v1/', include('api.urls_v1')), path('api/v2/', include('api.urls_v2')), path('api/schema/', SpectacularAPIView.as_view(api_version='v1'), name='schema_v1'), path('api/doc/', SpectacularSwaggerView.as_view(url_name='schema_v1'), name='swagger'), path('admin/', admin.site.urls), path('api-auth/', include('rest_framework.urls')), path('token/', TokenObtainPairView.as_view(), name='token_obtain_pair'), path('token/refresh/', TokenRefreshView.as_view(), name='token_refresh'), ] ``` **My settings file** ``` REST_FRAMEWORK = { 'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly', ), 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework_simplejwt.authentication.JWTAuthentication', ), 'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema', 'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.URLPathVersioning', } ``` And generated swagger scheme. ![Снимок экрана 2024-02-29 094551](https://github.com/tfranzel/drf-spectacular/assets/116059713/82d7771a-50c9-43b8-81a0-e311cb1c0e3c) **Expected behavior** I expect to get endpoints that are broken down by application. Now I have them all mixed up in a block with the name api (apparently this is the name of the project name)
1medium
Title: cannot set 3 different labels on the same image Body: I'm trying to apply 3 different labels to the same image, but all of them appear to be right. <img width="932" alt="screen shot 2017-07-30 at 18 33 51" src="https://user-images.githubusercontent.com/985808/28754821-05bf570c-7556-11e7-94e2-f470104af3a6.png">
1medium