Title: Better, more accurate and more human-like translations using DeepL Body: **Is your feature request related to a problem? Please describe.** While I think the translations of GPT3 are correct in content, they often seem too edgy and clinical to me, which doesn't really fit a bot that tries to get as close as possible to a human interlocutor. **Describe the solution you'd like** I would like to have the best possible and most accurate human-like translation. **Describe alternatives you've considered** DeepL is apparently currently the best translator, with really fascinating results. **Additional context** (Sorry for that copycat down there... Dx) - Unlike other services, the neural networks of DeepL can detect even the smallest nuances and reflect them in the translation. Not only does DeepL achieve record results in scientific benchmarks, but in blind tests translators also prefer the results of DeepL three times more often than those of the competition. * When using DeepL Pro, your data is protected with the highest security measures. We guarantee DeepL Pro subscribers that all texts are deleted immediately after the translation has been completed, and that the connection to our servers is always encrypted. This means that your texts are not used for any purposes other than your translation, nor can they be accessed by third parties. + With DeepL Pro, you can translate an entire document with one click. All fonts, images, and formatting remain in place, leaving you free to edit the translated document any way you like. - It can translate your Microsoft Word (.docx), PowerPoint (.pptx), PDF (.pdf), and text (.txt) files. Further formats coming soon! * If you sign up for the DeepL API plan, you will be able to integrate DeepL’s JSON-based REST API into your own products and platforms. This allows you to incorporate the world’s best machine translation technology into a variety of new applications. 
For example, a company could have their international service enquiries instantly translated by DeepL Pro, greatly simplifying business procedures and improving customer satisfaction. + Freelance translators, translation agencies, language service providers, or corporate language departments can all benefit from using DeepL Pro, the world’s best machine translation technology, in their CAT Tool. [DeepL - Translator](https://www.deepl.com/translator) [DeepL - Press](https://www.deepl.com/press) [DeepL - ProAPI](https://www.deepl.com/pro-api?cta=header-pro-api/) [Why using DeepL instead of other translators?](https://www.deepl.com/whydeepl/)
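For reference, a minimal sketch of what calling DeepL's v2 REST API could look like from Python. The endpoint, parameter names, and auth header follow DeepL's public documentation; the request is only built here, not sent, and the helper name is illustrative:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_deepl_request(auth_key: str, text: str, target_lang: str = "EN") -> Request:
    # Build (but do not send) a POST to DeepL's v2 translate endpoint.
    # The free tier uses api-free.deepl.com; the Pro tier uses api.deepl.com.
    data = urlencode({"text": text, "target_lang": target_lang}).encode()
    return Request(
        "https://api-free.deepl.com/v2/translate",
        data=data,
        headers={"Authorization": f"DeepL-Auth-Key {auth_key}"},
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` returns JSON with a `translations` list containing the translated text.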
0easy
Title: When benchmark.py creates multiple projects that need `npx http-server` to run, all of them start up the "first" project that used npx http-server Body: ## Steps to reproduce - run benchmark.py - Allow it to run the generated projects (press enter when prompted) - wait for the second project that is run with `npx http-server` ## Expected Behavior The project runs ## Current Behavior The first `npx http-server` project runs
0easy
Title: Aroon doc string Body: Hi, First of all thanks for the great lib. I just saw some inconsistency in the doc string of the Aroon indicator, which can be accessed from the link below: https://github.com/twopirllc/pandas-ta/blob/main/pandas_ta/trend/aroon.py The arguments are high, low, length=None etc... But in the help output the Args are listed as: ' Args: close (pd.Series): Series of 'close's length (int): It's period. Default: 14 ' Maybe I'm missing something, but shouldn't the args in the docstring be high and low instead of close? Best wishes,
0easy
Title: [ENH]: Support avif as output format Body: ### Problem Since v3.6.0, Matplotlib supports `webp` output (yay!), but I couldn't find a ticket tracking Avif support, so here's a feature request for `savefig` to support saving as `.avif`. *relevant links:* - [CanIUse](https://caniuse.com/avif) - [Pillow pull request](https://github.com/python-pillow/Pillow/pull/5201) ### Proposed solution _No response_
0easy
Title: Add Support for Google Gemini pro api Body: ### 🚀 The feature Add Support for Google Gemini pro api ### Motivation, pitch Gemini pro has a great potential ### Alternatives _No response_ ### Additional context _No response_
0easy
Title: Remove LGTM badges from README, and remove LGTM comments from code Body: <img width="863" alt="Screenshot 2022-12-17 at 00 14 31" src="https://user-images.githubusercontent.com/350976/208211980-bb8add5f-5c49-4b47-ac1b-8f58e8e67dcb.png"> LGTM was a security checker we used for Piccolo. It has now been shut down, and replaced with Github code scanning. * Remove the badges from README.md (see attached image) * Search the codebase for any LGTM comments, and remove them (they start with `# lgtm`), for example `# lgtm [py/missing-equals]`.
0easy
Title: Oscar Indicator from TradingView Body: Per @dmitrievichh: > Could you add Oscar Indicator? > Source: [Oscar Oscillator by GenZai](https://www.tradingview.com/script/DMmuh5v5-OSCAR-Oscillator-by-GenZai-NNFX/)
0easy
Title: Set seed in Optuna Body: Optuna experiments have a hard-coded seed value. Please take the seed value from the AutoML constructor instead of hard-coding it.
0easy
Title: partial_tucker: reorder arguments Body: Currently modes is a positional argument while rank is optional, this can be confusing and not consistent with other decompositions. https://github.com/tensorly/tensorly/blob/d8e90a600c3c26983777e971ce85ccba071cdfaf/tensorly/decomposition/_tucker.py#L16
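For illustration, a generic sketch of the reordered signature (not tensorly's actual implementation): `rank` moves ahead of `modes`, consistent with the other decompositions, and `modes` stays optional, defaulting to all modes of the tensor:

```python
def partial_tucker(tensor, rank=None, modes=None):
    # Sketch of the proposed order: rank before modes; modes stays optional
    # and defaults to decomposing along every mode of the tensor.
    if rank is None:
        raise TypeError("partial_tucker() missing required argument: 'rank'")
    if modes is None:
        modes = list(range(tensor.ndim))
    return rank, modes  # placeholder for the actual decomposition
```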
0easy
Title: Patch Endpoint Body: Amazing framework, really love it! Please add a PATCH endpoint that updates only the fields passed in the request, leaving the unspecified fields intact in the database.
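The requested behavior is standard merge-style PATCH semantics; a framework-agnostic sketch:

```python
def apply_patch(record: dict, patch: dict) -> dict:
    # PATCH: overwrite only the fields present in the request body,
    # leaving every unspecified field of the stored record intact.
    updated = dict(record)
    updated.update(patch)
    return updated
```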
0easy
Title: [ENH] add tests for html repr in `BaseObject` descendants Body: We should add a test for html repr of all `BaseObject` descendants, in `TestAllObjects`. This was apparently not tested, see #7043
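A minimal sketch of what such a check could look like, using a stand-in `BaseObject` (the real class lives in skbase; `_repr_html_` is the standard Jupyter hook name, and the helper is hypothetical):

```python
class BaseObject:
    # stand-in for the real skbase BaseObject
    def _repr_html_(self):
        return f"<pre>{self.__class__.__name__}</pre>"

class MyEstimator(BaseObject):
    pass

def check_html_repr(obj):
    # The test in TestAllObjects would loop over all descendants and
    # assert each one produces a non-empty HTML string.
    html = obj._repr_html_()
    assert isinstance(html, str) and html.strip(), "html repr must be non-empty"
    return html
```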
0easy
Title: xfail if nltk data has not been downloaded (move postag to contrib) Body: Right now we have tests for `yellowbrick.text` that have an nltk dependency (the pos tag visualizer, I believe; I don't think the others rely on it). We skip the tests if nltk is not installed, but we do not test whether the nltk data has been downloaded or is available. We should test this, or wrap it in a try/except and xfail, to ensure that this doesn't fail tests for new users. Further, we could move the nltk dependencies to the contrib directory, where we could more easily manage the test cases.
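A sketch of the kind of guard this would need, usable as a condition for `pytest.mark.xfail` or inside a fixture (the helper name is hypothetical):

```python
def nltk_data_available(resource: str) -> bool:
    # Returns False both when nltk is not installed and when the resource
    # was never downloaded; nltk.data.find raises LookupError for the latter.
    try:
        import nltk
        nltk.data.find(resource)
        return True
    except (ImportError, LookupError):
        return False
```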
0easy
Title: CSS/HTML fix for top bar in documentation Body: For certain screen width (shorter than a monitor but larger than a phone), the top bar in the docs gets weird: ![Screen Shot 2021-10-15 at 11 43 33](https://user-images.githubusercontent.com/989250/137515655-cce817cc-20ce-4ddd-a2e0-ea5eae3af2a0.png) Source code: https://github.com/ploomber/ploomber/blob/master/doc/_templates/macros.html We're using [bootstrap](https://getbootstrap.com/)
0easy
Title: PPO indicator uses sma instead of ema for fast and slow series Body: From my understanding, the PPO indicator should be similar to the MACD indicator. The main difference is that the PPO shows percentage results. That means that for the same fast, slow and signal parameters, both indicators should have crossings between the signal and macd lines at the same time. In your current implementation the PPO crossings differ from the MACD indicator crossings. I've checked both indicators on TradingView, and TradingView shows the crossings for both indicators at the same time. When I investigated your code, I noticed that for MACD you are using the ema to calculate the fast and slow series, but for the PPO indicator you are using the sma. I think that the PPO should also use the ema.
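To illustrate the expected relationship, a minimal sketch of a PPO built on EMAs (simplified EMA seeding from the first value; not the library's actual code):

```python
def ema(values, length):
    # standard exponential moving average, alpha = 2 / (length + 1)
    alpha = 2 / (length + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def ppo(closes, fast=12, slow=26):
    # PPO = (fast EMA - slow EMA) / slow EMA * 100; because both lines are
    # built from the same EMAs as MACD, crossings happen at the same bars.
    fast_ema, slow_ema = ema(closes, fast), ema(closes, slow)
    return [(f - s) / s * 100 for f, s in zip(fast_ema, slow_ema)]
```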
0easy
Title: Tox/Poetry issue when running with pre-commit Body: ### Discussed in https://github.com/tox-dev/tox/discussions/2857 <div type='discussions-op-text'> <sup>Originally posted by **razy69** January 12, 2023</sup> ## Issue Hello everyone, I encounter a strange issue while using pre-commit with tox/poetry. I have to run pre-commit with PRE_COMMIT_COLOR=never to make it work. The issue appears with tox 4.0.12 and is still not resolved in 4.2.8. ## Environment Provide at least: - OS: darwin - `pip list` of the host Python where `tox` is installed: ```console $ pip list Package Version -------------------- ----------- attrs 22.2.0 CacheControl 0.12.11 cachetools 5.2.1 certifi 2022.12.7 cffi 1.15.1 cfgv 3.3.1 chardet 5.1.0 charset-normalizer 2.1.1 cleo 2.0.1 colorama 0.4.6 crashtest 0.4.1 distlib 0.3.6 dulwich 0.20.50 filelock 3.9.0 html5lib 1.1 identify 2.5.12 idna 3.4 importlib-metadata 6.0.0 jaraco.classes 3.2.3 jsonschema 4.17.3 keyring 23.13.1 lockfile 0.12.2 more-itertools 9.0.0 msgpack 1.0.4 nodeenv 1.7.0 packaging 23.0 pexpect 4.8.0 pip 22.3.1 pkginfo 1.9.6 platformdirs 2.6.2 pluggy 1.0.0 poetry 1.3.2 poetry-core 1.4.0 poetry-plugin-export 1.2.0 pre-commit 2.21.0 ptyprocess 0.7.0 pycparser 2.21 pyproject_api 1.4.0 pyrsistent 0.19.3 PyYAML 6.0 rapidfuzz 2.13.7 requests 2.28.1 requests-toolbelt 0.10.1 setuptools 58.1.0 shellingham 1.5.0.post1 six 1.16.0 tomli 2.0.1 tomlkit 0.11.6 tox 4.2.8 trove-classifiers 2022.12.22 urllib3 1.26.13 virtualenv 20.17.1 webencodings 0.5.1 xattr 0.10.1 zipp 3.11.0 ``` python >= 3.10 tox > 4.0.11 poetry > 1.3.0 pre-commit >= 2.20.0 tox.ini ```ini [tox] isolated_build = true system_site_packages = false no_package = true env_list = {tests} [testenv] always_copy = true download = true base_python = python3.10 skip_install = true [testenv:tests] description = Run unit tests with pytest allowlist_externals = poetry commands_pre = poetry install -vvv --sync --with tests commands = poetry run pytest -v {posargs} ``` 
.pre-commit-config.yaml ```yaml fail_fast: true default_install_hook_types: [pre-commit] repos: - repo: local hooks: - id: tox-code-checks name: Run tox targets -- tests stages: [commit] language: system types: [python] pass_filenames: false verbose: true entry: tox -vvv -e tests ``` ## Minimal example Running tox directly: ```console $ tox -e tests tests: commands_pre[0]> poetry install -vvv --sync --with tests Loading configuration file /Users/jdacunha/Library/Preferences/pypoetry/config.toml Loading configuration file /Users/jdacunha/Library/Preferences/pypoetry/auth.toml Adding repository XXX (https://XXX/simple) Using virtualenv: /Users/jdacunha/Dev/project/.tox/tests Installing dependencies from lock file Finding the necessary packages for the current system Package operations: 0 installs, 0 updates, 0 removals, 71 skipped • Installing anyio (3.6.2): Pending... • Installing anyio (3.6.2): Skipped for the following reason: Already installed • Installing dogpile-cache (1.1.8): Pending... • Installing dogpile-cache (1.1.8): Skipped for the following reason: Already installed • Installing cffi (1.15.1): Pending... ... tests: commands[0]> poetry run pytest -v ================================================= test session starts ================================================= platform darwin -- Python 3.10.5, pytest-7.2.0, pluggy-1.0.0 -- /Users/jdacunha/Dev/project/.tox/tests/bin/python cachedir: .tox/tests/.pytest_cache rootdir: /Users/jdacunha/Dev/project, configfile: pyproject.toml plugins: recording-0.12.1, cov-4.0.0, anyio-3.6.2 collected 381 items ... 
================================================= slowest 5 durations ================================================= 1.01s call tests/unit/common/a.py::TestA::a 1.01s call tests/unit/common/b.py::TestB::b 0.13s call tests/unit/common/c.py::TestC::c 0.11s call tests/unit/common/d.py::TestD::d 0.11s call tests/unit/common/e.py::TestE::e ========================================== 381 passed, 19 warnings in 10.19s ========================================== tests: OK (13.13=setup[0.06]+cmd[1.41,11.65] seconds) congratulations :) (13.21 seconds) ``` Running pre-commit: ```console $ pre-commit run tox-code-checks Run tox targets -- tests.......................Failed - hook id: tox-code-checks - duration: 1.71s - exit code: 1 ROOT: 193 D setup logging to NOTSET on pid 96007 [tox/report.py:221] tests: 258 I find interpreter for spec PythonSpec(major=3, minor=10) [virtualenv/discovery/builtin.py:56] tests: 258 D discover exe for PythonInfo(spec=CPython3.10.5.final.0-64, exe=/Users/jdacunha/Dev/project/.venv/bin/python3, platform=darwin, version='3.10.5 (main, Jan 10 2023, 15:36:10) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) in /Users/jdacunha/.pyenv/versions/3.10.5 [virtualenv/discovery/py_info.py:437] tests: 259 D filesystem is not case-sensitive [virtualenv/info.py:24] tests: 260 D got python info of /Users/jdacunha/.pyenv/versions/3.10.5/bin/python3.10 from /Users/jdacunha/Library/Application Support/virtualenv/py_info/1/ec7cb05918b9d40294b520615642683600e9927f06d12b706d7bab5356c36c9c.json [virtualenv/app_data/via_disk_folder.py:129] tests: 260 I proposed PythonInfo(spec=CPython3.10.5.final.0-64, system=/Users/jdacunha/.pyenv/versions/3.10.5/bin/python3.10, exe=/Users/jdacunha/Dev/project/.venv/bin/python3, platform=darwin, version='3.10.5 (main, Jan 10 2023, 15:36:10) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63] tests: 260 D accepted PythonInfo(spec=CPython3.10.5.final.0-64, 
system=/Users/jdacunha/.pyenv/versions/3.10.5/bin/python3.10, exe=/Users/jdacunha/Dev/project/.venv/bin/python3, platform=darwin, version='3.10.5 (main, Jan 10 2023, 15:36:10) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65] tests: 286 W commands_pre[0]> poetry install -vvv --sync --with tests [tox/tox_env/api.py:427] Loading configuration file /Users/jdacunha/Library/Preferences/pypoetry/config.toml Loading configuration file /Users/jdacunha/Library/Preferences/pypoetry/auth.toml Adding repository XXX (https://XXX/simple) Using virtualenv: /Users/jdacunha/Dev/project/.tox/tests Installing dependencies from lock file Finding the necessary packages for the current system Package operations: 0 installs, 0 updates, 0 removals, 71 skipped tests: 1669 C exit 1 (1.38 seconds) /Users/jdacunha/Dev/project> poetry install -vvv --sync --with tests pid=96008 [tox/execute/api.py:275] tests: FAIL code 1 (1.42=setup[0.04]+cmd[1.38] seconds) evaluation failed :( (1.48 seconds) ``` Running pre-commit with PRE_COMMIT_COLOR=never ```console $ PRE_COMMIT_COLOR=never pre-commit run tox-code-checks Run tox targets -- tests.......................Passed - hook id: tox-code-checks - duration: 13.32s ROOT: 192 D setup logging to NOTSET on pid 97155 [tox/report.py:221] tests: 257 I find interpreter for spec PythonSpec(major=3, minor=10) [virtualenv/discovery/builtin.py:56] tests: 257 D discover exe for PythonInfo(spec=CPython3.10.5.final.0-64, exe=/Users/jdacunha/Dev/project/.venv/bin/python3, platform=darwin, version='3.10.5 (main, Jan 10 2023, 15:36:10) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding _fs_io=utf-8-utf-8) in /Users/jdacunha/.pyenv/versions/3.10.5 [virtualenv/discovery/py_info.py:437] tests: 258 D filesystem is not case-sensitive [virtualenv/info.py:24] tests: 259 D got python info of /Users/jdacunha/.pyenv/versions/3.10.5/bin/python3.10 from /Users/jdacunha/Library/Application 
Support/virtualenv/py_info/1/ec7cb05918b9d40294b520615642683600e9927f06d12b706d7bab5356c36c9c.json [virtualen v/app_data/via_disk_folder.py:129] tests: 259 I proposed PythonInfo(spec=CPython3.10.5.final.0-64, system=/Users/jdacunha/.pyenv/versions/3.10.5/bin/python3.10, exe=/Users/jdacunha/Dev/project/.venv/bin/python3, platform=darwin, version='3.10.5 (main, Jan 10 2023, 15: 36:10) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63] tests: 260 D accepted PythonInfo(spec=CPython3.10.5.final.0-64, system=/Users/jdacunha/.pyenv/versions/3.10.5/bin/python3.10, exe=/Users/jdacunha/Dev/project/.venv/bin/python3, platform=darwin, version='3.10.5 (main, Jan 10 2023, 15: 36:10) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65] tests: 285 W commands_pre[0]> poetry install -vvv --sync --with tests [tox/tox_env/api.py:427] Loading configuration file /Users/jdacunha/Library/Preferences/pypoetry/config.toml Loading configuration file /Users/jdacunha/Library/Preferences/pypoetry/auth.toml Adding repository XXX (https://XXX/simple) Using virtualenv: /Users/jdacunha/Dev/project/.tox/tests Installing dependencies from lock file Finding the necessary packages for the current system Package operations: 0 installs, 0 updates, 0 removals, 71 skipped • Installing anyio (3.6.2): Skipped for the following reason: Already installed • Installing appdirs (1.4.4): Skipped for the following reason: Already installed ... tests: 13277 I exit 0 (11.62 seconds) /Users/jdacunha/Dev/project> poetry run pytest -v pid=97161 [tox/execute/api.py:275] tests: OK (13.03=setup[0.05]+cmd[1.35,11.62] seconds) congratulations :) (13.09 seconds) ``` Does anyone succeed to make them work together ? Thanks. </div>
0easy
Title: Set stacking time threshold depending on the best model time Body: Right now, stacking is not performed if only 60 seconds are left. But for very small datasets and small time limits, 60 seconds is a lot of time. The time threshold should instead be set to something like 2x the best model's train time.
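The proposed rule could look something like this (the function name and the 2x factor are illustrative):

```python
def should_stack(seconds_left: float, best_model_train_time: float,
                 factor: float = 2.0) -> bool:
    # Stack only if the remaining budget covers roughly `factor` times the
    # best model's training time, instead of a fixed 60-second cutoff.
    return seconds_left >= factor * best_model_train_time
```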
0easy
Title: Prevent accidental skipping of tasks if input is not empty Body: Please, *please* add some kind of warning/confirmation when a user starts entering an assistant reply and then presses the Skip button. It's hard to express how painful it is when you try to compose a good reply for an hour or more and then accidentally press “Skip” and not only lose your work but also the chance to take the task again. I know it's not a frequent case (it has happened to me just a couple of times per 1K replies), but when it happens it hurts like hell. (I can explain how and why it happens, but I'm not sure if it's worth your time.) Maybe it's even better (and easier?) to **disable** the Skip button while the reply field is not empty (like the Review button is disabled while the field is empty). P.S. Please be indulgent: I've just registered on GitHub and have no idea how this thing works. (And I'm also struggling with my English.) A huge respect for all the devs here!
0easy
Title: Trainer with GPU based Model fails while creating Masks Body: ### Bug with GPU Model Currently, while using pruning methods like `TaylorFOWeight` Pruner, If I use a model on GPU for getting the metrics (as calculated for getting masks), it fails on [line](https://github.com/microsoft/nni/blob/cbac2c5c0f7606aca8ccf08fbd418ffe3adfe427/nni/algorithms/compression/v2/pytorch/pruning/tools/sparsity_allocator.py#L32) while creating masks. The reason why it fails is, ``` metrics - Dict[str, Tensor] where the tensor is on cuda wrapper.weight_mask - Tensor on cpu ``` **What would you like to be added**: ```python # something like this, not necessarily the right answer metric = metric.to(wrapper.weight_mask.device) ``` **Why is this needed**: Allow us to use GPU based fitting (therefore much faster) while calculating metrics for generating masks. **Without this feature, how does current nni work**: Fails when model is gpu based **Components that may involve changes**: All Sparsity Allocators under `nni/algorithms/compression/v2/pytorch/pruning/tools/sparsity_allocator.py`
0easy
Title: Enable @pytest.mark.limit_memory without having to specify --memray Body: ## Feature Request **Is your feature request related to a problem? Please describe.** I tried to use `pytest.mark.limit_memory` in a urllib3 test and was confused because pytest was not finding it, since I had only skimmed the README and missed the `--memray` option. (Additionally, I was surprised that `limit_memory` was not in the pytest-memray README at all; I only found out about it in the memray README.) **Describe the solution you'd like** I'd like to be able to use `pytest.mark.limit_memory` without passing `--memray` and without showing the memray report at the end. **Describe alternatives you've considered** The alternative is the status quo. memray is incredibly useful; this is only a small quality-of-life improvement. **Teachability, Documentation, Adoption, Migration Strategy** If we had a code snippet for `limit_memory` in the docs, users could simply copy/paste it without having to find out about `--memray`.
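For reference, the copy/paste-ready marker usage the request alludes to (as documented by memray; today it only takes effect when pytest runs with `--memray`, and the limit value here is illustrative):

```python
import pytest

@pytest.mark.limit_memory("24 MB")
def test_stays_under_limit():
    # Under pytest --memray this test fails if it allocates more than 24 MB.
    data = [0] * 1_000
    assert len(data) == 1_000
```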
0easy
Title: pygments breaking a test Body: pygments 2.14 is breaking our CI. I added a [quickfix](https://github.com/ploomber/ploomber/blob/a9d7653c20d5d638461c869ba3a3182162569d34/tests/io_mod/test_terminalwriter.py#L306) since I verified that this failure is not affecting anything critical (it's just terminal output formatting). I think this is better than pinning pygments, since pinning the version would increase the complexity of installing the package. This is ANSI escape-code stuff, and I'm unsure why the pygments update changed the behavior; here's the error: ``` ===================================================================================== FAILURES ====================================================================================== ________________________________________________________________ test_code_highlight[with markup and code_highlight] ________________________________________________________________ has_markup = True, code_highlight = True, expected = '{kw}assert{hl-reset} {number}0{hl-reset}\n' color_mapping = <class 'io_mod.test_terminalwriter.color_mapping.<locals>.ColorMapping'> @pytest.mark.parametrize( ("has_markup", "code_highlight", "expected"), [ pytest.param( True, True, "{kw}assert{hl-reset} {number}0{hl-reset}\n", id="with markup and code_highlight", ), pytest.param( True, False, "assert 0\n", id="with markup but no code_highlight", ), pytest.param( False, True, "assert 0\n", id="without markup but with code_highlight", ), pytest.param( False, False, "assert 0\n", id="neither markup nor code_highlight", ), ], ) def test_code_highlight(has_markup, code_highlight, expected, color_mapping): f = io.StringIO() tw = terminalwriter.TerminalWriter(f) tw.hasmarkup = has_markup tw.code_highlight = code_highlight tw._write_source(["assert 0"], lexer='py') > assert f.getvalue().splitlines(keepends=True) == color_mapping.format( [expected]) E AssertionError: assert ['\x1b[94mass...[39;49;00m\n'] == ['\x1b[94mass...[39;49;00m\n'] E At 
index 0 diff: '\x1b[94massert\x1b[39;49;00m \x1b[94m0\x1b[39;49;00m\x1b[90m\x1b[39;49;00m\n' != '\x1b[94massert\x1b[39;49;00m \x1b[94m0\x1b[39;49;00m\n' E Use -v to get more diff ```
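One way to make such assertions robust across pygments versions is to compare output with the escape sequences stripped (a sketch of the idea, not the quickfix that was actually applied):

```python
import re

# SGR sequences: ESC [ <params> m, which cover the color/reset codes above
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    # Drop color/reset sequences so only the visible text is compared;
    # pygments 2.14 emitting an extra empty-token sequence then no longer matters.
    return ANSI_ESCAPE.sub("", text)
```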
0easy
Title: Replacement for ast_to_dict? Body: I have been using [this](https://gist.github.com/mixxorz/dc36e180d1888629cf33#file-graphene-py-L59) function in the past, but it seems that `ast_to_dict` has been removed. Is there a replacement for `ast_to_dict`, or a new function that is similar to the one in the gist?
0easy
Title: get_batch_stock_quotes stopped working Body: Hi! I was using get_batch_stock_quotes on a regular basis and suddenly it stopped working (without updating av or any other python libraries). Calling the function for some list of stocks produces the error: File ".../site-packages/alpha_vantage/alphavantage.py", line 292, in _handle_api_call raise ValueError(json_response["Error Message"]) ValueError: This API function (BATCH_STOCK_QUOTES) does not exist. I used simple code: ts = TimeSeries(key='myPremiumKey', output_format='pandas', indexing_type='date') ts.get_batch_stock_quotes(['AAPL','MSFT','AMZN','GOOG']) Do you have any idea what the problem might be?
0easy
Title: [Feature request] Add apply_to_images to MotionBlur Body:
0easy
Title: [BUG] MonthStart string is not being coerced to "M" Body: **Describe the bug** Although the "ME" is being coerced to "M" since the merge from #6057, the "MS" (month start) string is not. This is causing some errors when using pandas 2.2.x. With pandas>=2.2, this is the error: ``` --> [941](.venv/lib/python3.11/site-packages/sktime/forecasting/base/_fh.py:941) return x.to_period(freq) ValueError: <MonthBegin> is not supported as period frequency ``` **To Reproduce** ```python from sktime.datasets import load_forecastingdata from sktime.forecasting.exp_smoothing import ExponentialSmoothing y, _ = load_forecastingdata("m1_monthly_dataset", return_type="pd_multiindex_hier") # Sort indexes for safety y = y.sort_index() # Forecast horizon fh = y.index.get_level_values(-1).unique() # Fit and predict model = ExponentialSmoothing(trend="add") model.fit(y) model.predict(fh=fh) # the error is raised here ``` **Expected behavior** The forecast from Exponential Smoothing. **Versions** <details> System: python: 3.11.11 (main, Dec 26 2024, 12:31:23) [Clang 16.0.0 (clang-1600.0.26.6)] executable: [.venv/bin/python](.venv/bin/python) machine: macOS-15.1-arm64-arm-64bit Python dependencies: pip: 24.3.1 sktime: 0.35.0 sklearn: 1.5.2 skbase: 0.12.0 numpy: 1.26.4 scipy: 1.14.1 pandas: 2.2.3 matplotlib: None joblib: 1.4.2 numba: None statsmodels: 0.14.4 pmdarima: None statsforecast: None tsfresh: None tslearn: None torch: None tensorflow: None </details>
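A sketch of the kind of alias coercion the fix would extend (the mapping table is illustrative, mirroring what #6057 did for "ME"): both the pandas 2.2 month-end alias and the month-start alias map to the period frequency "M" before calling `to_period`:

```python
def coerce_period_freq(freq: str) -> str:
    # pandas 2.2 offset aliases that PeriodIndex does not accept directly:
    # month-end ("ME") is already coerced; month-start ("MS") also needs
    # to map to period freq "M", otherwise to_period raises ValueError.
    aliases = {"ME": "M", "MS": "M"}
    return aliases.get(freq, freq)
```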
0easy
Title: Support for default mutations Body: Are there any plans to add support for simple create / update / delete mutations? It seems like you could view the fields associated with a Mongoengine Document and automatically build the arguments to a graphene mutation, instead of needing to manually write one for each object. Thoughts on this?
0easy
Title: [ENH] add an expand_grid function Body: Recently I watched a video by David Robinson on tidytuesday where he used the crossing function (a wrapper of the base R function expand.grid) from the tidyr R library to do a Monte Carlo simulation. I think this would be a great addition to pyjanitor. I don't have the bandwidth at the moment to add it, but could in the near future. I did come across a stackoverflow solution to this: https://stackoverflow.com/questions/12130883/r-expand-grid-function-in-python
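The core of such a function is a Cartesian product over the input columns; a dependency-free sketch (pyjanitor's version would return a DataFrame, and tidyr's `crossing` additionally de-duplicates and sorts):

```python
from itertools import product

def expand_grid(**columns):
    # All combinations of the inputs, column-wise, like R's expand.grid.
    rows = list(product(*columns.values()))
    return {name: [row[i] for row in rows]
            for i, name in enumerate(columns)}
```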
0easy
Title: [Feature]: specify model only in config.yaml Body: ### 🚀 The feature, motivation and pitch It would be nice not to need anything but a config file to start `serve`: ```sh vllm serve --config scenario.yaml ``` Right now, the `model_tag` appears to be required via the CLI. Why not allow this via a config file too? For example: ```yaml model: Qwen/Qwen2.5-Coder-7B # ... args ``` Furthermore, the web based docs for `vllm serve` don't list the `model_tag` as required: https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#vllm-serve and instead suggest I can use `--model` and thus based on the [config section](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#configuration-file) I assume that would map to `model: Qwen/Qwen2.5-Coder-7B`? This would be a great way to encapsulate everything I need for a particular serve "environment" into a config file to quickly start different scenarios. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
0easy
Title: #354 fix causes IndexError: string index out of range Body: ## Classification: Bug ## Reproducibility: Just run this command through autokey. ``` python system.exec_command("fcitx-remote -s **fcitx-keyboard-us") ``` ## Version AutoKey version: 0.95.10 Used GUI (Gtk, Qt, or both): If the problem is known to be present in more than one version, please list all of those. Installed via: (PPA, pip3, …). PPA Linux Distribution: Linux Mint 20 ## Summary The changes from #354 added the code involved here, and they cause the error below. ## Expected Results Nothing happens. ## Actual Results ``` python Traceback (most recent call last): File "/usr/lib/python3/dist-packages/autokey/service.py", line 207, in __tryReleaseLock self.configManager.lock.release() RuntimeError: release unlocked lock 2020-11-29 23:28:13,811 ERROR - service - Script error Traceback (most recent call last): File "/usr/lib/python3/dist-packages/autokey/service.py", line 485, in execute exec(script.code, scope) File "<string>", line 2, in <module> File "/usr/lib/python3/dist-packages/autokey/scripting.py", line 497, in exec_command if output[-1] == "\n": IndexError: string index out of range ```
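The underlying bug is an unguarded `output[-1]` on an empty string; a minimal sketch of the guard (extracted and simplified from the failing line, not autokey's actual patch):

```python
def strip_trailing_newline(output: str) -> str:
    # output[-1] raises IndexError on ""; str.endswith handles the
    # empty-output case safely.
    if output.endswith("\n"):
        output = output[:-1]
    return output
```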
0easy
Title: JSON variable file support Body: ### Problem `robot --variablefile burek.json -i smoke ROBOT/` returns the following error: `failed: Importing variable file ’burek.json’ failed: Not a valid file or directory to import.` while `burek.py` containing the same content seems to work just fine. `--variablefile (-V)` only seems to support Python and YAML file types. We would like to use a JSON file as a variable file to pass certain variables that are used by other frameworks as well. Thanks for all the help!
0easy
Title: Add link to EDA in main report Body: In the main `README.md` there should be a link to the EDA if it is available. ***Detailed description***: - in the file `supervised/base_automl.py`, the main `README.md` for `AutoML` is created in `select_and_save_best()` - at the end of this file we can add a link to the `Automated Exploratory Data Analysis` if an `EDA/README.md` file is available in `self._results_path` - We can add in Markdown something like: ``` ## Automated Exploratory Data Analysis There is Automated EDA available in this [report](EDA/README.md). ```
0easy
Title: Pgbouncer exporter doesn't support metrics exposed by updated pgbouncer. Body: ### Apache Airflow version 2.10.5 ### If "Other Airflow 2 version" selected, which one? _No response_ ### What happened? PGBouncer has been updated and now exposes more metrics than before - the exporter no longer supports the full list of metrics and the following error can be seen in the logs of the exporter: > could not get store result: could not get stats: unexpected column: total_server_assignment_count The support has been added in upstream [pg_exporter](https://github.com/Vonng/pg_exporter/blob/main/pg_exporter.yml#L5441C12-L5441C12), supporting pgbouncer version 1.24 and up. The pgbouncer base image was updated to 1.24 in the [airflow-pgbouncer-2025.01.10-1.24.0](https://hub.docker.com/layers/apache/airflow/airflow-pgbouncer-2025.01.10-1.24.0/images/sha256-e8fd120604e8113082e9ad070e638b715cf512c279299782e76cc5ad431a25ad) docker image, however the exporter has not been updated to support it. The [defaults for the helm chart](https://github.com/apache/airflow/blob/main/chart/values.yaml#L116-L122) are currently; ``` pgbouncer: tag: airflow-pgbouncer-2025.01.10-1.24.0 pgbouncerExporter: tag: airflow-pgbouncer-exporter-2024.06.18-0.17.0 ``` Which are not compatible. ### What you think should happen instead? The exporter should be compatible with the version of pgbouncer deployed. ### How to reproduce Deploy the latest helm chart enabling pgbouncer, the logs for the `metrics-exporter` container in the pgbouncer pod will indicate the error > could not get store result: could not get stats: unexpected column: total_server_assignment_count ### Operating System Ubuntu 22.04 ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details pgbouncer enabled ### Anything else? _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! 
### Code of Conduct - [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
0easy
Title: Ability to register adapter in-process Body: Add the ability to register a new adapter in-process, along the lines of how [sqlalchemy allows registering a new dialect in-process](https://docs.sqlalchemy.org/en/14/core/connections.html#registering-dialects-in-process). Currently it looks like the only way to add an adapter to shillelagh is [through an entry point](https://github.com/betodealmeida/shillelagh/blob/6082939fd08145164edf9ac5c53c4dfc034eb070/src/shillelagh/backends/apsw/db.py#L506).
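A generic sketch of what an in-process registry could look like (this is not shillelagh's actual API; today adapters are discovered only via entry points):

```python
# Module-level registry, analogous to sqlalchemy's dialect registry.
_ADAPTERS: dict = {}

def register_adapter(name, adapter_cls):
    """Make an adapter available without packaging it as an entry point."""
    _ADAPTERS[name] = adapter_cls

def get_adapter(name):
    # Lookup would fall back to entry-point discovery in the real thing.
    return _ADAPTERS[name]
```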
0easy
Title: [GCP][Disk] Google Cloud Hyperdisk support Body: It appears that the `ultra` disk which maps to `pd-extreme` does not work with `A3 / A2 / G2` gpu machine types in GCP, so it cannot be used for accelerated ml serving workloads (`H200 / H100 / A100 / L4`) They support something called `hyperdisk` instead, but it also varies based on instance type with `hyperdisk-ml` having the broadest support: | instance | disk support | ref | |-|-|-| |`a3 mega, a3 high, a3 edge`| `hyperdisk-ml, hyperdisk-balanced, hyperdisk-extreme, hyperdisk-throughput, pd-ssd, pd-balanced` | https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-disks| |`a3 ultra`| `hyperdisk-balanced, hyperdisk-extreme`| https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-disks | |`a2`|`hyperdisk-ml, pd-ssd, pd-standard, pd-balanced`| https://cloud.google.com/compute/docs/accelerator-optimized-machines#a2-disks | |`g2`|`hyperdisk-ml, hyperdisk-throughput, pd-ssd, pd-balanced`| https://cloud.google.com/compute/docs/accelerator-optimized-machines#g2-disks | Pricing looks similar to `pd-extreme`: https://cloud.google.com/compute/disks-image-pricing?hl=en#tg1-t0 Is this something that can be added?
0easy
Title: Util to get data size in mongo per symbol Body: It's a fairly common request to get the size of data for each symbol in VersionStore and currently I just use a Mongo js script to get it, but it would be nice to have a util in VersionStore or otherwise to do the same.
0easy
Title: Static text is considered a dynamic element (former: is_visible returns True for non-visible control) Body: I have a dialog window with multiple tabs (TabControl). After I switch from tab A to tab B, tabA.Control.is_visible() still returns True, even though the control on tab A is no longer visible. The control also doesn't show up in print_control_identifiers(). I would expect False to be returned.
0easy
Title: Organizational Influence metric API Body: The canonical definition is here: https://chaoss.community/?p=3560
0easy
Title: Add ShrunkCovariance Estimator Body: The output of EmpiricalCovariance is regularized by a shrinkage value impacted by the overall mean of the data. The goal would be to implement this estimator with post-processing changes to the fitted empirical covariance. This is very similar to the OAS project and would combine into a medium project. When implemented in python re-using our EmpiricalCovariance estimator, this would be an easy project with a small time commitment. Implementing the super-computing distributed version using python would only work for distributed-aware frameworks. Extended goals would make this a hard difficulty, medium commitment project. This would require implementing the regularization in C++ in oneDAL both for CPU and GPU. Then this must be made available in Scikit-learn-intelex for making a new estimator. This would hopefully follow the design strategy used for our Ridge Regression estimator. https://scikit-learn.org/stable/modules/generated/sklearn.covariance.ShrunkCovariance.html
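For reference, the rule documented for sklearn's ShrunkCovariance is `(1 - shrinkage) * cov + shrinkage * mu * identity`, with `mu = trace(cov) / n_features`. A pure-Python sketch of the post-processing step (an illustration of the formula, not the oneDAL/sklearnex implementation):

```python
def shrunk_covariance(cov, shrinkage=0.1):
    """Apply the ShrunkCovariance rule to a fitted empirical covariance
    matrix given as a list of lists:
        (1 - shrinkage) * cov + shrinkage * mu * I,  mu = trace(cov) / p
    """
    p = len(cov)
    mu = sum(cov[i][i] for i in range(p)) / p
    return [[(1 - shrinkage) * cov[i][j] + (shrinkage * mu if i == j else 0.0)
             for j in range(p)]
            for i in range(p)]

# Example: regularize a 2x2 empirical covariance toward the scaled identity
emp = [[2.0, 0.4], [0.4, 4.0]]
shrunk = shrunk_covariance(emp, shrinkage=0.5)  # mu = 3.0
```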
0easy
Title: [BUG] UI shows incorrect node state Body: ### Describe the bug UI shows incorrect node state when two dataframes are joined. ### To Reproduce ``` In [18]: df1 = md.read_csv('Downloads/ml-20m/movies.csv') In [19]: df2 = md.read_csv('Downloads/ml-20m/ratings.csv') In [20]: df1[['movieId', 'title']].merge(df2[['movieId', 'rating']], on="movieId").execute() ``` The UI shows: ![image](https://user-images.githubusercontent.com/109642806/220543837-285dab42-6414-43bd-bf37-c497c8786dd0.png) The nodes without color should be green.
0easy
Title: [Umbrella] Revisit Ray dashboard API status code Body: ### Description Before https://github.com/ray-project/ray/pull/51417, the Ray dashboard APIs only returned 200 for success and 500 for errors; they didn't support status codes such as 404. Take #51417 as an example, it returns 404 when users try to kill a non-existent actor. ### Use case _No response_
0easy
Title: rebin for lazy signals change original signal chunks Body: When using `s.rebin` for lazy signals, the chunking of the original signal is changed. Tested in `RELEASE_next_minor`. Example: ```python import dask.array as da import hyperspy.api as hs s = hs.signals.Signal2D(da.zeros((256, 256, 256, 256))).as_lazy() print(s.data.chunksize) # (64, 64, 64, 64) s1 = s.rebin(scale=(2, 2, 2, 2)) print(s.data.chunksize) # (256, 32, 32, 32) ``` The original signal should not be changed with this type of operation. A workaround for this is using `rechunk=False`.
0easy
Title: Provide multi-class AUC ROC Body: Scikit-learn provides multi-class options for area under curve: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html We should provide the most common ones, such as the [OVO Macro averaging used by Auto-Gluon](https://github.com/awslabs/autogluon/blob/0b38dde5f698dbadfa1ce76aabda14505d9e3ead/core/src/autogluon/core/metrics/__init__.py#L443).
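The OVO macro variant can be sketched in pure Python to show what is being averaged (in the spirit of Hand & Till, 2001). This is an illustration of the metric's definition, not sklearn's or AutoGluon's implementation; it assumes the columns of `y_proba` are ordered like `classes`:

```python
from itertools import combinations

def binary_auc(scores, labels):
    # AUC via the Mann-Whitney U statistic (ties count half)
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ovo_macro_auc(y_true, y_proba, classes):
    # For every unordered class pair, restrict to samples of those two
    # classes, average the two directed AUCs, then macro-average the pairs.
    pair_aucs = []
    for i, j in combinations(range(len(classes)), 2):
        keep = [k for k, y in enumerate(y_true) if y in (classes[i], classes[j])]
        a_ij = binary_auc([y_proba[k][i] for k in keep],
                          [y_true[k] == classes[i] for k in keep])
        a_ji = binary_auc([y_proba[k][j] for k in keep],
                          [y_true[k] == classes[j] for k in keep])
        pair_aucs.append((a_ij + a_ji) / 2)
    return sum(pair_aucs) / len(pair_aucs)
```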
0easy
Title: [UX] Additional message for quota exceeded during failover on GCP Body: There is an additional warning shown during failover on GCP for the `quotaExceeded` error. We should get rid of it. <img width="724" alt="Image" src="https://github.com/user-attachments/assets/77220101-0078-4a5f-8e34-f1c5d108e0d6" />
0easy
Title: `@component_vue` with argument names shadowing python built-ins Body: Consider the following example of a tree view with `open` and `active` props synced to solara reactives: ```python import solara items = [ { "name": "Measurement_1", "icon": "", "children": [ { "name": "Fruit", "icon": "mdi-lightbulb", "children": [ {"name": "Node 1.1", "id": 5}, {"name": "Node 1.2", "id": 4, "icon": "mdi-delete"}, ], "id": 2, }, ], "id": 1, } ] @solara.component_vue("my_treeview.vue") def MyTreeview(items, value, active, open): pass act = solara.reactive(None) val = solara.reactive(None) open = solara.reactive(None) @solara.component def Page(): print("act", act.value) print("val", val.value) # requires selectable print("open", open.value) MyTreeview( items=items, value=val.value, on_value=val.set, active=act.value, on_active=act.set, open=open.value, on_open=open.set, ) solara.Button("set open", on_click=lambda: open.set([1, 2])) ``` my_treeview.vue ```js <template> <v-treeview :items="items" dense hoverable activatable selectable v-model="value" @update:active="value => active = value" :open="open" @update:open="value => open = value" > <template v-slot:prepend="{ item }"> <v-icon v-if="item.icon">{{ item.icon }}</v-icon> </template> </v-treeview> </template> ``` This gives the following traceback: ``` Traceback (most recent call last): File "C:\Users\jhsmi\pp\do-fret\.venv\lib\site-packages\reacton\core.py", line 364, in _create_widget widget = self.component.widget(**kwargs) File "C:\Users\jhsmi\pp\do-fret\.venv\lib\site-packages\ipyvue\VueTemplateWidget.py", line 146, in __init__ super().__init__(*args, **kwargs) File "C:\Users\jhsmi\pp\do-fret\.venv\lib\site-packages\ipywidgets\widgets\widget.py", line 506, in __init__ self.open() TypeError: 'list' object is not callable ``` probably because `open` argument shadows the python builtin `open` A workaround is to rename the argument `open` to something else, eg `open_`, the vue template then becomes: ```js :open="open_" 
@update:open="value => open_ = value" ``` The workaround might be considered the solution to the issue.
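A minimal plain-Python illustration of the collision (not solara/ipywidgets internals): an instance attribute named `open` shadows the method of the same name, so the widget's own `self.open()` call hits the list instead of the method.

```python
class Widget:
    def open(self):
        # stand-in for ipywidgets' Widget.open(), which opens the comm
        return "comm opened"

w = Widget()
w.open = [1, 2]  # what a trait/prop named `open` effectively does here
try:
    w.open()     # the instance attribute wins over the class method
except TypeError as exc:
    print(exc)   # 'list' object is not callable
```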
0easy
Title: turn categorical column into strings Body:
0easy
Title: Improve indexing functionality Body: Currently, when indexes are combined, their vector stores are unpacked and merged into one vector store. This is not ideal, as it makes the individual data less prominent: just a mess of vectors collected together. Instead, when composing indexes, we should use llama-index / langchain built-in functionality to create a composite index object, where the individual indexes that were combined remain distinct but are still attached together. Moreover, the indexing functionality could use a revamp of its data loading tools; it currently uses some old llama-index loaders to load things like EPUBs, PDFs, etc., and I'm sure better tools have been released by now.
0easy
Title: Running the App produces the error 'which python' returned non-zero exit status 1. Body: **Describe the bug** Running the App produces the error 'which python' returned non-zero exit status 1. Using Windows 10, all dependencies are installed. **To Reproduce** Steps to reproduce the behavior: 1. Launch the streamlit app 2. Under Examples click 'Language Translator' **Expected behavior** The app should generate a streamlit application as shown in the demo **Issue** Running the App produces the error 'which python' returned non-zero exit status 1. **Desktop (please complete the following information):** - OS: Windows 10 - Browser: Chrome **Additional context** **Do I need to create a new environment? Also, please add other required packages to the requirements.txt file.**
0easy
Title: [Feature]: Ensure benchmark serving do not import vLLM Body: ### 🚀 The feature, motivation and pitch vLLM's benchmark serving script is expected to be a standalone inference client that only requires minimum dependencies. Currently, it still imports `vllm` conditionally. The task is as follows: 1. Clearly define a requirements txt for benchmark serving client ``` numpy pandas Pillow tqdm transformers datasets ``` 2. Add a CI test that create a new uv environment and execute the script. Ensure there is no vLLM present. This can be part of existing tests for benchmark scripts. https://github.com/vllm-project/vllm/blob/main/.buildkite/run-benchmarks.sh 3. Make sure the existing usage of vLLM is moved to inlining whatever utility method is required. ### Alternatives _No response_ ### Additional context See #14879 for discussion, cc @houseroad @ywang96 ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
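One way the CI guard described in step 2 could be sketched (illustrative; the actual buildkite step and flag names may differ): check module availability without importing, and assert vLLM is absent in the fresh uv environment.

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` can be imported, without importing it."""
    return importlib.util.find_spec(name) is not None

# In the standalone benchmark environment, a CI step could then assert:
#     assert not module_available("vllm"), "benchmark env must not contain vLLM"
```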
0easy
Title: [Jobs] Fail to terminate jobs controller in abnormal state `INIT` even with `-p` Body: ``` NAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND sky-jobs-controller-11d9a692 1 min ago 1x Kubernetes(4CPU--32GB, cpus=4+, mem=8x, disk_size=50) INIT - sky jobs launch test.yaml... ``` ``` sky down sky-jobs-controller-11d9a692 -p sky.exceptions.ClusterNotUpError: Failed to connect to jobs controller, please try again later. During handling of the above exception, another exception occurred: sky.exceptions.NotSupportedError: Tearing down the jobs controller while it is in INIT state is not supported (this means a job launch is in progress or the previous launch failed), as we cannot guarantee that all the managed jobs are finished. Please wait until the jobs controller is UP or fix it with sky start sky-jobs-controller-11d9a692. ```
0easy
Title: Expand populate.py to generate fake Custom Fields Body: populate.py which is used during development to generate fake data could be improved to add some Custom Fields so that we can test areas that use that code more easily.
0easy
Title: Improve Windows Support Body: Currently, the package does not fully support Windows. The major reason is that UTF-8 is not well supported by `cmd`, `powershell`, etc.
0easy
Title: Add features Body: 1. Add sorting by time. 2. Support downloading items that are not shared. 3. Batch-download a specified user's liked/favorited posts, e.g. your own.
0easy
Title: Allow to set batch size for db clean CLI Body: ### Body The db clean CLI was added in https://github.com/apache/airflow/pull/20838 There is issue of timeout if the running the CLI on a very large table(s), ideally we should allow running the command with some kind of batch size flag that will split the work and execute it batch after batch. ### Committer - [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
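A sketch of what the batched execution could look like; the function name and flag semantics are illustrative, not the actual Airflow CLI:

```python
def run_in_batches(items, batch_size, action):
    """Apply `action` to successive slices of `items` so that no single
    database call has to process the whole table at once (sketch of a
    hypothetical --batch-size behavior for `airflow db clean`)."""
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")
    processed = 0
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        action(batch)   # e.g. one DELETE per batch, each in its own txn
        processed += len(batch)
    return processed
```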
0easy
Title: Add Oracle Approximating Shrinkage Estimator (OAS) Body: The output of EmpiricalCovariance is regularized by a shrinkage value impacted by the overall mean of the data. The goal would be to implement this estimator with post-processing changes to the fitted empirical covariance. This project is very similar to the ShrunkCovariance project and would combine into a medium project. When implemented in python re-using our EmpiricalCovariance estimator, this would be an easy project with a small time commitment. Implementing the super-computing distributed version using python would only work for distributed-aware frameworks. Extended goals would make this a hard difficulty, medium commitment project. This would require implementing the regularization in C++ in oneDAL both for CPU and GPU. Then this must be made available in Scikit-learn-intelex for making a new estimator. This would hopefully follow the design strategy used for our Ridge Regression estimator. https://scikit-learn.org/stable/modules/generated/sklearn.covariance.OAS.html
0easy
Title: Add support for cell id targeting Body: Now that cell ids are part of the 4.5 spec, we should have the execution identifier look for cell_id matches when gathering cells to execute.
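A minimal sketch of the gathering step, assuming nbformat-4.5-style cells carrying a top-level `id` field (the function name is illustrative, not the execution identifier's API):

```python
def cells_with_id(cells, target_id):
    """Gather the cells whose nbformat-4.5 `id` matches `target_id`."""
    return [c for c in cells if c.get("id") == target_id]

nb_cells = [
    {"id": "a1", "cell_type": "code", "source": "x = 1"},
    {"id": "b2", "cell_type": "code", "source": "y = 2"},
]
print(cells_with_id(nb_cells, "b2"))  # the second cell only
```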
0easy
Title: Django management command to migrate existing users Body: We could add a Django management command to handle user verification/status creation for old users when the dev team opts to use this package mid-project. It would also be helpful to have a specific flag for superusers and staff, making it easy to create a new staff user or superuser without needing email verification or hardcoding in the Django shell.
0easy
Title: Perform a Security Audit of Globaleaks Body: ## Description: Security is a critical aspect of any software project, and Globaleaks is no exception. In this task, you will help improve the security of Globaleaks by performing a security audit. This involves testing the public demo instances (such as demo.globaleaks.org and try.globaleaks.org) to identify potential vulnerabilities and areas for improvement. You will be asked to focus on common security concerns such as authentication, data integrity, and potential exposure of sensitive information. Additionally, Globaleaks has a [Security Policy](https://github.com/globaleaks/globaleaks-whistleblowing-software/security/policy) in place for reporting security issues responsibly. Please make sure to review it before submitting any findings. If you're interested in performing a full security audit, you can refer to previous **penetration tests** and **security audits** to help guide your testing. You may also have the option to officially publish your security audit report if it’s thorough and meets the required standards. ## Steps: 1. **Explore the Demo Instances:** - Test the [demo.globaleaks.org](https://demo.globaleaks.org/) and [try.globaleaks.org](https://try.globaleaks.org/) instances to review their security measures. Look for potential issues such as: - Authentication vulnerabilities - Data leakage - Encryption issues - Any other potential security flaws such as cross-site scripting (XSS), SQL injection, or improper access control. 2. **Perform Security Testing:** - Explore common attack vectors such as: - Brute-force login attempts. - Session fixation or hijacking. - CSRF or XSS vulnerabilities. - Document any vulnerabilities or areas for improvement that you discover. 3. 
**Review the Security Resources:** - Familiarize yourself with the following security-related resources to better understand the design and security measures of Globaleaks: - [Threat Model](https://docs.globaleaks.org/en/stable/security/ThreatModel.html): An overview of the potential threats that Globaleaks aims to mitigate. - [Application Security Specification](https://docs.globaleaks.org/en/stable/security/ApplicationSecurity.html): A detailed specification of security features and best practices used in Globaleaks. - [Encryption Protocol](https://docs.globaleaks.org/en/stable/security/EncryptionProtocol.html): Information on the encryption protocols used to ensure data confidentiality and integrity. 4. **Review Existing Security Audits:** - Globaleaks has undergone previous security audits. These documents can help inform your audit approach: - [Security Audits](https://docs.globaleaks.org/en/stable/security/PenetrationTests.html) - If you are interested in providing a **full security audit**, the reports from these previous tests are available, and you may also have the option to officially publish your own findings following the same process. 5. **Review the Security Policy:** - Before reporting any findings, make sure to review the official [Globaleaks Security Policy](https://github.com/globaleaks/globaleaks-whistleblowing-software/security/policy) to understand the proper process for reporting security issues. - Follow the guidelines in the policy for responsible disclosure. 6. **Report Your Findings:** - If you find any security issues, report them responsibly following the process outlined in the [Security Policy](https://github.com/globaleaks/globaleaks-whistleblowing-software/security/policy). - Provide clear details about the vulnerabilities, including how they were found and potential impact. - If no critical issues are found, provide general feedback on improving the security posture of Globaleaks. 7. 
**Submit a Pull Request (Optional):** - If you have identified and fixed minor security-related issues (such as updating dependencies, improving security headers, etc.), submit a pull request with your changes. - Ensure your pull request is based on the latest code version to avoid conflicts. ## Prerequisites: - **Basic Understanding of Web Security:** Familiarity with common web security vulnerabilities (e.g., XSS, SQL Injection, CSRF, etc.) is helpful. - **Knowledge of Security Testing Tools:** You can use tools like Burp Suite, OWASP ZAP, or manual testing methods for identifying security flaws. - **No Prior Security Experience Required:** While a basic understanding of security concepts is helpful, this task is designed to introduce you to security auditing and give you hands-on experience. ## Why it's a Great Contribution: - Contributing to a security audit is a high-impact task that helps ensure the safety and integrity of the Globaleaks platform. - Your work will help protect both the data of users and the overall trustworthiness of the project. - This is a great opportunity to gain experience in security auditing and become familiar with the best practices for secure software development. ## Helpful Links: - [Globaleaks GitHub Repository](https://github.com/globaleaks/globaleaks-whistleblowing-software) - [Globaleaks Security Policy](https://github.com/globaleaks/globaleaks-whistleblowing-software/security/policy) - [Threat Model](https://docs.globaleaks.org/en/stable/security/ThreatModel.html) - [Application Security Specification](https://docs.globaleaks.org/en/stable/security/ApplicationSecurity.html) - [Encryption Protocol](https://docs.globaleaks.org/en/stable/security/EncryptionProtocol.html) - [Security Audits](https://docs.globaleaks.org/en/stable/security/PenetrationTests.html)
0easy
Title: DOC: Add an example showing how to modify the colorbar Body: It would be helpful to have an example that shows how to extract the matplotlib colorbar from the display so the user can modify the size, font, etc. We show how to do this in the [SAIL example](https://arm-development.github.io/sail-xprecip-radar/radar-precip/plot-method-description-figure.html), but this should be in our main documentation.
0easy
Title: SECURITY: bad regex patterns in 'gensim/corpora/wikicorpus.py' may cause a 'ReDoS' security problem. Body: #### Problem description I found two bad regex patterns in 'gensim/corpora/wikicorpus.py': ``` RE_P7 = re.compile(r'\n\[\[[iI]mage(.*?)(\|.*?)*\|(.*?)\]\]', re.UNICODE) """Keep description of images.""" RE_P8 = re.compile(r'\n\[\[[fF]ile(.*?)(\|.*?)*\|(.*?)\]\]', re.UNICODE) """Keep description of files.""" ``` These patterns can cause a 'ReDoS' security problem; proof-of-concept code is below: ``` import re RE_P8 = re.compile(r'\n\[\[[fF]ile(.*?)(\|.*?)*\|(.*?)\]\]', re.UNICODE) re.findall(RE_P8, "\n[[file"+"|a"*1000+"|]") ``` Running the above code keeps CPU utilization at 100% for a very long time. For more detail about 'ReDoS' please see [owasp](https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS). #### Effect of this security problem Because I did not see 'RE_P7' or 'RE_P8' used anywhere, and I am not familiar with the gensim API, I cannot judge the impact of this security problem.
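One possible mitigation sketch (not gensim's official fix): the catastrophic backtracking comes from the ambiguous `(.*?)(\|.*?)*` prefix, where many different splits can match the same text. Rewriting each `|`-separated segment with a character class that cannot match `|` or `]` makes the split unambiguous, so matching stays near-linear. Note the capture groups differ from the original pattern (only the final description is kept here).

```python
import re

# Vulnerable pattern from wikicorpus.py, for comparison:
# RE_P8 = re.compile(r'\n\[\[[fF]ile(.*?)(\|.*?)*\|(.*?)\]\]', re.UNICODE)

# Hedged rewrite: each segment is `[^|\]]*`, which cannot cross a `|`,
# so the attack input "\n[[file" + "|a" * 1000 + "|]" no longer explodes.
RE_P8_SAFE = re.compile(
    r'\n\[\[[fF]ile[^|\]]*(?:\|[^|\]]*)*\|([^\]]*)\]\]', re.UNICODE)

print(RE_P8_SAFE.findall('\n[[File:x.jpg|thumb|a nice photo]]'))
# → ['a nice photo']
```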
0easy
Title: Disable workflow save button if there were no changes since last save Body: **Is your feature request related to a problem? Please describe.** It's just a minor frontend/UI change: the "Save workflow" button in the upper-right corner is always clickable and creates a new workflow version, even if there were no changes since the last version. **Describe the solution you'd like** If there were no changes, the button should be grayed out and not clickable. **Describe alternatives you've considered** Live with it; tbh not a big problem. **Additional context** https://github.com/user-attachments/assets/13a92952-00bc-48a5-89ee-b2298ca7f0a6
0easy
Title: "apache-airflow-providers-samba" not working with Windows shares due to fixed use of forward slashes Body: ### Apache Airflow version 2.10.5 ### If "Other Airflow 2 version" selected, which one? _No response_ ### What happened? When I connect to a Windows share using the samba provider I receive a STATUS_INVALID_PARAMETER error when I try to use the "listdir" function. This is because in https://github.com/apache/airflow/blob/main/providers/samba/src/airflow/providers/samba/hooks/samba.py line 88 the path is built with // at the beginning and / as the separator. When using the smbclient Python package directly and joining the path with backslashes, it works without any errors. ### What you think should happen instead? There should be a way to configure the samba provider to use backward slashes instead of forward slashes for joining the path. ### How to reproduce You will need a Windows computer with a network share which can be accessed using a username and password. E.g. computer name "windowsSharer", share "windowsShare", user "WindowsUser", ... The connection can then be configured in airflow. Create a simple dag where the SambaHook is created and the listdir function is called, e.g.: `hook = SambaHook(samba_conn_id=samba_conn_id) files_and_dirs = hook.listdir("windowsDirectory") print(f"{files_and_dirs}")` The listdir function will fail. However, using smbclient directly like this: `from smbclient import listdir, mkdir, register_session, rmdir, scandir` `from smbclient.path import isdir` `pathBackSlash = r"\\windowsSharer\windowsShare\windowsDirectory"` `for filename in listdir(pathBackSlash, username=r"WindowsUser@Domain", password="secret"):` `print(f"(unknown)")` will work, and when you change to forward slashes you get the same "STATUS_INVALID_PARAMETER" message as with the Airflow provider.
### Operating System Ubuntu 24.04.2 LTS (in wsl) ### Versions of Apache Airflow Providers apache-airflow-providers-samba==4.9.1 ### Deployment Virtualenv installation ### Deployment details _No response_ ### Anything else? _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
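A sketch of building the UNC path with the standard library's `ntpath` instead of hard-coded forward slashes (illustrative of the idea, not the actual hook change):

```python
import ntpath

# The hook currently builds "//host/share/dir" with forward slashes.
# ntpath joins with backslashes, producing the UNC form that worked
# when calling smbclient directly:
share = r"\\windowsSharer\windowsShare"
path = ntpath.join(share, "windowsDirectory")
print(path)  # \\windowsSharer\windowsShare\windowsDirectory
```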
0easy
Title: docs: add documentation on FastAPI swagger UI with BaseDoc Body: # Context We want to add documentation on how to create a FastAPI app with DocArray and get a Swagger UI with a BaseDoc.
0easy
Title: Drop `scipy` from dev dependencies Body: ## 🚀 Feature The scipy package is used in just one place, so we should figure out how to test that case without it: ``` $ git grep 'scipy' docker/Dockerfile: /opt/conda/bin/conda install -y python=$PYTHON_VERSION numpy pyyaml scipy ipython mkl mkl-include cython typing && \ requirements/requirements-dev.txt:scipy test/losses/test_hd.py:from scipy.ndimage import convolved ``` https://github.com/kornia/kornia/blob/216aa9d2f10f9300bffe952b1583f03b7f833be5/test/losses/test_hd.py#L49 ## Motivation Reduce the number of dependencies in kornia.
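One common pattern for this (a sketch; the kornia test suite would more likely use pytest's `importorskip`) is an optional import that lets the single scipy-dependent test be skipped when scipy is absent:

```python
def optional_import(name):
    """Import a module if present, else return None, so the one
    scipy-dependent test can be skipped when scipy is missing."""
    try:
        return __import__(name)
    except ImportError:
        return None

scipy = optional_import("scipy")
# A test module could then guard: if scipy is None, skip the test.
```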
0easy
Title: [UX] Additional message from OCI even though not enabled Body: <!-- Describe the bug report / feature request here --> I am getting the following message while running `sky launch -c test` with `pip install -e .[all]` and no OCI credentials: ``` INFO:oci.circuit_breaker:Default Auth client Circuit breaker strategy enabled ``` We should get rid of this. <!-- If relevant, fill in versioning info to help us troubleshoot --> _Version & Commit info:_ * `sky -v`: PLEASE_FILL_IN * `sky -c`: PLEASE_FILL_IN
0easy
Title: Deleting form submission redirects to first listing page Body: <!-- Found a bug? Please fill out the sections below. 👍 --> ### Issue Summary <!-- A summary of the issue. --> When you delete a form submission (on a later pagination page) then after confirming, you are redirected to the first pagination page. ### Steps to Reproduce 1. Load a fresh Wagtail Bakerydemo 2. Visit the "contact us" page. 3. Submit the form more than 22 times (so that there is more than 1 item on the second pagination page) 4. Visit the submission listing screen for the "contact us" page. 5. Navigate to the second pagination page. 6. Select a submission for deletion and confirm the deletion. 7. You are redirected. Notice that you now are on the first pagination page again. 8. Navigate to the second pagination page to confirm that there are still submissions listed there. Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead? - I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (**yes** / no) ### Technical details - Python version: 3.11.4 - Django version: 4.2.13 - Wagtail version: 6.1.2 - Browser version: _irrelevant_ ### Working on this <!-- Do you have thoughts on skills needed? Are you keen to work on this yourself once the issue has been accepted? Please let us know here. --> Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start. This could be a good first issue. Can probably be solved with a `next` parameter or similar on the delete confirm view and using the "current URL" as next on the listing page showing the delete action.
0easy
Title: Add the missing docstrings to the `string_evaluator.py` file Body: Add the missing docstrings to the [string_evaluator.py](https://github.com/scanapi/scanapi/blob/main/scanapi/evaluators/string_evaluator.py) file [Here](https://github.com/scanapi/scanapi/wiki/First-Pull-Request#7-make-your-changes) you can find instructions of how we create the [docstrings](https://www.python.org/dev/peps/pep-0257/#what-is-a-docstring). Child of https://github.com/scanapi/scanapi/issues/411
0easy
Title: [BUG] build and docs badge in README failing Body: The build and docs badges in the README.md show "failing", even though build and docs are not failing - what is going on here, and how to fix this? ![image](https://github.com/user-attachments/assets/cfe900d4-0064-4681-97b5-5fccec987b33)
0easy
Title: Refactor `App` Class in `app.py` for Streamlined Code and Maintenance Body: ### GitHub Issue: Refactoring the `App` Class in `app.py` for Enhanced Readability and Maintainability #### Overview The `App` class in `app.py` is central to our Nextpy application but currently handles an extensive range of functionalities, leading to complexity and potential maintainability issues. This issue aims to refactor `App` by effectively leveraging existing modules (`admin.py`, `event.py`, `state.py`) and improving code organization. #### Current State - The `App` class is multifaceted, combining numerous functionalities which complicates the codebase. #### Objective - Streamline the `App` class to efficiently utilize `admin.py`, `event.py`, and `state.py`. - Reduce redundancy and enhance code clarity. #### Proposal Details **1. Integration with Existing Modules:** - Thoroughly review and integrate `admin.py`, `event.py`, and `state.py` to offload respective functionalities from `App`. - Eliminate duplicate implementations in `App` that are already handled by these modules. **2. Streamlining Event Handling:** - Refine event handling in `App` using the structured approach defined in `event.py`. - Create a more intuitive interface between `App` and the event module for cleaner and more maintainable code. **3. State Management Refinement:** - Centralize state management responsibilities in `state.py`, and modify `App` to interact seamlessly with this module. - Simplify and clarify the state management processes within `App`. **4. Admin Dashboard Integration Enhancement:** - Extract and relocate admin dashboard setup (e.g., `setup_admin_dash`) to a dedicated class within `admin.py`. - Ensure this class handles all admin-related functionalities, providing a clean interface with `App`. **5. Code Cleanup and Optimization:** - Identify and refactor complex or redundant sections in `App`. - Focus on enhancing readability and execution efficiency. 
# TODO: - [ ] **Step 1: Module Integration Review** - Assess overlaps and interactions between `App` and the modules (`admin.py`, `event.py`, `state.py`), documenting the findings. - [ ] **Step 2: Event Handling Refinement** - Revise `App`'s event handling, aligning and integrating changes with `event.py`. - [ ] **Step 3: State Management Enhancement** - Overhaul `App`'s state management in coordination with modifications in `state.py`. - [ ] **Step 4: Admin Dashboard Integration Refinement** - Reorganize admin dashboard functionality from `App` to `admin.py`. - [ ] **Step 5: Code Cleanup and Refactoring** - Execute comprehensive code refinement in `App`, focusing on simplification and optimization. - [ ] **Step 6: Testing and Validation** - Confirm that refactoring retains existing functionality. - Implement extensive testing to verify new code structures and performance. #### Expected Outcomes: - A more streamlined and readable `App` class. - Reduced code redundancy and improved integration with existing modules. - A well-organized and efficient codebase, facilitating future development and maintenance. #### Additional Notes: - Ensure backward compatibility and maintain core functionality post-refactoring. --- This issue aims to significantly enhance the Nextpy framework's `app.py` file, focusing on structural improvements and efficient use of existing resources. The end goal is a more maintainable, clear, and efficient codebase that continues to deliver robust functionality.
0easy
Title: Create a runnable example notebook and test for Mocking Requests Library Body: Create a runnable notebook and test file for the documentation example: https://testbook.readthedocs.io/en/latest/examples/index.html#mocking-requests-library
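A minimal sketch of the pattern such a notebook/test pair would demonstrate, using only `unittest.mock` so no network call happens. The function and names here are illustrative, not the testbook documentation's actual example:

```python
from unittest import mock

def fetch_status(url, http_get):
    """Notebook-style function that takes the HTTP getter as a
    parameter, so a test can substitute a Mock for requests.get."""
    return http_get(url).status_code

# The mocked "response" object only needs the attribute the code reads
fake_get = mock.Mock(return_value=mock.Mock(status_code=200))
print(fetch_status("https://example.com", fake_get))  # 200
```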
0easy
Title: pdf output with code Body: ### Description I am hoping to be able to make pdf output with code, while using `mo.mpl.interactive` to display plots. Currently when I use the menu to do pdf output, it tells me to press `Ctrl+.` to switch to app mode before making the pdf output. All my plots output fine, but the code is hidden. ### Suggested solution I can think of two: 1. allow pdf output from notebook mode (I know there were correctness issues, so those would need to be fixed). 2. allow app mode to show code, just not editable. (Is this already possible?) ### Alternative _No response_ ### Additional context My use case is that I want a hard copy of "I did this analysis this way and got this result" that is immutable. I prefer pdf to html, as some features don't work in html output. E.g. the "dataframe viewer" doesn't seem to work in html output since it no longer has access to the dataframe. I just want a copy of what was on the screen (and would have been if I scrolled) when I said to export a copy, and pdf seems best matched to that.
0easy
Title: Constructors for auto generated classes? Body: We're using sgqlc to interact with a GQL service we've built and love the removal of dict-of-stuff navigation in the response objects and field selections. We can't quite figure out how to do the same for complex argument/input values though. As an example: ```python class FilterDateInput(sgqlc.types.Input): start = sgqlc.types.Field(String, graphql_name='start') end = sgqlc.types.Field(String, graphql_name='end') class FilterValueInput(sgqlc.types.Input): value = sgqlc.types.Field(String, graphql_name='value') class FilterInput(sgqlc.types.Input): date = sgqlc.types.Field(FilterDateInput, graphql_name='date') id = sgqlc.types.Field(FilterValueInput, graphql_name='id') ``` Ideally we would be able to generate a filter by: ```python gql_filter = FilterInput( date=FilterDateInput(start='04-12-2012', end='04-15-2012'), id=FilterValueInput(value='A17') ) operation.queryThings(filter=gql_filter, user="bill") ``` This would benefit from tool tips and autocomplete, whereas the current method we're aware of is: ```python gql_filter = { 'date': {'start':'04-12-2012', 'end':'04-12-2012'}, 'id':{'value':'A17'} } operation.queryThings(filter=gql_filter, user="bill") ``` which, as the schema grows, becomes increasingly unwieldy. Is there a way that we are missing to do this, or is there a way to generate classes that will act in this way using introspection of sgqlc’s autogenerated classes?
0easy
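One workaround for the sgqlc issue above, pending first-class constructors, is a thin dataclass layer that serializes to the nested-dict form sgqlc already accepts. This is a generic sketch, not part of sgqlc; the `Filter*` dataclass names are hypothetical stand-ins for the generated Input classes:

```python
from dataclasses import dataclass, asdict

# Hypothetical typed wrappers mirroring the generated Input classes.
# asdict() recursively produces the plain-dict form that sgqlc's
# operation arguments already accept, while keeping tooltips and
# autocomplete in the editor.
@dataclass
class FilterDate:
    start: str
    end: str

@dataclass
class FilterValue:
    value: str

@dataclass
class Filter:
    date: FilterDate
    id: FilterValue

gql_filter = asdict(Filter(date=FilterDate("04-12-2012", "04-15-2012"),
                           id=FilterValue("A17")))
```

The resulting `gql_filter` dict can then be passed exactly where the hand-written dict was, e.g. `operation.queryThings(filter=gql_filter, user="bill")`.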
Title: Make use of `LiteralString` for raw queries Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> [PEP 675](https://www.python.org/dev/peps/pep-0675/) is introducing a new type, `LiteralString`, this will mean that the type checker can help prevent SQL injection attacks! The following will then raise an error when type checking: ```py await client.query_raw( f'SELECT * FROM User WHERE name = {user_name}' ) ``` ## Suggested solution <!-- A clear and concise description of what you want to happen. --> It should be noted that although the PEP hasn't been accepted yet, it has been implemented in pyright. We should annotate raw SQL arguments using `LiteralString`, support for this has just been added to pyright. ## Additional context <!-- Add any other context or screenshots about the feature request here. --> https://github.com/microsoft/pyright/issues/3023
0easy
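The annotation change requested above can be sketched as follows; `query_raw` here is a hypothetical stand-in for the client method, and the fallback keeps the snippet running on Python versions before 3.11, where `LiteralString` is not yet in the stdlib:

```python
import sys

# PEP 675's LiteralString landed in the stdlib in Python 3.11;
# fall back to plain str on older versions so the sketch still runs.
if sys.version_info >= (3, 11):
    from typing import LiteralString
else:
    LiteralString = str

def query_raw(sql: "LiteralString") -> str:
    # A PEP 675-aware type checker (e.g. pyright) rejects calls where
    # `sql` is built from runtime data, such as an f-string that
    # interpolates user input -- flagging potential SQL injection.
    return sql

# A literal string type-checks fine; a parameterized query is the
# injection-safe way to pass values.
result = query_raw("SELECT * FROM User WHERE name = ?")
```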
Title: Leave logging configuration to client code Body: **Description** The `import vaex` statement enforces an opinionated logging configuration. This prevents client code from setting a different logging configuration. For example: ``` import logging import vaex logging.basicConfig(level=logging.INFO, format="%(levelname)s --> %(message)s") logger = logging.getLogger(__name__) if __name__ == "__main__": logger.info("Logging config is usually left to client code.") ``` does output: ``` INFO:MainThread:__main__:Logging config is usually left to client code. ``` instead of: ``` INFO --> Logging config is usually left to client code. ``` **Source code responsible for this behavior** https://github.com/vaexio/vaex/blob/master/packages/vaex-core/vaex/__init__.py#L626 **Suggested solution** Encapsulate the logging configuration *recommended* by Vaex in a function that is not executed at module import. This leaves users free to choose between the Vaex logging config and their own. For example: ``` import vaex vaex.set_logging_basic_config() ``` **Thanks** Thank you for this awesome and exciting project!
0easy
Title: Unstable ETag because of hash randomisation Body: Python's `hash()` function is salted by default (see [__hash__](https://docs.python.org/3.8/reference/datamodel.html#object.__hash__) docs). Therefore the hash of the same object doesn't normally produce the same value in different processes. As a result, because of the way ETags are produced, the cache gets invalidated when the service restarts. Or, if the service is deployed with more than one backend behind a load balancer, every backend will return a different ETag for the same resource, making it useless. https://github.com/long2ice/fastapi-cache/blob/91ba6d75524b1f6a17ae6a3300ab51b9aa1fcf71/fastapi_cache/decorator.py#L201 As a workaround, setting `PYTHONHASHSEED` to a fixed value disables hash randomization.
0easy
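A process-stable alternative to the salted built-in `hash()` is a `hashlib` digest. A minimal sketch, not the actual fastapi-cache key builder:

```python
import hashlib

def stable_etag(payload: bytes) -> str:
    # hashlib digests are deterministic across processes and hosts,
    # unlike the built-in hash(), which is salted per process
    # (controlled by PYTHONHASHSEED).
    return hashlib.md5(payload).hexdigest()
```

Deriving the ETag from the serialized response this way yields identical values on every backend and across restarts, so caches stay valid.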
Title: Add suffix "acion" to Spanish SnowballStemmer Body: Hi, I'm a native Spanish speaker and I've noticed words with the suffix "acion" don't get stemmed. (You got the plural version "aciones" right, but not the singular one.) Please add the suffix "acion" to the Spanish SnowballStemmer. Thank you for your work!
0easy
Title: Empty completion list for file in `aws s3 cp` Body: Typing on Debian: ```xsh pip install awscli $XONSH_TRACE_COMPLETIONS=True aws s3 cp <press tab to insert filename> s3://path/ ``` Result <img width="139" alt="Image" src="https://github.com/user-attachments/assets/411ccc3e-492e-43c1-af1a-8315e97f5e3d" /> Trace (one result with empty space `' '`): ```xsh TRACE COMPLETIONS: Getting completions with context: TRACE COMPLETIONS: Got 1 results from exclusive completer 'bash': CompletionContext(command=CommandContext(args=(CommandArg(value='aws', opening_quote='', closing_quote='', is_io_redir=False), CommandArg(value='s3', opening_quote='', closing_quote='', is_io_redir=False), CommandArg(value='cp', opening_quote='', closing_quote='', is_io_redir=False), CommandArg(value='s3://path/', opening_quote='', closing_quote='', is_io_redir=False)), arg_index=3, prefix='track', suffix='', opening_quote='', closing_quote='', is_after_closing_quote=False, subcmd_opening=''), python=PythonContext('aws s3 cp track s3://path/', 15, is_sub_expression=False)) [(RichCompletion(' ', prefix_len=5, append_closing_quote=False, append_space=True), 5)] ``` ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
0easy
Title: [DOC] FCNClassifier publication reference is incorrect Body: #### Describe the issue linked to the documentation The [FCNClassifier](https://www.sktime.net/en/latest/api_reference/auto_generated/sktime.classification.deep_learning.FCNClassifier.html) references the work of Zhao et al. 2017 [1], while it is actually an implementation of Wang et al. 2017 [2]. Also see the [underlying implementation](https://github.com/hfawaz/dl-4-tsc/blob/master/classifiers/fcn.py). [1] Zhao, B., Lu, H., Chen, S., Liu, J., & Wu, D. (2017). Convolutional neural networks for time series classification. Journal of Systems Engineering and Electronics, 28(1), 162–169. https://doi.org/10.21629/JSEE.2017.01.18 [2] Wang, Z., Yan, W., & Oates, T. (2017). Time series classification from scratch with deep neural networks: A strong baseline. 2017 International Joint Conference on Neural Networks (IJCNN), 1578–1585. https://doi.org/10.1109/IJCNN.2017.7966039 #### Suggest a potential alternative/fix Put the right reference
0easy
Title: [ENH] Integrate pypots for Partially Observed Time Series (POTS) as Imputers in sktime Body: **Is your feature request related to a problem? Please describe.** Handling Partially Observed Time Series (POTS) is a common challenge in real-world scenarios. Currently **sktime** lacks native support for models specifically designed to handle POTS. The **pypots** library offers specialized models for imputation and forecasting on partially observed time series. Integrating **pypots** into sktime would extend its capabilities for users dealing with incomplete datasets and provide more robust solutions for missing data. **Describe the solution you'd like** Integrate **pypots** into sktime as part of the **series-to-series** transformers, specifically under **imputers**. Since many pypots models can also be used for forecasting, the integration should consider the following: - Designing the interface so that pypots models can act as imputers while enabling potential forecasting use cases. - Investigating if common neural network components in pypots can be factored out to create multi-purpose models (usable for both imputation and forecasting). - Evaluating whether pypots is compatible with imputation-by-forecasting strategies and, if so, aligning the integration with sktime's forecasting API, which would give users both capabilities. For context, you can check out the official `PyPOTS` documentation: https://pypots.com/ Also, you can check out their codebase: https://github.com/WenjieDu/PyPOTS Additionally, you can check out the BaseImputer class (which needs to be interfaced): https://github.com/WenjieDu/PyPOTS/blob/8c89c1a19ef0d7d7e8bcbbba594c0956fa2ea81e/pypots/imputation/base.py#L4
0easy
Title: Feature: Broker event handlers Body: **Is your feature request related to a problem? Please describe.** If multiple brokers are used, a user might need to stop processing messages when one broker depends on another. **Describe the solution you'd like** Handle connected/disconnected/reconnected events. Multiple FastStream app instances might be instantiated. As I understand it, brokers that are not referenced by any FastStream app do not reconnect automatically in case of network failures or similar events. A reconnect feature could probably be enabled for a broker that is not referenced by any app. **Feature code example** ```python from faststream import FastStream from faststream.kafka import KafkaBroker from faststream.rabbit import RabbitBroker kafka_broker = KafkaBroker(bootstrap_servers="127.0.0.1:9092") rabbit_broker = RabbitBroker("amqp://127.0.0.1:5672", virtualhost="/") @kafka_broker.on_disconnect async def kafka_disconnected_cb(): pass @rabbit_broker.on_disconnect async def rabbit_disconnected_cb(): pass ... ``` ref1: https://github.com/airtai/faststream/issues/526
0easy
Title: Log Uniform distribution Body: How about a Log Uniform distribution, i.e., the log of a variable is uniformly distributed. Implementation: ``` import jax import jax.numpy as jnp import numpyro.distributions as dist class LogUniform(dist.Uniform): def sample(self, key, sample_shape=()): shape = sample_shape + self.batch_shape sample = jax.random.uniform(key, shape=shape, minval=jnp.log(self.low), maxval=jnp.log(self.high)) return jnp.exp(sample) ``` A more polished version would check that low and high are both > 0. This is rather simple for users to do, but quite convenient.
0easy
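The sampling idea behind the proposal above, including the suggested positivity check on `low` and `high`, can be sketched library-agnostically with the standard library (the NumPyro subclass would add the same validation in `__init__`):

```python
import math
import random

def log_uniform_sample(low, high, rng=random):
    # The polished check suggested in the issue: the support must be
    # strictly positive for log() to be defined.
    if not (0 < low < high):
        raise ValueError("LogUniform requires 0 < low < high")
    # log(sample) is uniform on [log(low), log(high)], so the sample
    # itself is log-uniformly distributed on [low, high].
    return math.exp(rng.uniform(math.log(low), math.log(high)))

sample = log_uniform_sample(1e-3, 1e3)
```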
Title: Update TSNE docs to clarify what is passed in on instantiation or fit Body: **Describe the bug** Currently, the TSNE docs [show an example](http://www.scikit-yb.org/en/latest/api/text/tsne.html#t-sne-corpus-visualization) where a variable named "labels" is passed into the `fit` method; however, this variable actually stores `y` (the target), not the labels (which the `fit` method can derive from `y`). This has [led to some confusion](https://github.com/DistrictDataLabs/yellowbrick/pull/659) about what values should and can be passed in when and where. **Proposed Fix** Update the TSNE docs to something like: ```python from yellowbrick.text import TSNEVisualizer from sklearn.feature_extraction.text import TfidfVectorizer # Load the data and create document vectors corpus = load_corpus('hobbies') tfidf = TfidfVectorizer() X = tfidf.fit_transform(corpus.data) y = corpus.target tsne = TSNEVisualizer() tsne.fit(X, y) tsne.poof() ``` We should also add another example to illustrate how a user can pass in the labels as a list on instantiation, e.g.: ```python ... labels = corpus.categories tsne = TSNEVisualizer(labels=labels) tsne.fit(X, y) tsne.poof() ``` Sincere thanks to @jeromemassot for alerting us to this point of confusion!
0easy
Title: open_datatree `group` parameter does not support Iterable[str] Body: ### What is your issue? The type annotation for the `group` parameter of the `open_datatree` function is `str | Iterable[str] | Callable | None = None` in all backends, for example https://github.com/pydata/xarray/blob/f01096fef402485092c7132dfd042cc8f467ed09/xarray/backends/zarr.py#L1285 However, the current implementation does not support `Iterable[str]`; therefore, it should be removed from the annotation in all backends.
0easy
Title: [Core/UX] Improve the display of returncode for multi-node Body: <!-- Describe the bug report / feature request here --> When a user's job is running on multiple nodes and one node fails with a return code, e.g. 1, SkyPilot will kill the processes on the other nodes, which exit with return code 137. It is confusing for users to see a list of return codes like the following: `ERROR: Job 1 failed with return code list: [1, 137, 137]` Instead, we should show a message like the following: ``` ERROR: Job 1 failed with return code 1 on node worker-2; SkyPilot cleaned up the processes on the other nodes with return code 137 ``` <!-- If relevant, fill in versioning info to help us troubleshoot --> _Version & Commit info:_ * `sky -v`: PLEASE_FILL_IN * `sky -c`: PLEASE_FILL_IN
0easy
Title: The line break character in the name of the conversation file causes an OSError Body: ### Describe the bug During the neural network's response, the first few words used to generate the file name may contain the \n character, which is not removed by the check in the interpreter\core\core.py file (line 284), so an OSError is raised. The easiest solution I see is to add the \n character to the list of characters on line 272. Now: '<>:"/\\|?*!' Will be: '<>:"/\\|?*!\n' ### Reproduce You just need to compose a query where the neural network uses a line break in the first few characters. ### Expected behavior The file was expected to be created successfully. ### Screenshots ![image](https://github.com/user-attachments/assets/a80dadf8-5cc9-44cc-9a2c-0ff4720e5038) ### Open Interpreter version 0.3.7 ### Python version 3.10.11 ### Operating System name and version Windows 11 ### Additional context _No response_
0easy
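The fix proposed above amounts to extending the forbidden-character set with line breaks. A standalone sketch of such a sanitizer (not the actual Open Interpreter code; the `\r` addition is an extra suggestion, since carriage returns cause the same failure):

```python
def sanitize_filename(name: str, forbidden: str = '<>:"/\\|?*!\n\r') -> str:
    # Drop every character the original line-272 check disallows,
    # plus line breaks, which otherwise raise OSError when the
    # conversation file is created on Windows.
    return "".join(ch for ch in name if ch not in forbidden)
```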
Title: About fine-tuning SlowFast on my own dataset Body: I followed the script in https://gluon-cv.mxnet.io/build/examples_action_recognition/finetune_custom.html#id1, and it worked. But when I changed only the model to `slowfast_4x16_resnet50_custom`, this error was raised: ``` Traceback (most recent call last): File "finetune_custom.py", line 207, in <module> pred = net(X) File "/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/gluon/block.py", line 548, in __call__ out = self.forward(*args) File "/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/gluon/block.py", line 925, in forward return self.hybrid_forward(ndarray, x, *args, **params) File "/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/gluoncv/model_zoo/action_recognition/slowfast.py", line 261, in hybrid_forward slow_input = F.slice(x, begin=(None, None, self.fast_frames, None, None), end=(None, None, self.fast_frames + self.slow_frames, None, None)) File "<string>", line 86, in slice File "/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke ctypes.byref(out_stypes))) File "/DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call raise MXNetError(py_str(_LIB.MXGetLastError())) mxnet.base.MXNetError: [13:36:28] src/operator/tensor/./matrix_op-inl.h:688: Check failed: b < len (32 vs. 
32) : slicing with begin[2]=32 exceeds limit of input dimension[2]=32 Stack trace: [bt] (0) /DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x4a16ab) [0x7f0adcbd06ab] [bt] (1) /DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2403ac5) [0x7f0adeb32ac5] [bt] (2) /DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2405cd6) [0x7f0adeb34cd6] [bt] (3) /DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::imperative::SetShapeType(mxnet::Context const&, nnvm::NodeAttrs const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, mxnet::DispatchMode*)+0x1fb1) [0x7f0adee29ca1] [bt] (4) /DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::Imperative::Invoke(mxnet::Context const&, nnvm::NodeAttrs const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&)+0x1db) [0x7f0adee3394b] [bt] (5) /DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2602da9) [0x7f0aded31da9] [bt] (6) /DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/mxnet/libmxnet.so(MXImperativeInvokeEx+0x6f) [0x7f0aded3239f] [bt] (7) /DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/lib-dynload/../../libffi.so.6(ffi_call_unix64+0x4c) [0x7f0b0be2aec0] [bt] (8) /DATA/disk1/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/lib-dynload/../../libffi.so.6(ffi_call+0x22d) [0x7f0b0be2a87d] ``` Please tell me how to solve it. Looking forward to your reply.
0easy
Title: How To Test TMO Indicator Body: I want to first give a big thanks to @luisbarrancos and @twopirllc for working on the indicator. I have been trying to backtest it but am having trouble. I keep getting an error when inputting the first part of the code: ```python from numpy import broadcast_to, isnan, nan, nansum, newaxis, pad, sign, zeros from numpy.lib.stride_tricks import sliding_window_view from pandas import DataFrame, Series from pandas_ta._typing import DictLike, Int from pandas_ta.ma import ma from pandas_ta.utils import ( v_bool, v_mamode, v_offset, v_pos_default, v_series ) ``` I get this error when running it: ModuleNotFoundError: No module named 'pandas_ta._typing' Do I need to run the code on another platform? I am currently using Jupyter.
0easy
Title: Most plotter instance methods should be static methods Body: https://github.com/mithi/hexapod-robot-simulator/blob/5c1f8a187e7497a37e9b2b5d66ec2fe72b3cc61f/hexapod/plotter.py#L13 ``` def _draw_hexapod(self, fig, hexapod) def _draw_scene(self, fig, hexapod) def change_camera_view(self, fig, camera) ``` See also: [Introduction to Static Methods for Dummies](https://realpython.com/instance-class-and-static-methods-demystified/)
0easy
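The conversion suggested in the issue above is mechanical: drop `self` and add the decorator. A minimal sketch with a hypothetical method body (the real plotter methods operate on Plotly figures):

```python
class Plotter:
    # Before: an instance method that never touched `self`.
    # After: a static method, callable on the class or on an instance.
    @staticmethod
    def change_camera_view(fig: dict, camera: dict) -> dict:
        fig["layout"] = {"scene": {"camera": camera}}
        return fig

# No instance needed any more:
fig = Plotter.change_camera_view({}, {"eye": {"x": 1, "y": 1, "z": 1}})
```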
Title: ColorMap.getByIndex() returns wrong colors Body: <!-- In the following, please describe your issue in detail! --> <!-- If some sections do not apply, just remove them. --> ### Short description <!-- This should summarize the issue. --> ColorMap.getByIndex() returns wrong colors. ### Code to reproduce <!-- Please provide a minimal working example that reproduces the issue in the code block below. Ideally, this should be a full example someone else could run without additional setup. --> ```python In [1]: import pyqtgraph as pg In [2]: cm = pg.ColorMap([0.0, 1.0], [(0,0,0), (255,0,0)]) In [3]: cm.getByIndex(0) Out[3]: PySide6.QtGui.QColor.fromRgbF(0.000000, 0.000000, 0.000000, 0.003922) In [4]: cm.getByIndex(1) Out[4]: PySide6.QtGui.QColor.fromRgbF(0.003922, 0.000000, 0.000000, 0.003922) ``` ### Tested environment(s) * PyQtGraph version: 0.13.2.dev0
0easy
Title: Latest Modin as a data source for pandas-ai Body: ### 🚀 The feature Hey folks, I just saw you had added support for Modin as a data source for pandas-ai in https://github.com/Sinaptik-AI/pandas-ai/pull/907. This is a great addition to the other data sources, like pandas and polars, which were introduced much earlier. I am a little concerned about the Modin version pandas-ai currently supports, namely [0.18.1](https://github.com/Sinaptik-AI/pandas-ai/blob/66dea557956a6e378856472013340c2ced9215c4/poetry.lock#L3369). It is too old and does not contain the latest performance features and optimizations. Would it be possible to support the latest Modin version? ### Motivation, pitch All libraries are advancing and making great progress with respect to ultimate performance in their latest versions, and pandas with Modin are no exceptions. To speed up chatting with data through pandas-ai and reduce users' waiting time, it would be great to support the latest versions of Modin, pandas, etc. ### Alternatives _No response_ ### Additional context _No response_
0easy
Title: Better, easier, and integrated support for color bars Body: Add better support for easier and integrated color bars! Usually this is achieved with `layout.coloraxis.colorbar` but I'm not sure how easy it is for users to manually integrate this with our ridgeplots. At least add an example to the docs? [(REF1)](https://plotly.com/python/colorscales/#hiding-or-customizing-the-plotly-express-color-bar) Refs: - https://plotly.com/python/colorscales/#customizing-tick-text-on-discrete-color-bars - https://plotly.com/python/colorscales/#using-label-aliases-on-colorbars - https://plotly.com/python/colorscales/#positioning-colorbars
0easy
Title: Add fork me on GitHub info/wording Body: We could use some more obvious copy on the website for info on how to fork on GitHub: https://awesomedjango.org/ This can go in `_includes/header.html` or `_includes/footer.html`.
0easy
Title: add convenience drop_sweeps function to radar class Body: Per conversation on the [mailing list](https://groups.google.com/forum/#!topic/pyart-users/DKH1eYdIPuM), it might be nice to have a convenient way to remove sweeps from a radar volume. This may also be of interest in Issue #649 ?
0easy
Title: [Feature request] Add apply_to_images to PlasmaBrightnessContrast Body:
0easy
Title: Store name for each dimension Body: It would be useful to be able to give a name to a dimension, mainly for plotting and other bookkeeping purposes. We could also use it in https://github.com/scikit-optimize/scikit-optimize/blob/master/skopt/utils.py#L322 and friends. I'd propose the following API: `Dimension(..., name=None)`, so that existing code keeps working. I'd also not allow people to set this name via the "inferring things" interface.
0easy
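The proposed `Dimension(..., name=None)` API could be sketched as follows (a hypothetical minimal class, not skopt's actual `Dimension`):

```python
class Dimension:
    def __init__(self, low, high, name=None):
        # `name` defaults to None, so existing call sites keep working.
        self.low, self.high = low, high
        self.name = name

    def __repr__(self):
        # The stored name makes plots and logs self-describing.
        label = self.name if self.name is not None else "unnamed"
        return f"Dimension({label}, [{self.low}, {self.high}])"

dim = Dimension(0.0, 1.0, name="learning_rate")
```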
Title: Util function to provide the CG dashboard Deepdrills URL Body: Define a function that takes the KPI id and the dashboard id as input and returns the URL of the Deepdrill analysis for that KPI inside the given dashboard.
0easy
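Such a helper might look like the following; the route pattern and base URL here are placeholder assumptions, since the real Chaos Genius URL scheme is not specified in the issue:

```python
def deepdrills_url(kpi_id: int, dashboard_id: int,
                   base_url: str = "https://chaosgenius.example.com") -> str:
    # Hypothetical route layout: a dashboard page with a per-KPI
    # Deepdrills path segment. Swap in the real route once known.
    return f"{base_url}/dashboard/{dashboard_id}/deepdrills/{kpi_id}"
```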
Title: Reproduce the error described in PR #104 and review the fix Body: - [ ] Reproduce the error found in #104. - [ ] Review the pull request and proposed fix
0easy
Title: [FEATURE] Automatically load fixups for FastAPI Body: **Is your feature request related to a problem? Please describe.** Sometimes users are not aware of some incompatibilities with FastAPI, so some fixups are required. **Describe the solution you'd like** Detect FastAPI based on the `app` type inside loaders like `specs.openapi.loaders.from_dict` and automatically load the proper fixup. This should decrease the friction for FastAPI users. The implementation could check all the classes inside the MRO by name (as we know which class is used by a FastAPI app).
0easy
Title: Feature: logging configuration from file Body: **Is your feature request related to a problem? Please describe.** I would like to use `--log-config` argument in FastStream CLI to automatically load my logging configuration from file. This functionality is present in uvicorn and gunicorn. - https://www.uvicorn.org/settings/#logging - https://docs.gunicorn.org/en/stable/settings.html#logconfig **Describe the solution you'd like** FastStream CLI could load configuration file based on extension and use either `logging.fileConfig` or `logging.dictConfig` (with proper parsing). **Feature code example** This is the example in gunicorn source code: https://github.com/benoitc/gunicorn/blob/903792f152af6a27033d458020923cb2bcb11459/gunicorn/glogging.py#L243 **Describe alternatives you've considered** I'm currently using my own class that does the same thing but I thought it would be handy to have it implemented in FastStream. **Additional context** This functionality in gunicorn uses `logging.dictConfig` and `logging.fileConfig`. Most likely because `logging.fileConfig` does not support JSON files.
0easy
Title: `logging` module log level is not restored after execution Body: Hi, It seems that the Robot handler changes the root logger log level via the ``set_level`` function (``robot.output.pyloggingconf``), but the original root logger level is not restored after the end of the ``robot.running.model.TestSuite.run`` method or the ``robot.run`` module. The original context manager: ```python @contextmanager def robot_handler_enabled(level): root = logging.getLogger() if any(isinstance(h, RobotHandler) for h in root.handlers): yield return handler = RobotHandler() old_raise = logging.raiseExceptions root.addHandler(handler) logging.raiseExceptions = False set_level(level) try: yield finally: root.removeHandler(handler) logging.raiseExceptions = old_raise ``` Would it be necessary to restore the log level after changing it, in case the test script or any other third-party tool has already modified it for any reason? ```python @contextmanager def robot_handler_enabled(level): root = logging.getLogger() if any(isinstance(h, RobotHandler) for h in root.handlers): yield return handler = RobotHandler() old_raise = logging.raiseExceptions * -> old_level = logging.getLevelName(root.level) root.addHandler(handler) logging.raiseExceptions = False set_level(level) try: yield finally: root.removeHandler(handler) logging.raiseExceptions = old_raise * -> set_level(old_level) ```
0easy
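The restore step proposed in the issue above can be isolated into a plain context manager that snapshots the root logger level on entry and puts it back on exit (a generic sketch, independent of Robot Framework internals):

```python
import logging
from contextlib import contextmanager

@contextmanager
def temporary_root_level(level):
    root = logging.getLogger()
    old_level = root.level          # snapshot before modifying
    root.setLevel(level)
    try:
        yield
    finally:
        root.setLevel(old_level)    # restore even if the body raises
```

Wrapping the suite execution in such a manager guarantees that any level a test script or third-party tool had configured survives the run.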
Title: Add version for Mock Generator Body: /kind feature We should add a new instruction to the [Makefile](https://github.com/kubeflow/katib/blob/master/Makefile) to run the [`mockgen.sh`](https://github.com/kubeflow/katib/blob/master/scripts/mockgen.sh) script with a specific mock version. That will help us keep the files consistent. /good-first-issue
0easy
Title: On the global search page, the input box sometimes requires the keyword to be typed twice before the search runs Body: After performing one search, when entering a keyword for a second search, the input gets overwritten by the content of the first search, so it has to be typed again before the search executes. Possibly an issue with streamlit's lazy-evaluation strategy?
0easy