text: string (lengths 20 to 57.3k)
labels: class label (4 classes)
Title: How to build a CNN on BFV? Body: Hello! I am doing research related to neural networks and homomorphic encryption. Your library is amazing and lets you explore the TFHE scheme! As I understand it, the convolutional neural network example works with this scheme. However, a question arises: in one of the issues I saw that you support several homomorphic encryption schemes, including the BFV scheme. So my question is: how can I build a CNN with this scheme? And if I am wrong and the CNN already works on BFV, is it possible to build it on TFHE? P.S. (This is about the example docs/advanced_examples/ConvolutionalNeuralNetwork.ipynb)
2hard
Title: [Bug] pip install -e .[all,dev,notebooks] giving an error Body: ### Describe the bug In the instructions under "Install TTS", there is a command to install extras that causes an error when I run it: `pip install -e .[all,dev,notebooks] # Select the relevant extras` ### To Reproduce I tried each one of those options: [all, dev, notebooks] ``` `pip install -e . all` `pip install -e . dev` `pip install -e . notebooks` ``` They all came back with an error something like this: ``` steve@gpu2:~/workspace/TTS$ pip3.10 install -e . dev Defaulting to user installation because normal site-packages is not writeable Obtaining file:///home/steve/workspace/TTS Installing build dependencies ... done Checking if build backend supports build_editable ... done Getting requirements to build editable ... done Preparing editable metadata (pyproject.toml) ... done ERROR: Could not find a version that satisfies the requirement dev (from versions: none) ERROR: No matching distribution found for dev ``` Am I entering this command wrong? Notice that there is no space between . and [ in the original command listed in the README. I am assuming there should be a space in between; not putting one there results in the following error: ``` steve@gpu2:~/workspace/TTS$ pip3.10 install -e .all Defaulting to user installation because normal site-packages is not writeable ERROR: .all is not a valid editable requirement. It should either be a path to a local project or a VCS URL (beginning with bzr+http, bzr+https, bzr+ssh, bzr+sftp, bzr+ftp, bzr+lp, bzr+file, git+http, git+https, git+ssh, git+git, git+file, hg+file, hg+http, hg+https, hg+ssh, hg+static-http, svn+ssh, svn+http, svn+https, svn+svn, svn+file). ``` It's been some time since I have worked in Linux, so forgive me if this is obvious. Thanks! ### Expected behavior I am not expecting an error message. ### Logs _No response_ ### Environment ```shell I don't even have a bin directory yet. ``` ### Additional context _No response_
0easy
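A brief note on the record above, offered as an assumption rather than a confirmed fix: pip expects the extras list attached directly to the path with no space, e.g. `pip install -e ".[all,dev,notebooks]"`. The quotes keep shells such as zsh from expanding the square brackets, while a form like `pip install -e . dev` makes pip treat `dev` as a separate package name, which matches the "No matching distribution found for dev" error shown.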
Title: Failure messages Body: We are trying a simple test. while verifying the response i got error in verify ``` raise TestFailError("Test '{:s}' failed:\n{:s}".format(self.name, self._str_errors()), failures=self.errors) vern.util.exceptions.TestFailError: Test '23_03_Post P1 V1' failed: E - Value mismatch in body: Type of returned data was different than expected (expected["0"]["biosStartAddress"] = '3022853549', actual["0"]["biosStartAddress"] = '3022853549') ############################################################################## ================================================= test session starts ================================================= platform win32 -- Python 3.5.0, pytest-4.0.0, py-1.7.0, pluggy-0.8.0 -- c:\python35-32\python.exe cachedir: .pytest_cache rootdir: E:\Tavern\BIOS\AttestationMeasurementService, inifile: plugins: tavern-0.20.0 collected 1 item test_Test.tavern.yaml::Test_23_ Put B1 P1 V1 and B1 P2 V1 should be allowed FAILED [100%] ====================================================== FAILURES ======================================================= E:\Tavern\BIOS\AttestationMeasurementService\test_Test.tavern.yaml::Test_23_ Put B1 P1 V1 and B1 P2 V1 should be allowed c:\python35-32\lib\site-packages\_pytest\runner.py:211: in __init__ self.result = func() c:\python35-32\lib\site-packages\_pytest\runner.py:193: in <lambda> lambda: ihook(item=item, **kwds), c:\python35-32\lib\site-packages\pluggy\hooks.py:284: in __call__ return self._hookexec(self, self.get_hookimpls(), kwargs) c:\python35-32\lib\site-packages\pluggy\manager.py:67: in _hookexec return self._inner_hookexec(hook, methods, kwargs) c:\python35-32\lib\site-packages\pluggy\manager.py:61: in <lambda> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False, c:\python35-32\lib\site-packages\pluggy\callers.py:208: in _multicall return outcome.get_result() c:\python35-32\lib\site-packages\pluggy\callers.py:80: in get_result raise ex[1].with_traceback(ex[2]) c:\python35-32\lib\site-packages\pluggy\callers.py:187: in _multicall res = hook_impl.function(*args) c:\python35-32\lib\site-packages\_pytest\runner.py:121: in pytest_runtest_call item.runtest() c:\python35-32\lib\site-packages\tavern\testutils\pytesthook.py:431: in runtest run_test(self.path, self.spec, self.global_cfg) c:\python35-32\lib\site-packages\tavern\core.py:145: in run_test run_stage_(sessions, stage, tavern_box, test_block_config) c:\python35-32\lib\site-packages\tavern\core.py:180: in run_stage saved = v.verify(response) c:\python35-32\lib\site-packages\tavern\_plugins\rest\response.py:207: in verify raise TestFailError("Test '{:s}' failed:\n{:s}".format(self.name, self._str_errors()), failures=self.errors) E tavern.util.exceptions.TestFailError: Test '23_03_Post P1 V1' failed: E - Value mismatch in body: Type of returned data was different than expected (expected["0"]["biosStartAddress"] = '3022853549', actual["0"]["biosStartAddress"] = '3022853549') -------------------------------------------------- Captured log call -------------------------------------------------- base.py 37 ERROR Value mismatch in body: Type of returned data was different than expected (expected["0"]["biosStartAddress"] = '3022853549', actual["0"]["biosStartAddress"] = '3022853549') =============================================== short test summary info =============================================== FAIL test_Test.tavern.yaml::Test_23_ Put B1 P1 V1 and B1 P2 V1 should be allowed ============================================== 1 failed in 1.70 
seconds =============================================== ```
1medium
Title: Doesn't work with recent version of pytorch-crf Body: Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first. Please enter all the information below, otherwise your issue may be closed without a warning. **DeepPavlov version** (you can look it up by running `pip show deeppavlov`): 1.0.0 **Python version**: 3.9.5 **Operating system** (ubuntu linux, windows, ...): Windows 11 **Issue**: Error when trying a modified example from the readme. **Content or a name of a configuration file**: See below **Command that led to error**: ``` model = build_model(deeppavlov.configs.ner.ner_collection3_bert, download=True) ``` **Error (including full traceback)**: ``` 2022-11-10 18:35:28.686 INFO in 'deeppavlov.download'['download'] at line 138: Skipped http://files.deeppavlov.ai/v1/ner/ner_rus_bert_coll3_torch.tar.gz download because of matching hashes [nltk_data] Downloading package punkt to [nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data... [nltk_data] Package punkt is already up-to-date! [nltk_data] Downloading package stopwords to [nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data... [nltk_data] Package stopwords is already up-to-date! [nltk_data] Downloading package perluniprops to [nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data... [nltk_data] Package perluniprops is already up-to-date! [nltk_data] Downloading package nonbreaking_prefixes to [nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data... [nltk_data] Package nonbreaking_prefixes is already up-to-date! 2022-11-10 18:35:31.569 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 112: [loading vocabulary from C:\Users\Ellsel\.deeppavlov\models\ner_rus_bert_coll3_torch\tag.dict] Traceback (most recent call last): File "c:\Users\Ellsel\Desktop\Automation\conversation.py", line 4, in <module> model = build_model(deeppavlov.configs.ner.ner_collection3_bert, download=True) File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\commands\infer.py", line 53, in build_model component = from_params(component_config, mode=mode) File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\common\params.py", line 92, in from_params obj = get_model(cls_name) File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\common\registry.py", line 74, in get_model return cls_from_str(_REGISTRY[name]) File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\common\registry.py", line 42, in cls_from_str return getattr(importlib.import_module(module_name), cls_name) File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 855, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\models\torch_bert\torch_transformers_sequence_tagger.py", line 28, in <module> from deeppavlov.models.torch_bert.crf import CRF File 
"C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\models\torch_bert\crf.py", line 4, in <module> from torchcrf import CRF as CRFbase ModuleNotFoundError: No module named 'torchcrf' ``` `pip install pytorch-crf==0.4.0` needed.
0easy
Title: Native Openhab support Body: https://community.openhab.org/t/amazon-dash-button-things-wont-come-online-initializing/34438/60
2hard
Title: How to generate non nullable queries? Body: This is my model and schema: ```python class AccountRegion(models.Model): name = models.CharField(_('name'), max_length=128) class AccountRegionType(DjangoObjectType): class Meta: model = AccountRegion class Query(graphene.ObjectType): account_regions = graphene.List(AccountRegionType) def resolve_account_regions(self, info): return AccountRegion.objects.all() ``` When generating the GraphQL schema using the `graphql_schema` management command, I get this output: ```graphql schema { query: Query } type AccountRegionType { id: String! name: String! } type Query { accountRegions: [AccountRegionType] } ``` What I need is to generate the query so it looks like this (notice the double `!`): ```graphql ... type Query { accountRegions: [AccountRegionType!]! } ``` If I modify my query like this: ```python class Query(graphene.ObjectType): account_regions = graphene.List(AccountRegionType, required=True) ... ``` I'm able to generate this schema: ```graphql ... type Query { accountRegions: [AccountRegionType]! } ``` But I'm not sure how to specify that within the `accountRegions` result array, the full `AccountRegionType` object will be present.
1medium
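A minimal sketch for the record above, assuming the `AccountRegionType` and `AccountRegion` definitions from that snippet: wrapping the element type in `graphene.NonNull` in addition to `required=True` should yield `[AccountRegionType!]!` in the generated schema.

```python
import graphene

class Query(graphene.ObjectType):
    # NonNull on the element type yields [AccountRegionType!];
    # required=True adds the outer !, giving [AccountRegionType!]! overall.
    account_regions = graphene.List(graphene.NonNull(AccountRegionType), required=True)

    def resolve_account_regions(self, info):
        return AccountRegion.objects.all()
```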
Title: Example Code Doesn't Work in Python 3.6.1 Body: Installed alpha_vantage from pip. `from alpha_vantage.timeseries import TimeSeries import matplotlib.pyplot as plt ts = TimeSeries(key='my key was here', output_format='pandas') data, meta_data = ts.get_intraday(symbol='MSFT',interval='1min', outputsize='full') data['close'].plot() plt.title('Intraday Times Series for the MSFT stock (1 min)') plt.show()` "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\Scripts\python.exe" "C:/Users/Doug/OneDrive/family/doug/work in progress/alphavantage/rolling returns/alpha_play.py" Traceback (most recent call last): File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\indexes\base.py", line 2525, in get_loc return self._engine.get_loc(key) File "pandas\_libs\index.pyx", line 117, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\hashtable_class_helper.pxi", line 1265, in pandas._libs.hashtable.PyObjectHashTable.get_item File "pandas\_libs\hashtable_class_helper.pxi", line 1273, in pandas._libs.hashtable.PyObjectHashTable.get_item KeyError: 'close' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:/Users/Doug/OneDrive/family/doug/work in progress/alphavantage/rolling returns/alpha_play.py", line 6, in <module> data['close'].plot() File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\frame.py", line 2139, in __getitem__ return self._getitem_column(key) File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\frame.py", line 2146, in _getitem_column return self._get_item_cache(key) File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\generic.py", line 1842, in _get_item_cache values = self._data.get(item) File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\internals.py", line 3843, in get loc = self.items.get_loc(item) File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\indexes\base.py", line 2527, in get_loc return self._engine.get_loc(self._maybe_cast_indexer(key)) File "pandas\_libs\index.pyx", line 117, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\hashtable_class_helper.pxi", line 1265, in pandas._libs.hashtable.PyObjectHashTable.get_item File "pandas\_libs\hashtable_class_helper.pxi", line 1273, in pandas._libs.hashtable.PyObjectHashTable.get_item KeyError: 'close' Process finished with exit code 1
1medium
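A hedged sketch related to the traceback above: in the pandas output format, alpha_vantage appears to use numbered column names such as `'4. close'` rather than `'close'` (an assumption here, worth verifying against `data.columns`), which would explain the `KeyError: 'close'`. Under that assumption the plotting line becomes:

```python
from alpha_vantage.timeseries import TimeSeries
import matplotlib.pyplot as plt

ts = TimeSeries(key='YOUR_API_KEY', output_format='pandas')
data, meta_data = ts.get_intraday(symbol='MSFT', interval='1min', outputsize='full')

# The pandas output uses numbered column names, so select '4. close' instead of 'close'.
data['4. close'].plot()
plt.title('Intraday Time Series for the MSFT stock (1 min)')
plt.show()
```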
Title: TYP: `timedelta64.__divmod__` incorrect inference Body: ### Describe the issue: Using `divmod` widens generic type of timedelta64. The last overload should probably use `Self` instead of `timedelta64`, or possible add an overload for the timedelta case. https://github.com/numpy/numpy/blob/6bc905859780c44193942ea2d0d297abcd691330/numpy/__init__.pyi#L4472-L4477 ### Reproduce the code example: ```python from datetime import timedelta as TD from typing import assert_type import numpy as np td = np.timedelta64(1, "D") assert_type(td, np.timedelta64[TD]) # ✅ n, remainder = divmod(td, td) assert_type(remainder, np.timedelta64[TD]) # ❌ timedelta64[timedelta | int | None] ``` ### Python and NumPy Versions: 2.2.2 3.13.1 (main, Dec 4 2024, 08:54:14) [GCC 11.4.0] ### Type-checker version and settings: mypy 1.4.1 pyright 1.1.393
1medium
Title: __init__() missing 2 required positional arguments: 'schema_name_resolver' and 'spec' Body: Hello, I'm getting the error below. Am I missing anything? ``` ../../../env36/lib64/python3.6/site-packages/flask_base/app.py:1: in <module> from flasgger import Swagger, LazyString, LazyJSONEncoder ../../../env36/lib64/python3.6/site-packages/flasgger/__init__.py:8: in <module> from .base import Swagger, Flasgger, NO_SANITIZER, BR_SANITIZER, MK_SANITIZER, LazyJSONEncoder # noqa ../../../env36/lib64/python3.6/site-packages/flasgger/base.py:37: in <module> from .utils import extract_definitions ../../../env36/lib64/python3.6/site-packages/flasgger/utils.py:22: in <module> from .marshmallow_apispec import SwaggerView ../../../env36/lib64/python3.6/site-packages/flasgger/marshmallow_apispec.py:13: in <module> openapi_converter = openapi.OpenAPIConverter(openapi_version='2.0') E TypeError: __init__() missing 2 required positional arguments: 'schema_name_resolver' and 'spec' ```
1medium
Title: Feature Request: Automated Task run at User Login Body: **Is your feature request related to a problem? Please describe.** I have a Task that reads the Logonserver of the Agent and writes it to a Custom Field. When there is no User logged in, the Script doesn't "post an answer" to the Custom Field, so the Field is empty and not visible on the Summary Tab. Currently the Script runs every day at 12:00 - on Clients there is normally a User logged in, so this is OK, but when the Script runs on a Server Agent the Field is gone. **Describe the solution you'd like** I wish there were an option to run Tasks automatically when a User signs in. I think it should be possible - the Automated Tasks run with the Windows Task Scheduler, and it has an option for that.
1medium
Title: Using CycleGAN for Chinese character style transfer Body: Hi, thank you for sharing the code - this is very good work. I would like to know whether CycleGAN can be used for Chinese character style transfer. As far as I know, zi2zi used pix2pix for this task. I need some suggestions. Thank you~^_^
3misc
Title: Must the input shape of the deformable convolution OP be fixed? If my input shape (width) is not fixed, how can I use the deformable convolution function? Body: ![image](https://user-images.githubusercontent.com/17508662/79410167-3fe42f00-7fd2-11ea-93b6-bff9fab695a6.png) ![image](https://user-images.githubusercontent.com/17508662/79410199-4ffc0e80-7fd2-11ea-8055-573836a74d52.png)
2hard
Title: LOG.old files Body: ## ❓Question I am using an [aimlflow](https://github.com/aimhubio/aimlflow) watcher to sync aim with mlflow every minute, and I found that the repository size gets quite big (1 GB for a run with ~2e5 logged metrics) because of an abundance of LOG.old files inside the meta/chunks/run_id folder. Are these necessary? Can I remove them or prevent them from being stored?
1medium
Title: Bad support for PIL images in `crop_image` APIs Body: `box.crop_image(image)` doesn't support PIL images. The current method requires manual conversion beforehand: `box.crop_image(np.array(image))`.
1medium
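A small sketch of the manual workaround mentioned above, assuming `box` is the library object whose `crop_image` expects a NumPy array (the wrapper name `crop_with_box` is hypothetical):

```python
import numpy as np
from PIL import Image

def crop_with_box(box, image):
    # Convert a PIL image to an ndarray up front, since box.crop_image()
    # currently only accepts array input (per the record above).
    if isinstance(image, Image.Image):
        image = np.array(image)
    return box.crop_image(image)
```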
Title: Using Vectorizer Model After Updating Topics (after reducing outliers) Body: Hello, do you have any advice on how to repeat the text preprocessing steps after updating topic models (following the outlier reduction technique)? I was able to get clean representative words using topic_model.visualize_barchart before using topic_model.update_topics. However, none of my vectorizer_model specifications for the original model transfer to the updated one. Are there any steps I am missing? Basically, my visualization of top words consists mainly of stopwords after updating topics, which I assume needs to be cleaned again. <img width="976" alt="Screenshot 2023-06-16 at 7 49 07 PM" src="https://github.com/MaartenGr/BERTopic/assets/127628938/5c178954-aa72-4ac7-920a-c2742c535c3c">
1medium
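A hedged sketch for the record above: `update_topics` accepts a `vectorizer_model` argument, so re-passing the original CountVectorizer when recomputing representations after outlier reduction should keep the top words clean. Here `topic_model`, `docs`, and `new_topics` (the outlier-reduced topic assignments) are assumed from the question, not defined here.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Re-use the same preprocessing when updating topic representations,
# instead of falling back to the defaults (which keep stopwords).
vectorizer_model = CountVectorizer(stop_words="english", ngram_range=(1, 2))
topic_model.update_topics(docs, topics=new_topics, vectorizer_model=vectorizer_model)
```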
Title: An important question about pre-instructions ("below is an instruction...") Body: Every training example starts with a pre-instruction prompt: "Below is an instruction that describes a task. Write a response that appropriately completes the request." (or with the +input version of the above.) I would like to understand why it is there and where this format comes from (the InstructGPT paper?). Since there are already instructions in the dataset, what purpose does it serve to prepend this additional layer, which is exactly the same for each example?
3misc
Title: [Bug] IE browser Remote code execution Body: ### Product Version v4.3.1 ### Product Edition - [X] Community Edition - [ ] Enterprise Edition - [ ] Enterprise Trial Edition ### Installation Method - [X] Online Installation (One-click command installation) - [ ] Offline Package Installation - [ ] All-in-One - [ ] 1Panel - [ ] Kubernetes - [ ] Source Code ### Environment Information Ubuntu 22.04 ### 🐛 Bug Description The internal firewall blocks the jumpserver website via the IPS (intrusion prevention system) and reports the following message: HTTP Microsoft Internet Explorer Code Execution (CVE-2018-8373). Microsoft Edge is used as the browser. ### Recurrence Steps Open the web interface with Microsoft Edge while an internal firewall IPS is enabled. ### Expected Behavior _No response_ ### Additional Information _No response_ ### Attempted Solutions Disabling IPS on the firewall for HTTP(S) traffic to the jumphost works.
2hard
Title: local tests broken because of pydantic Body: <!-- Provide a general summary of the bug in the title above. --> <!--- This template is entirely optional and can be removed, but is here to help both you and us. --> <!--- Anything on lines wrapped in comments like these will not show up in the final text. --> ## Describe the Bug Currently it isn't possible to test locally according to the contributing guide because pydantic errors crash the pytest sessions ## System Information - Operating system: archlinux - Strawberry version (if applicable): 0.209.2 ## Additional ContextA ``` ______________________________________________________________________________________________ ERROR collecting test session _______________________________________________________________________________________________ .venv/lib/python3.11/site-packages/_pytest/config/__init__.py:641: in _importconftest mod = import_path(conftestpath, mode=importmode, root=rootpath) .venv/lib/python3.11/site-packages/_pytest/pathlib.py:567: in import_path importlib.import_module(module_name) /usr/lib/python3.11/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) <frozen importlib._bootstrap>:1204: in _gcd_import ??? <frozen importlib._bootstrap>:1176: in _find_and_load ??? <frozen importlib._bootstrap>:1147: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:690: in _load_unlocked ??? .venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module self.loader.exec_module(module) .venv/lib/python3.11/site-packages/_pytest/assertion/rewrite.py:178: in exec_module exec(co, module.__dict__) tests/http/conftest.py:47: in <module> @pytest.fixture(params=_get_http_client_classes()) .venv/lib/python3.11/site-packages/_pytest/fixtures.py:1312: in fixture params=tuple(params) if params is not None else None, tests/http/conftest.py:30: in _get_http_client_classes importlib.import_module(f".{module}", package="tests.http.clients"), /usr/lib/python3.11/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) <frozen importlib._bootstrap>:1204: in _gcd_import ??? <frozen importlib._bootstrap>:1176: in _find_and_load ??? <frozen importlib._bootstrap>:1147: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:690: in _load_unlocked ??? .venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module self.loader.exec_module(module) <frozen importlib._bootstrap_external>:940: in exec_module ??? <frozen importlib._bootstrap>:241: in _call_with_frames_removed ??? tests/http/clients/starlite.py:9: in <module> from starlite import Request, Starlite <frozen importlib._bootstrap>:1176: in _find_and_load ??? <frozen importlib._bootstrap>:1147: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:690: in _load_unlocked ??? .venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module self.loader.exec_module(module) .venv/lib/python3.11/site-packages/starlite/__init__.py:1: in <module> from starlite.app import Starlite <frozen importlib._bootstrap>:1176: in _find_and_load ??? <frozen importlib._bootstrap>:1147: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:690: in _load_unlocked ??? 
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module self.loader.exec_module(module) .venv/lib/python3.11/site-packages/starlite/app.py:6: in <module> from pydantic_openapi_schema import construct_open_api_with_schema_class <frozen importlib._bootstrap>:1176: in _find_and_load ??? <frozen importlib._bootstrap>:1147: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:690: in _load_unlocked ??? .venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module self.loader.exec_module(module) .venv/lib/python3.11/site-packages/pydantic_openapi_schema/__init__.py:1: in <module> from . import v3_1_0 <frozen importlib._bootstrap>:1176: in _find_and_load ??? <frozen importlib._bootstrap>:1147: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:690: in _load_unlocked ??? .venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module self.loader.exec_module(module) .venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/__init__.py:9: in <module> from .components import Components <frozen importlib._bootstrap>:1176: in _find_and_load ??? <frozen importlib._bootstrap>:1147: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:690: in _load_unlocked ??? .venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module self.loader.exec_module(module) .venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/components.py:7: in <module> from .header import Header <frozen importlib._bootstrap>:1176: in _find_and_load ??? <frozen importlib._bootstrap>:1147: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:690: in _load_unlocked ??? .venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module self.loader.exec_module(module) .venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/header.py:8: in <module> class Header(Parameter): .venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/header.py:19: in Header name: Literal[""] = Field(default="", const=True) .venv/lib/python3.11/site-packages/pydantic/fields.py:757: in Field raise PydanticUserError('`const` is removed, use `Literal` instead', code='removed-kwargs') E pydantic.errors.PydanticUserError: `const` is removed, use `Literal` instead E E For further information visit https://errors.pydantic.dev/2.3/u/removed-kwargs ```
2hard
Title: Azure OAuth CSRF State Not Equal Error Body: If you'd like to report a bug in Flask-Appbuilder, fill out the template below. Provide any extra information that may be useful Responsible disclosure: We want to keep Flask-AppBuilder safe for everyone. If you've discovered a security vulnerability please report to [email protected]. ### Environment Flask-Appbuilder version: pip freeze output: Flask-Appbuilder version==4.1.4 ### Describe the expected results We are currently running Airflow 2.4.3 on Kubernetes with the Airflow Community helm chart version 8.6.1 (located here: https://github.com/airflow-helm/charts). We have enabled Azure OAuth authentication for our webserver. This should bring up our webserver with an "login with azure" button and we should be able to click it and log in just fine. This is our webserver_config that we are using: ``` from flask_appbuilder.security.manager import AUTH_OAUTH from airflow.www.security import AirflowSecurityManager import logging from typing import Dict, Any, List, Union import os import sys #Add this as a module to pythons path sys.path.append('/opt/airflow') log = logging.getLogger(__name__) log.setLevel(os.getenv("AIRFLOW__LOGGING__FAB_LOGGING_LEVEL", "DEBUG")) class AzureCustomSecurity(AirflowSecurityManager): # In this example, the oauth provider == 'azure'. # If you ever want to support other providers, see how it is done here: # https://github.com/dpgaspar/Flask-AppBuilder/blob/master/flask_appbuilder/security/manager.py#L550 def get_oauth_user_info(self, provider, resp): # Creates the user info payload from Azure. # The user previously allowed your app to act on their behalf, # so now we can query the user and teams endpoints for their data. # Username and team membership are added to the payload and returned to FAB. if provider == "azure": log.debug("Azure response received : {0}".format(resp)) id_token = resp["id_token"] log.debug(str(id_token)) me = self._azure_jwt_token_parse(id_token) log.debug("Parse JWT token : {0}".format(me)) return { "name": me.get("name", ""), "email": me["upn"], "first_name": me.get("given_name", ""), "last_name": me.get("family_name", ""), "id": me["oid"], "username": me["oid"], "role_keys": me.get("roles", []), } # Adding this in because if not the redirect url will start with http and we want https os.environ["AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX"] = "True" WTF_CSRF_ENABLED = False CSRF_ENABLED = False AUTH_TYPE = AUTH_OAUTH AUTH_ROLES_SYNC_AT_LOGIN = True # Checks roles on every login # Make sure to replace this with the path to your security manager class FAB_SECURITY_MANAGER_CLASS = "webserver_config.AzureCustomSecurity" # a mapping from the values of `userinfo["role_keys"]` to a list of FAB roles AUTH_ROLES_MAPPING = { "airflow_dev_admin": ["Admin"], "airflow_dev_op": ["Op"], "airflow_dev_user": ["User"], "airflow_dev_viewer": ["Viewer"] } # force users to re-auth after 30min of inactivity (to keep roles in sync) PERMANENT_SESSION_LIFETIME = 1800 # If you wish, you can add multiple OAuth providers. 
OAUTH_PROVIDERS = [ { "name": "azure", "icon": "fa-windows", "token_key": "access_token", "remote_app": { "client_id": "CLIENT_ID", "client_secret": 'AZURE_DEV_CLIENT_SECRET', "api_base_url": "https://login.microsoftonline.com/TENANT_ID", "request_token_url": None, 'request_token_params': { 'scope': 'openid email profile' }, "access_token_url": "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token", "access_token_params": { 'scope': 'openid email profile' }, "authorize_url": "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/authorize", "authorize_params": { 'scope': 'openid email profile', }, 'jwks_uri':'https://login.microsoftonline.com/common/discovery/v2.0/keys', }, }, ] ``` ### Describe the actual results Instead, we are getting this error after we click the Azure button: [2022-11-28 22:04:58,744] {views.py:659} ERROR - Error authorizing OAuth access token: mismatching_state: CSRF Warning! State not equal in request and response. airflow-web [2022-11-28 22:04:58,744] {views.py:659} ERROR - Error authorizing OAuth access token: mismatching_state: CSRF Warning! State not equal in request and response. ### Steps to reproduce Running Airflow 2.4.3 on Kubernetes with the Airflow Community helm chart version 8.6.1 and using the webserver_config file like above. When the webserver is running, you click on the "login to azure" button. ### Additional Comments I already posted an issue like this in the Airflow repo, and they said this could more then likely be a Flask problem, which is why I am making this issue here. If any other information is needed please let me know
1medium
Title: Issue accessing assets when deploying Skyvern via Helm on Kubernetes Body: Hello, I am trying to deploy Skyvern via Helm in my Kubernetes cluster. The installation runs in a Chromium headless environment. (ubuntu server without GUI) I have other applications in this cluster, and they are accessible using a context like https://mydns/myapp. I would like to access Skyvern at https://mydns/skyvern. When I visit https://mydns/skyvern, I receive a 200 response on this URL, but a 404 error for https://mydns/assets/index-BrsxbjwQ.js. It seems that the skyvern prefix is removed so that the application tries to access resources at the root (thanks to [StripPrefix middleware in Traefik](https://doc.traefik.io/traefik/middlewares/http/stripprefix/)). Do you have any idea how to resolve this issue? Thanks!
1medium
Title: tensorly on large-scale tensors? Body: Hi team, thanks for this nice repo! I'm wondering whether tensorly actually supports decomposition of large tensors. I'm trying to run parafac on an (N,N,2) tensor, where N is as large as 10k. It can run with rank 2, but with anything higher I don't have enough memory. Is it because tensorly does all the computation in dense format, so it is hard to scale up? Any thoughts on how I can run parafac on large tensors? Thanks!
2hard
Title: [fix] string to PydanticObjectId conversion Body: Details: ```python bar = await Product.get("608da169eb9e17281f0ab2ff") # not working, bar = await Product.get(PydanticObjectId("608da169eb9e17281f0ab2ff")) # working. ```
1medium
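A minimal sketch around the snippet above, assuming the `Product` document class from the issue: coercing the raw string into a `PydanticObjectId` before calling `get` already works, so a small hypothetical helper can hide the conversion until plain strings are accepted.

```python
from beanie import PydanticObjectId

async def get_product_by_id(raw_id: str):
    # Product.get() does not coerce plain strings (per the record above),
    # so convert explicitly before querying.
    return await Product.get(PydanticObjectId(raw_id))
```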
Title: Can't parse ".serverless/requirements/xlrd/biffh.py" unless encoding is latin Body: <!--- Provide a general summary of the issue in the Title above --> ## Context when detect_flask is called during zappa init, it fails on an encoding issue because of the commented out block at the head of xlrd/biffh.py <!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug --> <!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 --> ## Expected Behavior <!--- Tell us what should happen --> It should not error out. ## Actual Behavior <!--- Tell us what happens instead --> It fails with an encoding exception at f.readlines() ## Possible Fix <!--- Not obligatory, but suggest a fix or reason for the bug --> just add encoding='latin' to the open call ## Steps to Reproduce <!--- Provide a link to a live example, or an unambiguous set of steps to --> <!--- reproduce this bug include code to reproduce, if relevant --> 1. have xlrd as a dependency 2. call zappa init 3. ## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> * Zappa version used: * Operating System and Python version: OSX, python 3 * The output of `pip freeze`: * Link to your project (optional): * Your `zappa_settings.py`:
1medium
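A short sketch of the fix suggested in the record above, under the assumption that the failing call is a plain `open()`/`readlines()` over each package file during `zappa init`: passing an explicit Latin-1 encoding avoids the crash on files like `xlrd/biffh.py`. The helper name `read_lines` is hypothetical.

```python
def read_lines(path):
    # latin-1 maps every byte to a character, so it never raises UnicodeDecodeError.
    with open(path, encoding="latin-1") as f:
        return f.readlines()
```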
Title: Does session.unfollow_users have a white_list option so those users are not unfollowed when running unfollow all? Body: ## InstaPy configuration I see in unfollow_util.py there are lines of code where a white_list is set so that a user account is not unfollowed if it is in the list. How do I use it? Sample code: session.unfollow_users(white_list = ['user', 'user', 'user']) - if that is what it is for. Thank you. TypeError: unfollow_users() got an unexpected keyword argument 'white_list'
1medium
Title: How to disable Bayesian optimization? Body: Is there a way to disable the Bayesian optimization subroutine when fitting on a new dataset? I am curious how the performance would differ without such fine-tuning. Thanks!
1medium
Title: Login fails with an error; the web version can log in normally Body: - It had been working normally before, with regular restarts - After scanning the QR code, the wait time is very long - This happened once before; after clearing the cache and removing a batch of friends on the phone, it worked normally again ## Log ``` Getting uuid of QR code. Downloading QR code. Please scan the QR code to log in. Please press confirm on your phone. Loading the contact, this may take a little while. Traceback (most recent call last): File "test.py", line 4, in <module> bot = Bot() File "/usr/local/lib/python3.5/site-packages/wxpy/api/bot.py", line 86, in __init__ loginCallback=login_callback, exitCallback=logout_callback File "/usr/local/lib/python3.5/site-packages/itchat/components/register.py", line 35, in auto_login loginCallback=loginCallback, exitCallback=exitCallback) File "/usr/local/lib/python3.5/site-packages/itchat/components/login.py", line 67, in login self.get_contact(True) File "/usr/local/lib/python3.5/site-packages/itchat/components/contact.py", line 284, in get_contact seq, batchMemberList = _get_contact(seq) File "/usr/local/lib/python3.5/site-packages/itchat/components/contact.py", line 280, in _get_contact j = json.loads(r.content.decode('utf-8', 'replace')) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/__init__.py", line 319, in loads return _default_decoder.decode(s) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/decoder.py", line 339, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/decoder.py", line 357, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) ```
1medium
Title: Add devices by IP address range Body: Hey, first off, thanks for antminer-monitor! It's uncomplicated software that does its job well! Second, we were curious about adding devices by IP address range, for example: 192.168.69.101-200: antminer S9 192.168.70.1-250: antminer L3 Or, is there a way to add these devices in a range using the command line that I just don't know about yet? We might be able to add this functionality and submit a PR, if you can point us in the right direction. Thanks!
1medium
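A small sketch of how the requested range notation could be expanded, purely as an illustration (the `expand_ip_range` helper and the `192.168.69.101-200` notation come from the request above, not from antminer-monitor itself):

```python
def expand_ip_range(spec: str) -> list[str]:
    """Expand a spec like '192.168.69.101-200' into individual IPv4 address strings."""
    base, last = spec.rsplit("-", 1)      # '192.168.69.101', '200'
    prefix, first = base.rsplit(".", 1)   # '192.168.69', '101'
    return [f"{prefix}.{i}" for i in range(int(first), int(last) + 1)]

# expand_ip_range("192.168.69.101-200") -> ['192.168.69.101', ..., '192.168.69.200']
```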
Title: Implement multiple shadows Body: - Implement multiple shadows on planes, using the projection algorithm from #191. ![Screen Shot 2021-12-15 at 3 40 59 PM](https://user-images.githubusercontent.com/9458157/146147924-c904144f-2a27-4b44-a672-c83c834506ac.png) ![Screen Shot 2021-12-15 at 3 30 14 PM](https://user-images.githubusercontent.com/9458157/146147969-64cdad41-2142-434f-8a4f-110016092cae.png) - But there is more to do, such as clipping points which are inside the plane. ![Screen Shot 2021-12-15 at 4 17 25 PM](https://user-images.githubusercontent.com/9458157/146149740-51ea21df-ee68-4c6d-a31f-0f7be83db347.png)
2hard
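A generic sketch of the kind of plane projection the record above describes, offered only as an illustration; the actual algorithm from #191 is not reproduced here, and the function name `project_onto_plane` is hypothetical. A shadow point is found by intersecting the ray from the light through each vertex with the plane.

```python
import numpy as np

def project_onto_plane(point, light, plane_point, plane_normal):
    """Project `point` onto a plane along the ray from `light` through `point`.

    Returns None when the ray is parallel to the plane (no finite shadow point).
    """
    direction = point - light
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_normal, plane_point - light) / denom
    return light + t * direction
```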
Title: Could you provide more dygraph pre-trained BERT models? A script to convert static-graph models to dygraph would also work. Body: https://github.com/PaddlePaddle/models/tree/release/1.8/dygraph/bert Currently there is only: ![image](https://user-images.githubusercontent.com/26199465/88629546-f8496500-d0e1-11ea-9167-7a05bdb3d7ae.png) It loads and works normally, but there is no Chinese BERT. Could you release the conversion script?
1medium
Title: Auto-create command-line args from YAMLs with fairseq-hydra-train Body: ## 🚀 Feature Request Fairseq is a powerful framework, but I ran into some problems when using fairseq-hydra-train. If I want to define an arg in the config YAML file, I have to define it in some dataclass first. This is cumbersome and slow compared with Hydra's ability to auto-create command-line args. Furthermore, it makes it impossible to use Hydra's support for multi-level nested parameters. How can I access the full power of Hydra with Fairseq?
1medium
Title: inital list of Requests dont all get handled? Body: request queue not being handled. So when i have a list of Requests with different labels it will only handle the labels of the first request it sees. Am I doing something wrong or is this a bug? Here in the logs I queue up 4 things ``` DEBUG Added 4 requests to the queue, response: processed_requests=[ProcessedRequest(id='WCJHwnKoF1xWGYF', unique_key='https://URL1', was_already_present=False, was_already_handled=False), ProcessedRequest(id='ntWUsKSPofbfOU2', unique_key='https://URL2', was_already_present=False, was_already_handled=False), ProcessedRequest(id='xMI8mB7yETk8KJz', unique_key='https://URL3', was_already_present=False, was_already_handled=False), ProcessedRequest(id='2iT82Knl0Rr4qEi', unique_key='https://en.wikipedia.org/wiki/cgroups', was_already_present=False, was_already_handled=False)] unprocessed_requests=[] [crawlee.memory_storage_client._memory_storage_client] DEBUG Storage was already purged on start. [crawlee._autoscaling.autoscaled_pool] DEBUG Starting the pool ``` but then it only process the first 2 with label=JSON ``` │ requests_finished │ 2 │ │ requests_failed │ 0 │ │ retry_histogram │ [2] │ │ request_avg_failed_duration │ None │ │ request_avg_finished_duration │ 1.623642 │ │ requests_finished_per_minute │ 63 │ │ requests_failed_per_minute │ 0 │ │ request_total_duration │ 3.247283 │ │ requests_total │ 2 │ │ crawler_runtime │ 1.919938 │ ``` Heres my list i sent to get queued.NOw the strange thing is if i comment out the first two Request the other two work. and when i put the HTML lable on top ```python [ Request.from_url( url="https://sightmap.com/app/api/v1/8epml7q1v6d/sightmaps/80524", label="JSON", user_data={"building_id": 1}, ), Request.from_url( url="https://sightmap.com/app/api/v1/60p7q39nw7n/sightmaps/397", label="JSON", user_data={"building_id": 2}, ), Request.from_url( url="https://www.windsorcommunities.com/properties/windsor-on-the-lake/floorplans/", label="HTML", user_data={"building_id": 3}, ), Request.from_url( url="https://en.wikipedia.org/wiki/Cgroups", label="HTML", user_data={"building_id": 3}, ), ] ``` Heres my router.py handlers ```python @router.default_handler async def default_handler(context: BeautifulSoupCrawlingContext) -> None: """Default request handler.""" building_id = context.request.user_data.model_extra.get("building_id") logger.info("PASSING", url={context.request.url}, building_id=building_id) @router.handler("HTML") async def html_handler(context: BeautifulSoupCrawlingContext) -> None: """Default request handler.""" building_id = context.request.user_data.model_extra.get("building_id") logger.info("Handling", url={context.request.url}, building_id=building_id) http_response = context.http_response content = http_response.read() if http_response else None if content: try: content_str = content.decode("utf-8") # BeautifulSoup will fixes invalid HTML if content_str == str(BeautifulSoup(content_str, "html.parser")): logger.error("Invalid HTML content.") raise Exception("Invalid HTML content.") else: logger.debug("Valid HTML content.") except Exception as e: logger.error( "An error occurred while parsing HTML content.", error=str(e), url=context.request.url, ) raise e else: # Not sure if none content is already handled by crawlee doesn't hurt to have it here logger.error("No content fetched.", url=context.request.url) raise Exception("No content fetched.") await save_scrape_response(context, content_str) @router.handler("JSON") async def json_handler(context: 
BeautifulSoupCrawlingContext) -> None: """Default request handler.""" building_id = context.request.user_data.model_extra.get("building_id") logger.info("Handling", url={context.request.url}, building_id=building_id) http_response = context.http_response try: json_content = json.load(http_response) except json.JSONDecodeError: json_content = None logger.error("Invalid JSON content.", url=context.request.url) # We should save invalid page for debugging? # They get saved in the logs maybe future we pump them to a bad_responses bucket? await save_scrape_response(context, json_content) ``` <details><summary>Logs</summary> <p> ``` DEBUG Added 4 requests to the queue, response: processed_requests=[ProcessedRequest(id='WCJHwnKoF1xWGYF', unique_key='https://URL1', was_already_present=False, was_already_handled=False), ProcessedRequest(id='ntWUsKSPofbfOU2', unique_key='https://URL2', was_already_present=False, was_already_handled=False), ProcessedRequest(id='xMI8mB7yETk8KJz', unique_key='https://URL3', was_already_present=False, was_already_handled=False), ProcessedRequest(id='2iT82Knl0Rr4qEi', unique_key='https://en.wikipedia.org/wiki/cgroups', was_already_present=False, was_already_handled=False)] unprocessed_requests=[] [crawlee.memory_storage_client._memory_storage_client] DEBUG Storage was already purged on start. [crawlee._autoscaling.autoscaled_pool] DEBUG Starting the pool [crawlee.beautifulsoup_crawler._beautifulsoup_crawler] INFO Current request statistics: ┌───────────────────────────────┬──────────┐ │ requests_finished │ 0 │ │ requests_failed │ 0 │ │ retry_histogram │ [0] │ │ request_avg_failed_duration │ None │ │ request_avg_finished_duration │ None │ │ requests_finished_per_minute │ 0 │ │ requests_failed_per_minute │ 0 │ │ request_total_duration │ 0.0 │ │ requests_total │ 0 │ │ crawler_runtime │ 0.013797 │ └───────────────────────────────┴──────────┘ [crawlee._autoscaling.autoscaled_pool] INFO current_concurrency = 0; desired_concurrency = 2; cpu = 0; mem = 0; event_loop = 0.0; client_info = 0.0 [crawlee._utils.system] DEBUG Calling get_cpu_info()... [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task... [crawlee.statistics._statistics] DEBUG Persisting state of the Statistics (event_data=is_migrating=False). [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed. [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set... 
[crawlee.storages._request_queue] DEBUG Queue head still returned requests that need to be processed (or that are locked by other clients) [crawlee._autoscaling.autoscaled_pool] DEBUG Scheduling a new task [crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 4}) [crawlee._autoscaling.autoscaled_pool] DEBUG Scheduling a new task [crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 4}) [crawlee._autoscaling.autoscaled_pool] DEBUG Not scheduling new tasks - already running at desired concurrency [httpx] DEBUG load_ssl_context verify=True cert=None trust_env=True http2=False [httpx] DEBUG load_verify_locations cafile='/home/vscode/.cache/pypoetry/virtualenvs/bs-crawler-7DgAT4g4-py3.12/lib/python3.12/site-packages/certifi/cacert.pem' [httpcore.connection] DEBUG connect_tcp.started host='sightmap.com' port=443 local_address=None timeout=5.0 socket_options=None [httpcore.connection] DEBUG connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0xffff8df70980> [httpcore.connection] DEBUG start_tls.started ssl_context=<ssl.SSLContext object at 0xffff8e1b9550> server_hostname='sightmap.com' timeout=5.0 [crawlee._utils.system] DEBUG Calling get_memory_info()... [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task... [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task... [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed. [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set... [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed. [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set... 
[httpcore.connection] DEBUG start_tls.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0xffff8df73800> [httpcore.http2] DEBUG send_connection_init.started request=<Request [b'GET']> [httpcore.http2] DEBUG send_connection_init.complete [httpcore.http2] DEBUG send_request_headers.started request=<Request [b'GET']> stream_id=1 [hpack.hpack] DEBUG Adding (b':method', b'GET') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 2 with 7 bits [hpack.hpack] DEBUG Adding (b':authority', b'sightmap.com') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 1 with 6 bits [hpack.hpack] DEBUG Encoding 9 with 7 bits [hpack.hpack] DEBUG Adding (b':scheme', b'https') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 7 with 7 bits [hpack.hpack] DEBUG Adding (b':path', b'/app/api/v1/8epml7q1v6d/sightmaps/80524') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 4 with 6 bits [hpack.hpack] DEBUG Encoding 28 with 7 bits [hpack.hpack] DEBUG Adding (b'accept-encoding', b'gzip, deflate, br') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 16 with 6 bits [hpack.hpack] DEBUG Encoding 13 with 7 bits [hpack.hpack] DEBUG Adding (b'accept', b'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 19 with 6 bits [hpack.hpack] DEBUG Encoding 101 with 7 bits [hpack.hpack] DEBUG Adding (b'accept-language', b'en-US,en;q=0.9') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 17 with 6 bits [hpack.hpack] DEBUG Encoding 11 with 7 bits [hpack.hpack] DEBUG Adding (b'user-agent', b'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 58 with 6 bits [hpack.hpack] DEBUG Encoding 92 with 7 bits [hpack.hpack] DEBUG Encoded header block to b'\x82A\x89A\xa6\x9d4\x8e\xb5\xc8z\x7f\x87D\x9c`u\xd6\xc0\xeb3\x1d\xc2\xc3\xc5\xae\x9a\x1d\xec\x1e\xeeH\xc2\r4\xe9\xa4u\xa1\x87\x80\xd8\x9aP\x8d\x9b\xd9\xab\xfaRB\xcb@\xd2_\xa5#\xb3S\xe5I|\xa5\x89\xd3M\x1fC\xae\xba\x0cA\xa4\xc7\xa9\x8f3\xa6\x9a?\xdf\x9ah\xfa\x1du\xd0b\r&=Ly\xa6\x8f\xbe\xd0\x01w\xfe\x8dH\xe6+\x03\xeei~\x8dH\xe6+\x1e\x0b\x1d\x7fF\xa4s\x15\x81\xd7T\xdf_,|\xfd\xf6\x80\x0b\xbd\xf4:\xeb\xa0\xc4\x1aLz\x98A\xa6\xa8\xb2,_$\x9cuL_\xbe\xf0F\xcf\xdfh\x00\xbb\xbfQ\x8b-Kp\xdd\xf4Z\xbe\xfb@\x05\xdfz\xdc\xd0\x7ff\xa2\x81\xb0\xda\xe0S\xfa\xd02\x1a\xa4\x9d\x13\xfd\xa9\x92\xa4\x96\x854\x0c\x8aj\xdc\xa7\xe2\x81\x04Aj \xffjC]t\x17\x91c\xccd\xb0\xdb.\xae\xcb\x8a\x7fY\xb1\xef\xd1\x9f\xe9J\r\xd4\xaab):\x9f\xfbR\xf4\xf6\x1e\x92\xb0\xebk\x81v]t\x0b\x85\xa1)\xb8r\x8e\xc30\xdb.\xae\xcb\x9f' [httpcore.http2] DEBUG send_request_headers.complete [httpcore.http2] DEBUG send_request_body.started request=<Request [b'GET']> stream_id=1 [httpcore.http2] DEBUG send_request_body.complete [httpcore.http2] DEBUG receive_response_headers.started request=<Request [b'GET']> stream_id=1 [httpcore.http2] DEBUG receive_remote_settings.started [httpcore.http2] DEBUG receive_remote_settings.complete return_value=<RemoteSettingsChanged changed_settings:{ChangedSetting(setting=3, original_value=None, new_value=128), ChangedSetting(setting=4, original_value=65535, new_value=65536), ChangedSetting(setting=5, original_value=16384, 
new_value=16777215)}> [httpcore.http2] DEBUG send_request_headers.started request=<Request [b'GET']> stream_id=3 [hpack.hpack] DEBUG Adding (b':method', b'GET') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 2 with 7 bits [hpack.hpack] DEBUG Adding (b':authority', b'sightmap.com') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 67 with 7 bits [hpack.hpack] DEBUG Adding (b':scheme', b'https') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 7 with 7 bits [hpack.hpack] DEBUG Adding (b':path', b'/app/api/v1/60p7q39nw7n/sightmaps/397') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 4 with 6 bits [hpack.hpack] DEBUG Encoding 27 with 7 bits [hpack.hpack] DEBUG Adding (b'accept-encoding', b'gzip, deflate, br') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 66 with 7 bits [hpack.hpack] DEBUG Adding (b'accept', b'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 65 with 7 bits [hpack.hpack] DEBUG Adding (b'accept-language', b'en-US,en;q=0.9') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 64 with 7 bits [hpack.hpack] DEBUG Adding (b'user-agent', b'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36') to the header table, sensitive:False, huffman:True [hpack.hpack] DEBUG Encoding 63 with 7 bits [hpack.hpack] DEBUG Encoded header block to b'\x82\xc3\x87D\x9b`u\xd6\xc0\xeb3\x1d\xc2\xc3\x80\xad\xde\xcc\xbfW\x87ja\x06\x9at\xd2:\xd0\xc3/\xbb\xc2\xc1\xc0\xbf' [httpcore.http2] DEBUG send_request_headers.complete [httpcore.http2] DEBUG send_request_body.started request=<Request [b'GET']> stream_id=3 [httpcore.http2] DEBUG send_request_body.complete [httpcore.http2] DEBUG receive_response_headers.started request=<Request [b'GET']> stream_id=3 [hpack.hpack] DEBUG Decoding b' \x88a\x96\xdc4\xfd( \xa9|\xa4P@\x13J\x05\xfb\x80\r\xc1>\xa6-\x1b\xff_\x8b\x1du\xd0b\r&=LtA\xea\x00\x85Al\xee[?\x84\xaacU\xe7\x00\x04vary\x8b\x84\x84-i[\x05D<\x86\xaao\x00\x89 \xc99V!\xeaM\x87\xa3\x8c\xa8\xeb\x10d\x9c\xbfJWa\xbb\x8d%\x00\x8b!\xeaIjJ\xc5\xa8\x87\x90\xd5M\x83\x9b\xd9\xab' [hpack.hpack] DEBUG Decoded 0, consumed 1 bytes [hpack.table] DEBUG Resizing header table to 0 from 4096 [hpack.hpack] DEBUG Decoded 8, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b':status', b'200'), consumed 1 [hpack.hpack] DEBUG Decoded 33, consumed 1 bytes [hpack.hpack] DEBUG Decoded 22, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b'date', b'Sat, 21 Dec 2024 19:01:29 GMT'), total consumed 24 bytes, indexed True [hpack.hpack] DEBUG Decoded 31, consumed 1 bytes [hpack.hpack] DEBUG Decoded 11, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b'content-type', b'application/json'), total consumed 13 bytes, indexed True [hpack.hpack] DEBUG Decoded 5, consumed 1 bytes [hpack.hpack] DEBUG Decoded 4, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b'server', b'nginx'), total consumed 12 bytes, indexed False [hpack.hpack] DEBUG Decoded 4, consumed 1 bytes [hpack.hpack] DEBUG Decoded 11, consumed 1 bytes [hpack.hpack] DEBUG Decoded (<memory at 0xffff8dfd0c40>, b'Accept-Encoding'), total consumed 18 bytes, indexed False [hpack.hpack] DEBUG Decoded 9, consumed 1 bytes [hpack.hpack] DEBUG Decoded 12, consumed 1 bytes [hpack.hpack] DEBUG 
Decoded (b'cache-control', b'no-cache, private'), total consumed 24 bytes, indexed False [hpack.hpack] DEBUG Decoded 11, consumed 1 bytes [hpack.hpack] DEBUG Decoded 3, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b'content-encoding', b'gzip'), total consumed 17 bytes, indexed False [httpcore.http2] DEBUG receive_response_headers.complete return_value=(200, [(b'date', b'Sat, 21 Dec 2024 19:01:29 GMT'), (b'content-type', b'application/json'), (b'server', b'nginx'), (b'vary', b'Accept-Encoding'), (b'cache-control', b'no-cache, private'), (b'content-encoding', b'gzip')]) [httpx] INFO HTTP Request: GET https://URL1 "HTTP/2 200 OK" [httpcore.http2] DEBUG receive_response_body.started request=<Request [b'GET']> stream_id=1 [hpack.hpack] DEBUG Decoding b'\x88a\x96\xdc4\xfd( \xa9|\xa4P@\x13J\x05\xfb\x80\r\xc1>\xa6-\x1b\xff_\x8b\x1du\xd0b\r&=LtA\xea\x00\x85Al\xee[?\x84\xaacU\xe7\x00\x04vary\x8b\x84\x84-i[\x05D<\x86\xaao\x00\x89 \xc99V!\xeaM\x87\xa3\x8c\xa8\xeb\x10d\x9c\xbfJWa\xbb\x8d%\x00\x8b!\xeaIjJ\xc5\xa8\x87\x90\xd5M\x83\x9b\xd9\xab' [hpack.hpack] DEBUG Decoded 8, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b':status', b'200'), consumed 1 [hpack.hpack] DEBUG Decoded 33, consumed 1 bytes [hpack.hpack] DEBUG Decoded 22, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b'date', b'Sat, 21 Dec 2024 19:01:29 GMT'), total consumed 24 bytes, indexed True [hpack.hpack] DEBUG Decoded 31, consumed 1 bytes [hpack.hpack] DEBUG Decoded 11, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b'content-type', b'application/json'), total consumed 13 bytes, indexed True [hpack.hpack] DEBUG Decoded 5, consumed 1 bytes [hpack.hpack] DEBUG Decoded 4, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b'server', b'nginx'), total consumed 12 bytes, indexed False [hpack.hpack] DEBUG Decoded 4, consumed 1 bytes [hpack.hpack] DEBUG Decoded 11, consumed 1 bytes [hpack.hpack] DEBUG Decoded (<memory at 0xffff8ddf4580>, b'Accept-Encoding'), total consumed 18 bytes, indexed False [hpack.hpack] DEBUG Decoded 9, consumed 1 bytes [hpack.hpack] DEBUG Decoded 12, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b'cache-control', b'no-cache, private'), total consumed 24 bytes, indexed False [hpack.hpack] DEBUG Decoded 11, consumed 1 bytes [hpack.hpack] DEBUG Decoded 3, consumed 1 bytes [hpack.hpack] DEBUG Decoded (b'content-encoding', b'gzip'), total consumed 17 bytes, indexed False [httpcore.http2] DEBUG receive_response_body.complete [httpcore.http2] DEBUG response_closed.started stream_id=1 [httpcore.http2] DEBUG receive_response_headers.complete return_value=(200, [(b'date', b'Sat, 21 Dec 2024 19:01:29 GMT'), (b'content-type', b'application/json'), (b'server', b'nginx'), (b'vary', b'Accept-Encoding'), (b'cache-control', b'no-cache, private'), (b'content-encoding', b'gzip')]) [httpx] INFO HTTP Request: GET https://URL2 "HTTP/2 200 OK" [httpcore.http2] DEBUG receive_response_body.started request=<Request [b'GET']> stream_id=3 [httpcore.http2] DEBUG response_closed.complete [httpcore.http2] DEBUG receive_response_body.complete [httpcore.http2] DEBUG response_closed.started stream_id=3 [httpcore.http2] DEBUG response_closed.complete [crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 2}) [crawlee._autoscaling.autoscaled_pool] DEBUG Not scheduling new tasks - already running at desired concurrency {"url": "{'https://URL2'}", "building_id": 2, "message": "Handling", "time": "2024-12-21T19:01:29.725513Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": 
{"file": "/workspaces/AustinRent/scraper/scraper/routes.py", "line": "99", "function": "routes:json_handler"}} [urllib3.util.retry] DEBUG Converted retries value: 3 -> Retry(total=3, connect=None, read=None, redirect=None, status=None) [urllib3.connectionpool] DEBUG Starting new HTTPS connection (1): oauth2.googleapis.com:443 [urllib3.connectionpool] DEBUG https://oauth2.googleapis.com:443 "POST /token HTTP/11" 200 None [urllib3.connectionpool] DEBUG Starting new HTTPS connection (1): storage.googleapis.com:443 [urllib3.connectionpool] DEBUG https://storage.googleapis.com:443 "POST /upload/storage/v1/b/scraper-responses/o?uploadType=multipart HTTP/11" 200 968 {"destination_blob_name": "0193ea98-99ff-8c6e-b37f-cfd9e7568bb1.json", "building_id": 2, "message": "String content uploaded", "time": "2024-12-21T19:01:30.056656Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/utils/bucket_utils.py", "line": "32", "function": "bucket_utils:upload_string_to_gcs"}} 2024-12-21 19:01:30,068 INFO sqlalchemy.engine.Engine BEGIN (implicit) [sqlalchemy.engine.Engine] INFO BEGIN (implicit) ({"message": "BEGIN (implicit)", "asctime": "2024-12-21 19:01:30,068"}) 2024-12-21 19:01:30,079 INFO sqlalchemy.engine.Engine INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at [sqlalchemy.engine.Engine] INFO INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at ({"message": "INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at", "asctime": "2024-12-21 19:01:30,079"}) 2024-12-21 19:01:30,079 INFO sqlalchemy.engine.Engine [generated in 0.00081s] (UUID('0193ea98-99ff-8c6e-b37f-cfd9e7568bb1'), 'https://URL2', 'https://URL2', 2, 0) [sqlalchemy.engine.Engine] INFO [generated in 0.00081s] (UUID('0193ea98-99ff-8c6e-b37f-cfd9e7568bb1'), 'https://URL2', 'https://URL2', 2, 0) ({"message": "[generated in 0.00081s] (UUID('0193ea98-99ff-8c6e-b37f-cfd9e7568bb1'), 'https://URL2', 'https://URL2', 2, 0)", "asctime": "2024-12-21 19:01:30,079"}) {"url": "{'https://URL1'}", "building_id": 1, "message": "Handling", "time": "2024-12-21T19:01:30.081275Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/routes.py", "line": "99", "function": "routes:json_handler"}} [urllib3.connectionpool] DEBUG https://storage.googleapis.com:443 "POST /upload/storage/v1/b/scraper-responses/o?uploadType=multipart HTTP/11" 200 968 {"destination_blob_name": "0193ea98-9b69-8046-96af-dc9893ff15c6.json", "building_id": 1, "message": "String content uploaded", "time": "2024-12-21T19:01:30.332999Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/utils/bucket_utils.py", "line": "32", "function": "bucket_utils:upload_string_to_gcs"}} [crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 2}) [crawlee._autoscaling.autoscaled_pool] DEBUG Not scheduling new tasks - system is 
overloaded [crawlee._utils.system] DEBUG Calling get_cpu_info()... 2024-12-21 19:01:30,408 INFO sqlalchemy.engine.Engine COMMIT [sqlalchemy.engine.Engine] INFO COMMIT ({"message": "COMMIT", "asctime": "2024-12-21 19:01:30,408"}) [crawlee._utils.system] DEBUG Calling get_memory_info()... [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task... [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Awaiting listener task... [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed. [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set... [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Listener task completed. [crawlee.events._event_manager] DEBUG LocalEventManager.on.listener_wrapper(): Removing listener task from the set... {"url": "{'https://URL2'}", "building_id": 2, "file_id": "UUID('0193ea98-99ff-8c6e-b37f-cfd9e7568bb1')", "message": "Scrape response saved to GCP.", "time": "2024-12-21T19:01:30.448841Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/routes.py", "line": "42", "function": "routes:save_scrape_response"}} [crawlee._autoscaling.autoscaled_pool] DEBUG Worker task finished [crawlee.storages._request_queue] DEBUG There are still ids in the queue head that are pending processing ({"queue_head_ids_pending": 2}) [crawlee.beautifulsoup_crawler._beautifulsoup_crawler] INFO The crawler has reached its limit of 1 requests per crawl. All ongoing requests have now completed. Total requests processed: 1. The crawler will now shut down. [crawlee._autoscaling.autoscaled_pool] DEBUG `is_finished_function` reports that we are finished [crawlee._autoscaling.autoscaled_pool] DEBUG Terminating - waiting for tasks to complete 2024-12-21 19:01:30,764 INFO sqlalchemy.engine.Engine BEGIN (implicit) [sqlalchemy.engine.Engine] INFO BEGIN (implicit) ({"message": "BEGIN (implicit)", "asctime": "2024-12-21 19:01:30,764"}) 2024-12-21 19:01:30,765 INFO sqlalchemy.engine.Engine INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at [sqlalchemy.engine.Engine] INFO INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at ({"message": "INSERT INTO scrape_responses (file_id, requested_url, loaded_url, building_id, retry_count) VALUES ($1::UUID, $2::VARCHAR, $3::VARCHAR, $4::INTEGER, $5::INTEGER) RETURNING scrape_responses.scrape_page_id, scrape_responses.created_at", "asctime": "2024-12-21 19:01:30,765"}) 2024-12-21 19:01:30,766 INFO sqlalchemy.engine.Engine [cached since 0.6874s ago] (UUID('0193ea98-9b69-8046-96af-dc9893ff15c6'), 'https://URL1', 'https://URL1', 1, 0) [sqlalchemy.engine.Engine] INFO [cached since 0.6874s ago] (UUID('0193ea98-9b69-8046-96af-dc9893ff15c6'), 'https://URL1', 'https://URL1', 1, 0) ({"message": "[cached since 0.6874s ago] (UUID('0193ea98-9b69-8046-96af-dc9893ff15c6'), 'https://URL1', 'https://URL1', 1, 0)", "asctime": "2024-12-21 19:01:30,766"}) 2024-12-21 19:01:30,909 INFO sqlalchemy.engine.Engine COMMIT [sqlalchemy.engine.Engine] INFO COMMIT ({"message": "COMMIT", 
"asctime": "2024-12-21 19:01:30,909"}) {"url": "{'https://URL1'}", "building_id": 1, "file_id": "UUID('0193ea98-9b69-8046-96af-dc9893ff15c6')", "message": "Scrape response saved to GCP.", "time": "2024-12-21T19:01:30.946939Z", "severity": "INFO", "logging.googleapis.com/sourceLocation": {"file": "/workspaces/AustinRent/scraper/scraper/routes.py", "line": "42", "function": "routes:save_scrape_response"}} [crawlee._autoscaling.autoscaled_pool] DEBUG Worker task finished [crawlee._autoscaling.autoscaled_pool] DEBUG Worker tasks finished [crawlee._autoscaling.autoscaled_pool] INFO Waiting for remaining tasks to finish [crawlee._autoscaling.autoscaled_pool] DEBUG Pool cleanup finished [crawlee.statistics._statistics] DEBUG Persisting state of the Statistics (event_data=is_migrating=False). [crawlee.beautifulsoup_crawler._beautifulsoup_crawler] INFO Final request statistics: ┌───────────────────────────────┬──────────┐ │ requests_finished │ 2 │ │ requests_failed │ 0 │ │ retry_histogram │ [2] │ │ request_avg_failed_duration │ None │ │ request_avg_finished_duration │ 1.623642 │ │ requests_finished_per_minute │ 63 │ │ requests_failed_per_minute │ 0 │ │ request_total_duration │ 3.247283 │ │ requests_total │ 2 │ │ crawler_runtime │ 1.919938 │ └───────────────────────────────┴──────────┘ ``` </p> </details>
1medium
Title: PC Stability Body: Hello, it's me again. I am a little bit concerned about my PC. My PC configuration is: MODEL: HP Pavilion 15; PROCESSOR: Intel(R) Core i7-9750H @ 2.60 GHz; RAM: 8 GB DDR4; GRAPHICS CARD: 4 GB NVIDIA GEFORCE GTX 1650; OPERATING SYSTEM: Windows 10 x64. I am using SAEHD for training (I mean GPU training). The problem is that whenever I run the Trainer Module [(5.XSeg) train.bat]: 1. My laptop's temperature rises somewhat after some time, say after 1 hour or so. Is this fine? 2. The trainer module is taking about 17 hours to mask "266 segmented" images. Is this normal? My fan speed is also rising very quickly. Please help.
1medium
Title: I got this error running demo_cli.py. Assistance would be appreciated. Body: Traceback (most recent call last): File "c:\Users\----\Downloads\Real-Time-Voice-Cloning-master\demo_cli.py", line 80, in <module> encoder.embed_utterance(np.zeros(encoder.sampling_rate)) File "c:\Users\----\Downloads\Real-Time-Voice-Cloning-master\encoder\inference.py", line 144, in embed_utterance frames = audio.wav_to_mel_spectrogram(wav) File "c:\Users\----\Downloads\Real-Time-Voice-Cloning-master\encoder\audio.py", line 58, in wav_to_mel_spectrogram frames = librosa.feature.melspectrogram( TypeError: melspectrogram() takes 0 positional arguments but 2 positional arguments (and 2 keyword-only arguments) were given Here is the code that the error occurred on: def wav_to_mel_spectrogram(wav): """ Derives a mel spectrogram ready to be used by the encoder from a preprocessed audio waveform. Note: this not a log-mel spectrogram. """ frames = librosa.feature.melspectrogram( wav, sampling_rate, n_fft=int(sampling_rate * mel_window_length / 1000), hop_length=int(sampling_rate * mel_window_step / 1000), n_mels=mel_n_channels ) return frames.astype(np.float32).T Update: I believe the issue is that I am on the wrong version of librosa, would anyone know the version used here?
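The traceback points at librosa's keyword-only API: starting with librosa 0.10, `melspectrogram` no longer accepts positional `y`/`sr` arguments. A minimal sketch of the adjusted call, assuming librosa >= 0.10; the constant values below are illustrative stand-ins for the ones defined in the repository's `encoder/params_data.py`:

```python
import librosa
import numpy as np

# Illustrative values; the real ones come from encoder/params_data.py.
sampling_rate = 16000
mel_window_length = 25   # ms
mel_window_step = 10     # ms
mel_n_channels = 40


def wav_to_mel_spectrogram(wav: np.ndarray) -> np.ndarray:
    # librosa >= 0.10 requires y= and sr= to be passed as keyword arguments.
    frames = librosa.feature.melspectrogram(
        y=wav,
        sr=sampling_rate,
        n_fft=int(sampling_rate * mel_window_length / 1000),
        hop_length=int(sampling_rate * mel_window_step / 1000),
        n_mels=mel_n_channels,
    )
    return frames.astype(np.float32).T
```

Alternatively, pinning an older release (for example `pip install librosa==0.9.2`) keeps the original positional call working.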
1medium
Title: Issue when emitting in threads Body: Hi, I am currently facing issues when emitting an event in a separate thread. In short: * Main app runs as usual * When task is open, I start a thread in the background * In the background thread, I use *flask_socketio.emit* to emit events * In an Angular app I react to those events In short, * events from all around the app are detected * events from the worker thread do not work I have this issue when running the app via *socketio.run* or *eventlet.wsgi.server*. When using *flask run* or *gunicorn* I have no issues. Any clue on the why? I can provide a minimal example if needed.
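In case it helps narrow this down: when the server runs under eventlet (`socketio.run` or `eventlet.wsgi.server`), raw OS threads tend not to cooperate with the eventlet event loop, and the usual recommendation is to spawn workers through the extension itself. A minimal sketch under that assumption; the event names and `background_worker` are placeholders, not taken from the report:

```python
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode="eventlet")


def background_worker(task_id):
    # Emit via the SocketIO instance (works outside a request context) and
    # yield control with socketio.sleep() instead of time.sleep().
    for step in range(5):
        socketio.emit("task_progress", {"task": task_id, "step": step})
        socketio.sleep(1)


@socketio.on("start_task")
def start_task(data):
    # start_background_task picks the right primitive for the async mode
    # (eventlet/gevent greenlet, or a plain thread otherwise).
    socketio.start_background_task(background_worker, data["task_id"])


if __name__ == "__main__":
    socketio.run(app)
```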
1medium
Title: docker: host does not create user container (invalid reference format) Body: _From @inkrement on January 24, 2018 7:43_ I basically, tried to run the [Jupyterhub/docker-demo](https://github.com/jupyterhub/jupyterhub-deploy-docker), but upgraded to the newest docker-versions. The server itself and Github-OAuth work fine, but when I get redirected from Github (right after authentication) I get the following error (and an uncaught exception): ``` jupyterhub | [I 2018-01-24 07:20:32.789 JupyterHub log:124] 302 GET /user/inkrement/ -> /hub/user/inkrement/ (@::ffff:137.208.40.78) 1.14ms jupyterhub | [I 2018-01-24 07:20:32.888 JupyterHub dockerspawner:373] Container 'jupyter-inkrement' is gone jupyterhub | [E 2018-01-24 07:20:32.911 JupyterHub user:427] Unhandled error starting inkrement's server: 500 Server Error: Internal Server Error ("invalid reference format") jupyterhub | [I 2018-01-24 07:20:32.918 JupyterHub dockerspawner:373] Container 'jupyter-inkrement' is gone jupyterhub | [W 2018-01-24 07:20:32.918 JupyterHub dockerspawner:344] container not found ``` I checked all running and stoped docker containers, but there is no container named "jupyter-inkrement". It seems like it was not able to spawn the docker container, but I do not know what to do. Any suggestions? The container-docker is linked to the host-docker via volume as in the demo and I am using a quite new docker version: 17.05.0-ce, build 89658be _Copied from original issue: jupyterhub/jupyterhub#1630_
1medium
Title: [BUG] emoji overlaps with text Body: **Describe the bug** I want to wrap text with emojis on both sides. Adding emoji on the right side/after a text works fine, but adding an emoji before text causes both to overlap and I need to add manually spaces. Adding spaces is not ideal because I am adding multiple emojis at once, and I need to create separate variables depending on whether the emojis appear before or after a piece of text. It also looks a bit uneven visually. Here is an example: ![image](https://github.com/Textualize/rich/assets/16746370/5ec7dbd1-34c9-4f0a-b782-420c1a060448) **Platform** Ubuntu 22.04.2 LTS - terminal <details> <summary>Click to expand</summary> ``` python -m rich.diagnose $ python -m rich.diagnose ╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮ │ A high level console interface. │ │ │ │ ╭──────────────────────────────────────────────────────────────────────────────╮ │ │ │ <console width=211 ColorSystem.TRUECOLOR> │ │ │ ╰──────────────────────────────────────────────────────────────────────────────╯ │ │ │ │ color_system = 'truecolor' │ │ encoding = 'utf-8' │ │ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'> │ │ height = 53 │ │ is_alt_screen = False │ │ is_dumb_terminal = False │ │ is_interactive = True │ │ is_jupyter = False │ │ is_terminal = True │ │ legacy_windows = False │ │ no_color = False │ │ options = ConsoleOptions( │ │ size=ConsoleDimensions(width=211, height=53), │ │ legacy_windows=False, │ │ min_width=1, │ │ max_width=211, │ │ is_terminal=True, │ │ encoding='utf-8', │ │ max_height=53, │ │ justify=None, │ │ overflow=None, │ │ no_wrap=False, │ │ highlight=None, │ │ markup=None, │ │ height=None │ │ ) │ │ quiet = False │ │ record = False │ │ safe_box = True │ │ size = ConsoleDimensions(width=211, height=53) │ │ soft_wrap = False │ │ stderr = False │ │ style = None │ │ tab_size = 8 │ │ width = 211 │ ╰──────────────────────────────────────────────────────────────────────────────────╯ ╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮ │ Windows features available. │ │ │ │ ╭───────────────────────────────────────────────────╮ │ │ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │ │ ╰───────────────────────────────────────────────────╯ │ │ │ │ truecolor = False │ │ vt = False │ ╰───────────────────────────────────────────────────────╯ ╭────── Environment Variables ───────╮ │ { │ │ 'TERM': 'xterm-256color', │ │ 'COLORTERM': 'truecolor', │ │ 'CLICOLOR': None, │ │ 'NO_COLOR': None, │ │ 'TERM_PROGRAM': None, │ │ 'COLUMNS': None, │ │ 'LINES': None, │ │ 'JUPYTER_COLUMNS': None, │ │ 'JUPYTER_LINES': None, │ │ 'JPY_PARENT_PID': None, │ │ 'VSCODE_VERBOSE_LOGGING': None │ │ } │ ╰────────────────────────────────────╯ platform="Linux" ``` ``` pip freeze | grep rich rich==13.5.2 rich-argparse==1.1.1 ``` </details>
1medium
Title: Move cleanup to disconnect event handler in Windows backends Body: From code review comment: https://github.com/hbldh/bleak/pull/450#discussion_r597064014
1medium
Title: Convert google slide URLs to PDFs (as with powerpoint etc) Body: **Is your feature request related to a problem? Please describe.** People increasingly add presentations as links to google slides. This is bad as these URLs can expire/be deleted in which case the content is lost. Of course, we can ask speakers to remember to also update a pdf version, but since indico can autoconvert other file formats, it would be great if it could do it for google slide URLs as well. **Describe the solution you'd like** Use google API to convert to pdf ``` GET https://www.googleapis.com/drive/v3/files/{fileId}/export ``` as described [here](https://developers.google.com/drive/api/reference/rest/v3/files/export). where the `{field}` can be extracted from a URL e.g. ``` https://docs.google.com/presentation/d/1kbmidlabdSPHUAgS2sZOCGaMqvCmjSM6Kk2p9LSH3Oo/edit#slide=id.p ``` and the `mimeType` is obviously `application/pdf` e.g. using the URL above: ``` https://developers.google.com/drive/api/reference/rest/v3/files/export?apix_params=%7B%22fileId%22%3A%221kbmidlabdSPHUAgS2sZOCGaMqvCmjSM6Kk2p9LSH3Oo%22%2C%22mimeType%22%3A%22application%2Fpdf%22%7D ```
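As a rough illustration of the conversion step, a hedged sketch assuming an OAuth/service-account token with Drive read scope is available; the regex and helper names are made up for illustration and are not part of Indico:

```python
import re

import requests

SLIDES_URL_RE = re.compile(r"docs\.google\.com/presentation/d/([A-Za-z0-9_-]+)")


def google_slides_to_pdf(url: str, access_token: str) -> bytes:
    """Download a Google Slides deck as PDF via the Drive v3 export endpoint."""
    match = SLIDES_URL_RE.search(url)
    if not match:
        raise ValueError(f"Not a Google Slides URL: {url}")
    file_id = match.group(1)
    resp = requests.get(
        f"https://www.googleapis.com/drive/v3/files/{file_id}/export",
        params={"mimeType": "application/pdf"},
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content
```

Note that exports through this endpoint are size-capped (on the order of 10 MB), so very large decks may need a different approach.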
1medium
Title: Error parsing boolean in DeploymentScheduleCreate Body: ### Bug summary I get the following error when trying to have a parameterized value for the active field in schedule. Error message ``` 1 validation error for DeploymentScheduleCreate active Input should be a valid boolean, unable to interpret input For further information visit https://errors.pydantic.dev/2.10/v/bool_parsing ``` get-schedule-isactive.sh ``` #!/bin/sh echo "false" ``` prefect.yaml ``` definitions: work_pools: docker_work_pool: &docker_work_pool name: docker-pool work_queue_name: "{{ get-work-pool.stdout }}" schedules: every_hour: &every_hour cron: "0 0 * * *" timezone: "America/Chicago" active: "{{ get-schedule-isactive.stdout }}" actions: docker_build: &docker_build - prefect.deployments.steps.run_shell_script: id: get-commit-hash script: git rev-parse --short HEAD stream_output: false - prefect.deployments.steps.run_shell_script: id: get-work-pool script: sh utils/get-work-pool.sh stream_output: false - prefect.deployments.steps.run_shell_script: id: get-schedule-isactive script: sh utils/get-schedule-isactive.sh stream_output: false - prefect_docker.deployments.steps.build_docker_image: id: build-image image_name: "repo/image" tag: "{{ get-commit-hash.stdout }}" dockerfile: Dockerfile ``` If I update the schedule in prefect-yaml like below it works fins. ``` schedules: every_hour: &every_hour cron: "0 0 * * *" timezone: "America/Chicago" active: "false" ``` Is this because of pydantic? Workarounds? ### Version info ```Text Version: 3.1.4 API version: 0.8.4 Python version: 3.12.6 Git commit: 78ee41cb Built: Wed, Nov 20, 2024 7:37 PM OS/Arch: linux/x86_64 Profile: ephemeral Server type: server Pydantic version: 2.10.3 Integrations: prefect-docker: 0.6.2 prefect-bitbucket: 0.3.1 ``` ### Additional context _No response_kr
1medium
Title: FuzzyInteger with low bound set does not work as intended Body: #### Description My intention is to have a randomized integer with a minimum value of 1 but instead the library returns a randomized integer between 0 and 1 #### To Reproduce my_val = fuzzy.FuzzyInteger(low=1)
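If it helps, this looks consistent with `FuzzyInteger` following `range()`-style semantics: when only one value is given it is treated as the upper bound and the lower bound defaults to 0. A small sketch of the presumably intended declaration, with an arbitrarily chosen upper bound:

```python
from factory import fuzzy

# FuzzyInteger(low, high=None): with a single value it becomes (0, value),
# so both bounds must be given to enforce a minimum of 1.
my_val = fuzzy.FuzzyInteger(1, 9999)
```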
1medium
Title: Do you have a model trained on all datasets? Body: Great project, thank you! Do you have a model trained on all datasets?
3misc
Title: Upgrade `websockets.legacy` usage Body: ### Describe the current behavior Currently we rely on imported objects from `websockets.legacy` which is deprecated (as can be seen here, for example: https://github.com/PrefectHQ/prefect/actions/runs/13147657643/job/36689175326?pr=16972). ### Describe the proposed behavior We need to plan to move to the newer asyncio implementation following the guidelines outlined [here](https://websockets.readthedocs.io/en/stable/howto/upgrade.html). We believe this should be straightforward as we don't rely on anything deep cut, but opening this issue to track so we don't get caught off guard with an upgrade. ### Example Use _No response_ ### Additional context _No response_
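For reference, a rough before/after sketch of the client-side change described in the upgrade guide; module paths assume a recent websockets release with the new asyncio implementation, and the URI is a placeholder:

```python
# Legacy (deprecated) import:
#   from websockets.legacy.client import connect
# New asyncio implementation:
import asyncio

from websockets.asyncio.client import connect


async def listen(uri: str) -> None:
    async with connect(uri) as websocket:
        await websocket.send("subscribe")
        async for message in websocket:
            print(message)


if __name__ == "__main__":
    asyncio.run(listen("wss://example.com/events"))
```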
1medium
Title: Exception caught for serializer field ReadOnlyField with a FileField source Body: **Describe the bug** I have a simple file server: ```python from django.db import models from django.shortcuts import render from rest_framework import generics, serializers class FooFile(models.Model): fname = models.FileField(max_length=1024, unique=True) class FooFileSerializer(serializers.HyperlinkedModelSerializer): fname = serializers.FileField(use_url=False, required=False) fsize = serializers.ReadOnlyField(source="fname.size") # <-- herein lies the problem class Meta: model = FooFile fields = ("fname", "fsize") class FooFileList(generics.ListCreateAPIView): http_method_names = ["get", "post"] serializer_class = FooFileSerializer queryset = FooFile.objects.all() ``` Generating the OpenAPI schema using drf-spectacular prints this warning: ``` Warning [FooFileList > FooFileSerializer]: could not resolve field on model <class 'foo.models.FooFile'> with path "fname.size". This is likely a custom field that does some unknown magic. Maybe consider annotating the field/property? Defaulting to "string". (Exception: 'NoneType' object has no attribute '_meta') ``` **To Reproduce** See https://github.com/jennydaman/spectacular-fsize-bug **Expected behavior** No warnings. `components.schemas.FooFile.properties.fsize.type` should be `number`.
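A possible workaround sketch for silencing the warning and getting a correctly typed schema (an integer rather than the defaulted string): annotate the field, for example by switching to a `SerializerMethodField` decorated with `extend_schema_field`, so spectacular does not have to traverse the dotted source. This assumes the models from the report:

```python
from drf_spectacular.utils import extend_schema_field
from rest_framework import serializers

from foo.models import FooFile  # the app module from the report


class FooFileSerializer(serializers.HyperlinkedModelSerializer):
    fname = serializers.FileField(use_url=False, required=False)
    fsize = serializers.SerializerMethodField()

    class Meta:
        model = FooFile
        fields = ("fname", "fsize")

    @extend_schema_field(serializers.IntegerField())
    def get_fsize(self, obj) -> int:
        return obj.fname.size
```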
1medium
Title: UnboundLocalError: local variable 'raw_devices' referenced before assignment Body: https://github.com/tensorpack/tensorpack/blob/801e29218f299905298b9bf430d2b95b527b04d5/tensorpack/graph_builder/training.py#L350-L355 I have reviewed this file, and by comparison I think `raw_devices = ['/gpu:{}'.format(k) for k in self.towers]` in line 351 should probably be defined before the `if-else` statement just like line 153-158 in the same file: https://github.com/tensorpack/tensorpack/blob/801e29218f299905298b9bf430d2b95b527b04d5/tensorpack/graph_builder/training.py#L153-L158
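As a small self-contained illustration of why the current branch structure raises (a generic sketch of the pattern, not a verified patch against tensorpack):

```python
def pick_devices(towers, need_sync):
    if need_sync:
        pass  # raw_devices is never assigned on this path
    else:
        raw_devices = ['/gpu:{}'.format(k) for k in towers]
    return raw_devices  # UnboundLocalError when need_sync is True


def pick_devices_fixed(towers, need_sync):
    # Hoisting the definition above the branch, as lines 153-158 already do,
    # makes it available to both paths.
    raw_devices = ['/gpu:{}'.format(k) for k in towers]
    return raw_devices
```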
0easy
Title: Allow customization of faiss index type Body: See #348
1medium
Title: Loading a pt model and using Grad-cam Body: I hope that you can support me, I am trying to load a scripted model and then to use Grad-cam. Sadly it tells me that there is an issue on the hook that cannot be setup for a scripted model. Do you know any way in which I could fix this? Thank you in advance
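If the original `nn.Module` definition is still available, one hedged workaround is to copy the scripted model's weights back into an eager instance, since forward/backward hooks cannot be registered on a TorchScript module. A small self-contained sketch of the idea (the tiny network is only a stand-in for the real architecture):

```python
import torch
import torch.nn as nn


class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.head = nn.Linear(8, 2)

    def forward(self, x):
        x = self.conv(x).mean(dim=(2, 3))
        return self.head(x)


# Stand-in for torch.jit.load("model.pt"): a scripted module.
scripted = torch.jit.script(TinyNet())

# Rebuild the eager model and copy the weights; this works when the
# parameter names of the eager and scripted modules line up.
eager = TinyNet()
eager.load_state_dict(scripted.state_dict())
eager.eval()
# Grad-CAM hooks can now be registered on eager.conv (or any other layer).
```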
1medium
Title: TST,DOC: Bump `scipy_doctest` (or remove pin) and fix new failures Body: @ev-br ping FYI, since the new scipy-doctest release (less than an hour ago) the refcheck has universal failures. I suspect, this is all just bad documentation that needs fixing, but not sure yet. In either case, until fixed both CircleCI and the "benchmark" tests which also still run the refcheck are expected to fail. (Currently, also the linter just started failing...)
1medium
Title: image.no_webcam_support Body: ### Describe the bug input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True) This code is deployed on the 192.168.0.3 server. I access the project at 192.168.0.3:5019 from the 192.168.0.5 machine, and when I click the webcam it reports the error image.no_webcam_support. Why does this happen, and how should I fix it? ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction ```python import gradio as gr ``` input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True) ### Screenshot input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True) ### Logs ```shell input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True) ``` ### System Info ```shell input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True) ``` ### Severity Blocking usage of gradio
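A likely explanation: browsers only expose the webcam (getUserMedia) on secure origins, i.e. https or localhost, so opening the app over plain http from a different host typically triggers image.no_webcam_support. A hedged sketch of serving Gradio over TLS with a self-signed certificate; the certificate paths and the demo layout are placeholders, not from the report:

```python
import gradio as gr


def echo(img):
    return img


with gr.Blocks() as demo:
    input_image = gr.Image(type="pil", label="图像", sources=["webcam"],
                           interactive=True, show_fullscreen_button=True)
    output_image = gr.Image(type="pil")
    gr.Button("Run").click(echo, input_image, output_image)

# Webcam access needs a secure context, so either open the app via
# localhost (e.g. an SSH tunnel) or serve it over HTTPS.
demo.launch(
    server_name="0.0.0.0",
    server_port=5019,
    ssl_certfile="cert.pem",   # placeholder paths to a self-signed cert
    ssl_keyfile="key.pem",
    ssl_verify=False,
)
```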
1medium
Title: Scroll to field for read only fields is broken (develop) Body: ctrl+j > jump to field > type fieldname outside of current window. Expected behaviour: Scrolls to that field slowly Current behaviour: nothing.
1medium
Title: reccurent-dqn examples Body: This mostly goes to @kirkscheper as I have seen is the most recently active in recurrent. I try running the example recurrent_dqn_atari.py and I have several problems. In the beginning I had this problem: ``` ValueError: perm dim 5 is out of range of input rank 5 for 'permute_1/transpose' (op: 'Transpose') with input shapes: [32,?,1,84,84], [5] and with computed input tensors: input[1] = <0 2 3 4 5>.>> ``` I solved it by changing this ``` model.add(Permute((2, 3, 4, 5), batch_input_shape=input_shape)) ``` with that ``` model.add(Permute((1, 3, 4, 2), batch_input_shape=input_shape)) ``` But that create a problem in dimensions: ``` ValueError: Error when checking input: expected permute_2_input to have 5 dimensions, but got array with shape (1, 1, 84, 84) ``` Any suggestions on how to run the example? I want to implement recurrent in NAFAgent. I cannot find any examples with recurrent NAFAgent but I must have at least one recurrent example to develop the NAFAgent recurrent. Thank you!
1medium
Title: hifigan training cannot run directly on the CPU, and after modifying the code it cannot resume training Body: **Summary [one-sentence description of the problem]** A clear and concise description of what the issue is. hifigan training cannot run directly on the CPU, and after modifying the code, training cannot resume from a previous run. **Env & To Reproduce [environment and reproduction]** Describe the environment, code version, and model you used: latest environment and code version; model: hifigan **Screenshots [if any]** If applicable, add screenshots to help. In MockingBird-main\vocoder\hifigan\train.py, line 41 torch.cuda.manual_seed(h.seed) was changed to torch.manual_seed(h.seed); line 42 device = torch.device('cuda:{:d}'.format(rank)) was changed to device = torch.device('cuda' if torch.cuda.is_available() else 'cpu'). After that, training runs on the CPU, but each run of the same code cannot resume from the previous training.
1medium
Title: Issue when trying to run InstaPy Body: Workspace in use: "C:/Users/Jordan Gri/InstaPy" OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO INFO [2022-10-27 16:03:58] [shotsbyjordangri] Session started! oooooooooooooooooooooooooooooooooooooooooooooooooooooo INFO [2022-10-27 16:04:02] [shotsbyjordangri] - Cookie file not found, creating cookie... WARNING [2022-10-27 16:04:13] [shotsbyjordangri] Login A/B test detected! Trying another string... WARNING [2022-10-27 16:04:18] [shotsbyjordangri] Could not pass the login A/B test. Trying last string... ERROR [2022-10-27 16:04:23] [shotsbyjordangri] Login A/B test failed! b"Message: Unable to locate element: //div[text()='Log In']\nStacktrace:\nWebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:186:5\nNoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:398:5\nelement.find/</<@chrome://remote/content/marionette/element.js:300:16\n" Traceback (most recent call last): File "C:\Users\Jordan Gri\Desktop\Python Projects\InstaPy-master\instapy\login_util.py", line 337, in login_user login_elem = browser.find_element( ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 856, in find_element return self.execute(Command.FIND_ELEMENT, { ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 429, in execute self.error_handler.check_response(response) File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 243, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //button[text()='Log In'] Stacktrace: WebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:186:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:398:5 element.find/</<@chrome://remote/content/marionette/element.js:300:16 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Jordan Gri\Desktop\Python Projects\InstaPy-master\instapy\login_util.py", line 343, in login_user login_elem = browser.find_element( ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 856, in find_element return self.execute(Command.FIND_ELEMENT, { ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 429, in execute self.error_handler.check_response(response) File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 243, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //button[text()='Log In'] Stacktrace: WebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:186:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:398:5 element.find/</<@chrome://remote/content/marionette/element.js:300:16 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Jordan Gri\Desktop\Python Projects\InstaPy-master\instapy\login_util.py", line 
350, in login_user login_elem = browser.find_element( ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 856, in find_element return self.execute(Command.FIND_ELEMENT, { ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 429, in execute self.error_handler.check_response(response) File "C:\Users\Jordan Gri\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 243, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //div[text()='Log In'] Stacktrace: WebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:186:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:398:5 element.find/</<@chrome://remote/content/marionette/element.js:300:16 .............................................................................................................................. CRITICAL [2022-10-27 16:04:23] [shotsbyjordangri] Unable to login to Instagram! You will find more information in the logs above. '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' ERROR [2022-10-27 16:04:23] [shotsbyjordangri] You have too few comments, please set at least 10 distinct comments to avoid looking suspicious. INFO [2022-10-27 16:04:23] [shotsbyjordangri] Sessional Live Report: |> No any statistics to show [Session lasted 31.08 seconds] OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO INFO [2022-10-27 16:04:23] [shotsbyjordangri] Session ended! ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
1medium
Title: Front end does not respond to mouse. Body: The only way to navigate the front end is with the tab key. Not a frontend guy but I'm not seeing any errors. A lot of npm warnings. It looks ok. I have noticed if I try to render only a button it is also unresponsive.
1medium
Title: st.segmented_control ==> use_container_width parameter Body: ### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [x] I added a descriptive title and summary to this issue. ### Summary Support the use_container_width parameter for st.segmented_control and st.pill ### Why? Improve UI of apps by reducing white space and allowing size of options in segmented control to expand/contract dynamically to current screen size ### How? test = st.segmented_control( label="Filter Options", options=["One", "Two", "Three"], label_visibility="collapsed", **use_container_width=True** ) ### Additional Context _No response_
1medium
Title: [BUG] Not Finding Blueprint routes when using url_prefix Body: ## Not Finding Blueprint routes when using url_prefix It seems that the lib doesn't load Blueprints routes if the Blueprint uses a prefix, I've opened a PR explaining the issue and the possible solution in depth here: https://github.com/0b01001001/spectree/pull/193#issue-1107430572
1medium
Title: How to accomplish Read/Write transactions with a one to many relationship Body: ### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options 👆 ### Example Code ```python class User(SQLModel): __tablename__ = "users" id: Optional[str] cars: List[Car] = Relationship(sa_relationship=RelationshipProperty("Car", back_populates="user") class Car(SQLModel): ... user_id: str = Field(default=None, foreign_key="users.id") user: User = Relationship(sa_relationship=RelationshipProperty("User", back_populates="cars")) is_main_car: bool ``` ### Description I have two tables that have a many to one relationship, such as the one described above. Any given user can only have a single car that `is_main_car`. Additionally, the first car a user gets must be the main car. I am trying to determine how the transactional semantics work with this relationship within a Session. If I read the `user`, and then use the `user.cars` field to determine if the user has 0 cars or already has a main car, can I rely on that condition still being true when I write my new main `Car` row to the `Cars` table (assuming it is all within a single Session)? ### Operating System macOS ### Operating System Details _No response_ ### SQLModel Version 0.0.4 ### Python Version 3.9.7 ### Additional Context _No response_
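On the isolation question itself: a plain read inside a Session does not lock the rows, so another transaction could insert a main car between the check and the write unless the rows are locked or a database constraint enforces the invariant. A hedged sketch using a SELECT ... FOR UPDATE style lock via SQLAlchemy's `with_for_update()`; note the snippet above is also missing a closing parenthesis on the first `Relationship(...)` call, and `Car` below refers to the model from that snippet, with the engine DSN being a placeholder:

```python
from sqlmodel import Session, create_engine, select

engine = create_engine("postgresql://user:pass@localhost/db")  # placeholder DSN


def add_main_car(user_id: str) -> None:
    with Session(engine) as session:
        # Lock the user's existing cars for the duration of the transaction so
        # the "no other main car" check stays true until commit.
        cars = session.exec(
            select(Car).where(Car.user_id == user_id).with_for_update()
        ).all()
        if any(car.is_main_car for car in cars):
            raise ValueError("user already has a main car")
        session.add(Car(user_id=user_id, is_main_car=True))
        session.commit()
```

Row locks only protect existing rows; for concurrent "first car" inserts, a partial unique index (one row per user where is_main_car is true) is the more robust guarantee.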
1medium
Title: Admin site: Invalid date/times during import generate confusing errors Body: **Describe the bug** If a datetime is invalid during import, this information is reported via the admin site confirmation page. However it is not clear what the exact nature of the error is. The error is reported, but it looks like the import was ok, because the date field is rendered: ![date_err](https://user-images.githubusercontent.com/6249838/163270160-80521a8f-3a84-435b-a2b3-0ecd31086711.png) **To Reproduce** Steps to reproduce the behavior: 1. Edit the file `tests/core/exports/books1.csv` 2. Add a column called `added` with value: `2022/02/17 19:46:59` (this is a date which cannot be parsed by default) 3. Import the file via the Admin console 4. See error **Versions (please complete the following information):** - Django Import Export: 3.0.0 (beta) - Python 3.9 - Django 4.0 **Expected behavior** It would be best to see a clear indication of what the problem is. Note the original [exception](https://github.com/django-import-export/django-import-export/blob/033f803c5994ceba9da8b610819ee5b52a630bf7/import_export/widgets.py#L229) is: > time data '2022/02/17 19:46:59' does not match format '%Y-%m-%d %H:%M:%S' This information is lost.
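Separately from the error-reporting problem this issue is about, a hedged workaround sketch for the parsing side: declare the column on the resource with a `DateTimeWidget` whose format matches the file, so the value is accepted instead of failing:

```python
from import_export import fields, resources, widgets

from core.models import Book  # the test app's model


class BookResource(resources.ModelResource):
    added = fields.Field(
        attribute="added",
        column_name="added",
        widget=widgets.DateTimeWidget(format="%Y/%m/%d %H:%M:%S"),
    )

    class Meta:
        model = Book
```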
1medium
Title: unable to get different dates with FuzzyDateTime Body: Hello, In Django tests, i have ``` python class EventFactory(factory.django.DjangoModelFactory): ... dtstart = FuzzyDateTime(datetime.datetime(2008, 1, 1, tzinfo=UTC), datetime.datetime(2009, 1, 1, tzinfo=UTC), force_hour=10, force_minute=30, force_second=0, force_microsecond=0).evaluate(2, None, False) ``` and i use it: ``` self.event = EventFactory.create() self.event2 = EventFactory.create() self.event3 = EventFactory.create() ``` Displaying the resulting dtstart, i got: dtstart: "2008-07-21 10:30:00+00:00" dtstart: "2008-07-21 10:30:00+00:00" dtstart: "2008-07-21 10:30:00+00:00" The dates are the same, that's not what i expect. What i don't understand is when i try it in python shell, everytime i call FuzzyDateTime(...), the rresult is always different. Am i missing something? thanks in advance for help, gerard
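The giveaway appears to be the trailing `.evaluate(2, None, False)`: it evaluates the fuzzer once, at class-definition time, so every factory instance reuses that single datetime (while calling `FuzzyDateTime(...)` interactively in the shell constructs a new declaration each time, which is why it looks random there). A sketch of the declaration that leaves evaluation to the factory; `Event` stands in for the project's Django model:

```python
import datetime

import factory
from factory import fuzzy

UTC = datetime.timezone.utc


class EventFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Event  # the project's Django model

    dtstart = fuzzy.FuzzyDateTime(
        datetime.datetime(2008, 1, 1, tzinfo=UTC),
        datetime.datetime(2009, 1, 1, tzinfo=UTC),
        force_hour=10,
        force_minute=30,
        force_second=0,
        force_microsecond=0,
    )  # no .evaluate() here: the factory draws a new value per instance
```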
1medium
Title: PyCharm with virtualenv and reloadium plugin does not work - No module named reloadium.corium Body: PyCharm Pro 2022.3.1 with virtualenv and Python 3.10 after icon click "Debug 'dl' with Reloadium" debug console output: /home/user/py310env/bin/python -m reloadium pydev_proxy /home/user/pycharm/plugins/python/helpers/pydev/pydevd.py --multiprocess --save-signatures --qt-support=auto --client 127.0.0.1 --port 41999 --file /home/user/XXX/yyy/dl.py It seems like your platform or Python version are not supported yet. Windows, Linux, macOS and Python 64 bit >= 3.7 (>= 3.9 for M1) <= 3.10 are currently supported. Please submit a github issue if you believe Reloadium should be working on your system at https://github.com/reloadware/reloadium To see the exception run reloadium with environmental variable RW_DEBUG=True Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 187, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/lib/python3.10/runpy.py", line 146, in _get_module_details return _get_module_details(pkg_main_name, error) File "/usr/lib/python3.10/runpy.py", line 110, in _get_module_details __import__(pkg_name) File "/home/user/.reloadium/package/3.7/reloadium/__init__.py", line 4, in <module> pre_import_check() File "/home/user/.reloadium/package/3.7/reloadium/__utils__.py", line 21, in pre_import_check import reloadium.corium **ModuleNotFoundError: No module named 'reloadium.corium'** Process finished with exit code 1
1medium
Title: torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default with `_prepare_4d_attention_mask_for_sdpa( Body: > Hello @fxmarty > > When I try using torch.compile by using `_attn_implementation="sdpa"` in `BertConfig`, I get the error coming from `_prepare_4d_attention_mask_for_sdpa()` whichis because of the data dependent flow. > > Specifically, > > > > ``` > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py", line 1108, in forward > > extended_attention_mask = _prepare_4d_attention_mask_for_sdpa( > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/transformers/modeling_attn_mask_utils.py", line 448, in _prepare_4d_attention_mask_for_sdpa > > if not is_tracing and torch.all(mask == 1): > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/functional_tensor.py", line 411, in __torch_dispatch__ > > outs_unwrapped = func._op_dk( > > ^^^^^^^^^^^^ > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/utils/_stats.py", line 20, in wrapper > > return fn(*args, **kwargs) > > ^^^^^^^^^^^^^^^^^^^ > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 896, in __torch_dispatch__ > > return self.dispatch(func, types, args, kwargs) > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1241, in dispatch > > return self._cached_dispatch_impl(func, types, args, kwargs) > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 974, in _cached_dispatch_impl > > output = self._dispatch_impl(func, types, args, kwargs) > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1431, in _dispatch_impl > > op_impl_out = op_impl(self, func, *args, **kwargs) > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 150, in dispatch_to_op_implementations_dict > > return op_implementations_dict[func](fake_mode, func, *args, **kwargs) > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > File "/home/amodab01/anaconda3/envs/ml_training/lib/python3.11/site-packages/torch/_subclasses/fake_impls.py", line 284, in local_scalar_dense > > raise DataDependentOutputException(func) > > torch._subclasses.fake_tensor.DataDependentOutputException: aten._local_scalar_dense.default > > ``` > > Is this related to https://github.com/pytorch/pytorch/pull/120400, and do you anticipate there's any solution to this? Ofcourse turning SDPA off works > > _Originally posted by @amodab01 in [221aaec](https://github.com/huggingface/transformers/commit/221aaec6ecf7558e4956dadd662d7d3adb22e420#r152370315)_
1medium
Title: Customize profile filename Body: It would be nice to easily be able to customize the profile filename in the middleware https://github.com/joerick/pyinstrument/blob/4b37f8cdc531be41a7f7e57932f0b770244025d5/pyinstrument/middleware.py#L78
1medium
Title: Suppress header output Body: Is there any way to suppress this output at the start of `pytest -n auto`? `-q` has no output. ``` [gw0] darwin Python 3.9.17 cwd: /some/path [gw1] darwin Python 3.9.17 cwd: /some/path [gw2] darwin Python 3.9.17 cwd: /some/path [gw3] darwin Python 3.9.17 cwd: /some/path [gw4] darwin Python 3.9.17 cwd: /some/path [gw5] darwin Python 3.9.17 cwd: /some/path [gw6] darwin Python 3.9.17 cwd: /some/path [gw7] darwin Python 3.9.17 cwd: /some/path [gw8] darwin Python 3.9.17 cwd: /some/path [gw9] darwin Python 3.9.17 cwd: /some/path [gw0] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)] [gw1] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)] [gw2] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)] [gw3] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)] [gw4] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)] [gw6] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)] [gw5] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)] [gw7] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)] [gw8] Python 3.9.17 (main, Jun 12 2023, 14:44:48) -- [Clang 14.0.3 (clang-1403.0.22.14.1)] gw0 [2] / gw1 [2] / gw2 [2] / gw3 ok / gw4 ok / gw5 ok / gw6 ok / gw7 ok / gw8 ok / gw9 gw0 [2] / gw1 [2] / gw2 [2] / gw3 [2] / gw4 ok / gw5 ok / gw6 ok / gw7 ok / gw8 ok / gw9 ``` This is a large log spew that is irrelevant to the user running the test. I tried searching through the issues here and docs, but couldn't find a way that would suppress this output.
1medium
Title: Prevent handlers from cancelling Body: Perhaps I have gaps in understanding aiogram, but why can't I make handler cancellation protection? ``` import asyncio import logging import os from aiogram import Bot, Dispatcher, types #API_TOKEN = os.getenv("BOT_TOKEN") API_TOKEN = "TOKEN" # Configure logging logging.basicConfig(level=logging.INFO) # Initialize bot and dispatcher bot = Bot(token=API_TOKEN) dp = Dispatcher(bot) def shielded(fn): async def wrapped(*args, **kwargs): await asyncio.shield(fn(*args, **kwargs)) return wrapped @dp.message_handler() @shielded async def echo(message: types.Message, *args, **kwargs): try: await asyncio.sleep(7) await message.answer(message.text) except asyncio.CancelledError: print("handler cancelled :(") async def cancel_dp_with_delay(dp, sec): await asyncio.sleep(sec) dp.stop_polling() async def main(): asyncio.create_task(cancel_dp_with_delay(dp, 5)) await dp.start_polling() await dp.wait_closed() await asyncio.sleep(3) if __name__ == '__main__': asyncio.run(main()) ```
1medium
Title: Make it possible for an administrator so send activation links to users upon their creation Body: ### What version of GlobaLeaks are you using? 5.0.56 ### What browser(s) are you seeing the problem on? N/A ### What operating system(s) are you seeing the problem on? N/A ### Describe the issue After upgrading our test environment to 5.0.56, admins cannot send account activation mails. Only the Escrow key admin can do so: Regular admin: ![Image](https://github.com/user-attachments/assets/c6605856-d013-47ee-9cf9-98916c511bff) Escrow key admin: ![Image](https://github.com/user-attachments/assets/bd1bee0f-94a1-4542-ac0f-cdf8b8609d49) ### Proposed solution _No response_
1medium
Title: DocType: Tables in Field Type "Text Editor" - deleting rows and colums broken Body: <!-- Welcome to the Frappe Framework issue tracker! Before creating an issue, please heed the following: 1. This tracker should only be used to report bugs and request features / enhancements to Frappe - For questions and general support, use https://stackoverflow.com/questions/tagged/frappe - For documentation issues, refer to https://frappeframework.com/docs/user/en or the developer cheetsheet https://github.com/frappe/frappe/wiki/Developer-Cheatsheet 2. Use the search function before creating a new issue. Duplicates will be closed and directed to the original discussion. 3. When making a bug report, make sure you provide all required information. The easier it is for maintainers to reproduce, the faster it'll be fixed. 4. If you think you know what the reason for the bug is, share it with us. Maybe put in a PR 😉 --> ## Description of the issue Using a Text Editor field with content type "Rich Text": deleting rows and columns from tables does not work anymore. Rows can't be deleted, and on deleting columns always the first column gets deleted. ## Context information (for bug reports) I found this bug while creating a Web Page and tried to insert a table, see below: ![Image](https://github.com/user-attachments/assets/3f87c25f-1412-4388-a9e5-3358e40d6bc9) **Output of `bench version`** ``` erpnext 15.52.0 frappe 15.56.0 ``` ## Steps to reproduce the issue 1. Create a Web Page as shown above. Use "Rich Text" for Content Type. 2. Insert a table and fill in some data. 3. Try to remove a row. 4. Try to remove a column 5. Try to insert a row in the middle of the table ### Observed result 1. Try to remove a row: not possible 2. Try to remove a column: wrong column is deleted 3. Try to insert a row in the middle of the table: row is inserted, but data is "shifted" between cells below the new row. ### Expected result 1. Try to remove a row: current row is deleted 2. Try to remove a column: current column is deleted 3. Try to insert a row in the middle of the table: row is inserted , data is not "shifted" ### Stacktrace / full error message ``` Does not occur ``` ## Additional information OS version / distribution, `Frappe` install method, etc. debian bullseye, manual install
1medium
Title: module 'labelme.utils' has no attribute 'label_colormap' Body: ### Provide environment information I am trying to convert my .json files containing my labels (semantic segmentation, polygonal bounding box information) to VOC segmentation format for visualizing the segmentation masks over the input pixels of my image. I then get the error: module 'labelme.utils' has no attribute 'label_colormap' I am following the example here for VOC segmentation format dataset creation: https://github.com/wkentaro/labelme/tree/main/examples/semantic_segmentation (base) pravin@AdminisatorsMBP SEGMENTATION % ./labelme2voc.py Images_To_Segment Images_To_Segment_voc5 --labels labels.txt Creating dataset: Images_To_Segment_voc5 class_names: ('_background_', 'vein') Saved class_names: Images_To_Segment_voc5/class_names.txt Traceback (most recent call last): File "/Users/pravin/Documents/SEGMENTATION/./labelme2voc.py", line 95, in <module> main() File "/Users/pravin/Documents/SEGMENTATION/./labelme2voc.py", line 56, in main colormap = labelme.utils.label_colormap(255) AttributeError: module 'labelme.utils' has no attribute 'label_colormap' (base) pravin@AdminisatorsMBP SEGMENTATION % pwd /Users/pravin/Documents/SEGMENTATION Please help. I tried: cd labelme pip install -e . But this did not fix the issue. ### What OS are you using? Mac OS Ventura 13.2.1 ### Describe the Bug I am trying to convert my .json files containing my labels (semantic segmentation, polygonal bounding box information) to VOC segmentation format for visualizing the segmentation masks over the input pixels of my image. I then get the error: module 'labelme.utils' has no attribute 'label_colormap' I am following the example here for VOC segmentation format dataset creation: https://github.com/wkentaro/labelme/tree/main/examples/semantic_segmentation (base) pravin@AdminisatorsMBP SEGMENTATION % ./labelme2voc.py Images_To_Segment Images_To_Segment_voc5 --labels labels.txt Creating dataset: Images_To_Segment_voc5 class_names: ('_background_', 'vein') Saved class_names: Images_To_Segment_voc5/class_names.txt Traceback (most recent call last): File "/Users/pravin/Documents/SEGMENTATION/./labelme2voc.py", line 95, in <module> main() File "/Users/pravin/Documents/SEGMENTATION/./labelme2voc.py", line 56, in main colormap = labelme.utils.label_colormap(255) AttributeError: module 'labelme.utils' has no attribute 'label_colormap' (base) pravin@AdminisatorsMBP SEGMENTATION % pwd /Users/pravin/Documents/SEGMENTATION Please help. I tried: cd labelme pip install -e . But this did not fix the issue. ### Expected Behavior _No response_ ### To Reproduce _No response_
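For what it's worth, newer labelme releases no longer ship `label_colormap` under `labelme.utils`, and the current `examples/semantic_segmentation/labelme2voc.py` obtains the colormap from `imgviz` instead. A hedged sketch of the substitution, assuming `imgviz` is installed (`pip install imgviz`); re-downloading the example script that matches the installed labelme version should have the same effect:

```python
import imgviz

# Replaces: colormap = labelme.utils.label_colormap(255)
colormap = imgviz.label_colormap()
```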
1medium
Title: In softmax layer of word2vec, do we use cosine similarity or dot product? Body: <!-- **IMPORTANT**: - Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports. - Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers. Github bug reports that do not include relevant information and context will be closed without an answer. Thanks! --> #### Problem description I have read the paper "Efficient Estimation of Word Representation in Vector Space". This article says that, we use **cosine similarity** in softmax layer of word2vec. But someone says that gensim uses **dot product** in softmax layer of word2vec while uses **cosine similarity** between word vectors which have been trained. I have not read the source code, and I wanted to confirm whether use dot product in softmax layer and use cosine similarity after trained.
1medium
Title: dns.asyncresolver timeout Body: Hi, While experimenting with `dns.asyncresolver` I encountered an error which occurs only on my Windows 10 machine and not on WSL or other Linux hosts. Running the following code throws a timeout exception: ``` import asyncio import dns.asyncresolver import dns.asyncbackend import dns.exception from typing import List async def asyncquery(target, type="A"): record_type = "A" resolver = dns.asyncresolver.Resolver() resolver.nameservers = ["1.1.1.1", "8.8.8.8"] resolver.timeout = 10.0 resolver.lifetime = 10.0 try: answers = await resolver.resolve(target, rdtype=record_type) records = [rdata for rdata in answers] except dns.resolver.NoAnswer: print(f'{target} query returned no answer') return None except dns.exception.Timeout: print(f'{target} query timed out') return None return records if __name__ == "__main__": target = "google.com" res = asyncio.run(asyncquery(target, "A")) if res: print(f"Results") for r in res: print(r) ``` I do see a valid response in Wireshark, but it doesn't seem to be captured by Python. ![תמונה](https://user-images.githubusercontent.com/60382890/108408722-0f8ea200-722e-11eb-90e2-1839ed09f7d1.png) The non-async resolver works just fine though 🤷‍♀️ Python version: 3.9.1 dnspython: 2.1.0 Any ideas what can cause this? Thanks for the help!
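One thing that may be worth ruling out: on Windows, Python 3.8+ defaults to the proactor event loop, and UDP handling there has been a source of exactly this "response visible in Wireshark but never delivered to Python" symptom with some libraries. Switching to the selector loop before `asyncio.run` is a quick test; `asyncquery` below refers to the function from the snippet above:

```python
import asyncio
import sys

if sys.platform == "win32":
    # Use the selector-based loop, whose datagram support is the more
    # battle-tested path for asyncio DNS clients on Windows.
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

res = asyncio.run(asyncquery("google.com", "A"))
```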
1medium
Title: [utils.py] IndexError: indices are out-of-bounds Body: Hi, I probably did something wrong but... I really don't find it. During the fit call, I have this exception: ``` X (301, 2) Y (301, 2) --------------------------------- Run id: JTHQIT Log directory: /tmp/tflearn_logs/ --------------------------------- Training samples: 301 Validation samples: 0 -- Exception in thread Thread-3: Traceback (most recent call last): File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/threading.py", line 914, in _bootstrap_inner self.run() File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/threading.py", line 862, in run self._target(*self._args, **self._kwargs) File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/tflearn/data_flow.py", line 186, in fill_feed_dict_queue data = self.retrieve_data(batch_ids) File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/tflearn/data_flow.py", line 221, in retrieve_data utils.slice_array(self.feed_dict[key], batch_ids) File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/tflearn/utils.py", line 187, in slice_array return X[start] File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/frame.py", line 2051, in __getitem__ return self._getitem_array(key) File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/frame.py", line 2096, in _getitem_array return self.take(indexer, axis=1, convert=True) File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/generic.py", line 1669, in take convert=True, verify=True) File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/internals.py", line 3932, in take indexer = maybe_convert_indices(indexer, n) File "/home/shazz/projects/anaconda/envs/tensorflow/lib/python3.5/site-packages/pandas/core/indexing.py", line 1872, in maybe_convert_indices raise IndexError("indices are out-of-bounds") IndexError: indices are out-of-bounds ``` I really started from the titanic example, I just took a different dataset (weight, height => sex), that I clean using pandas, that's the only difference. Code: ``` import tflearn import data_importer X, Y = data_importer.load_data(0) print("X", X.shape, "Y", Y.shape) # Build neural network net = tflearn.input_data(shape=[None, 2]) net = tflearn.fully_connected(net, 32) net = tflearn.fully_connected(net, 32) net = tflearn.fully_connected(net, 2, activation='softmax') net = tflearn.regression(net) # Define model model = tflearn.DNN(net) # Start training (apply gradient descent algorithm) model.fit(X, Y, n_epoch=10, batch_size=1, show_metric=True) ``` Data importer code and data are available here: https://github.com/shazz/tflearn-experiments/tree/master/cdc Any help welcome...... is a a bug in my code (probably) or in tflearn ? Thanks !
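This looks like pandas objects being fed straight into `model.fit`: tflearn's `slice_array` does `X[batch_ids]`, and a DataFrame interprets a list of integers as column selection (hence the `take(..., axis=1)` in the traceback and the out-of-bounds indices). A sketch of the likely fix, converting to NumPy before fitting; `data_importer` and the network definition are the ones from the snippet above:

```python
import numpy as np

X, Y = data_importer.load_data(0)

# tflearn expects plain ndarrays; integer indexing on a DataFrame selects
# columns, which is what triggers "indices are out-of-bounds".
X = np.asarray(X)
Y = np.asarray(Y)

model.fit(X, Y, n_epoch=10, batch_size=1, show_metric=True)
```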
1medium
Title: `spacy.cli.download` doesn't work for transformer model Body: <!-- NOTE: For questions or install related issues, please open a Discussion instead. --> ## How to reproduce the behaviour 1. Create a new virtual env (e.g. in `pyenv`). 2. `pip install spacy[transformers]==3.7.2`. 3. Start python REPL and: ```py import spacy spacy.cli.download('en_core_web_trf') nlp = spacy.load('en_core_web_trf') ``` 4. At this point you'll get the following error: > ValueError: [E002] Can't find factory for 'curated_transformer' for language English (en). This usually happens when spaCy calls `nlp.create_pipe` with a custom component name that's not registered on the current language class. If you're using a Transformer, make sure to install 'spacy-transformers'. If you're using a custom component, make sure you've added the decorator `@Language.component` (for function components) or `@Language.factory` (for class components). Apparently, `spacy.cli.download` installed `curated-transformers` as a dependency of `en-core-web-trf` but couldn't load it. Everything works fine if you reenter the REPL and try `spacy.load` again. ## Your Environment <!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.--> - **spaCy version:** 3.7.2 - **Platform:** macOS-14.0-arm64-arm-64bit - **Python version:** 3.9.16 - **Pipelines:** en_core_web_trf (3.7.2)
1medium
Title: Functions called in threads marked as missing in coverage report Body: # Summary I am testing a function which runs code in a thread. All lines in that function are marked as `missing` in the coverage report in Windows and Linux. ## Expected vs actual result Expected behaviour is that functions called in threads are not marked as `missing` in the coverage report, actual result is that they are. # Reproducer Here is a minimal example: ```python # root_dir/my_file.py import _thread as thread from time import sleep def foo(arr: list): arr.append(1) def bar(): arr = [] val = thread.start_new_thread(foo, (arr,)) sleep(5) return arr ``` ```python from my_file import bar def test_bar(): arr = bar() assert 1 in arr ``` The test passes, but the contents of `foo` are marked as `missing` in the coverage report. ## Versions `python==3.11.4` `pytest==8.3.2` `pytest-cov==5.0.0` ## Config My `pyproject.toml` looks like this: ```toml [tool.coverage.run] source = ["root_dir"] branch = false concurrency = ["thread"] [tool.coverage.report] sort = "cover" fail_under = 30 show_missing = true skip_covered = true exclude_lines = [ "pragma: no cover", "if __name__ == \"__main__\":", "@abstractmethod", "if TYPE_CHECKING:", ] ```
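A possible explanation: coverage's `thread` concurrency installs its trace function via `threading.settrace`, which only applies to threads started through the `threading` module, so threads created with the low-level `_thread.start_new_thread` bypass tracing. A sketch of the same function using `threading.Thread` (and joining instead of sleeping), under that assumption:

```python
# root_dir/my_file.py
import threading


def foo(arr: list):
    arr.append(1)


def bar():
    arr = []
    worker = threading.Thread(target=foo, args=(arr,))
    worker.start()
    worker.join()  # deterministic, instead of sleep(5)
    return arr
```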
1medium
Title: Unable to filter dates in Jupyter (hass pyscript kernel) Body: Hello. Thank you for a great integration! As electricity price got up I am working on a script that would control my heating depending on current electricity price. I have faced the problem that I am unable to filter out today's prices from the list. I have double checked next test case in regular `Python 3 (ipykernel)` where it works. Test case to reproduce filtering bug in `hass pyscript` kernel ``` import datetime import zoneinfo data = [{'start': datetime.datetime(2021, 10, 24, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 1, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.114}, {'start': datetime.datetime(2021, 10, 24, 1, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 2, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.068}, {'start': datetime.datetime(2021, 10, 24, 2, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.081}, {'start': datetime.datetime(2021, 10, 24, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 4, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.079}, {'start': datetime.datetime(2021, 10, 24, 4, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 5, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.079}, {'start': datetime.datetime(2021, 10, 24, 5, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 6, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.079}, {'start': datetime.datetime(2021, 10, 24, 6, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 7, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.085}, {'start': datetime.datetime(2021, 10, 24, 7, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.084}, {'start': datetime.datetime(2021, 10, 24, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 9, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.085}, {'start': datetime.datetime(2021, 10, 24, 9, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 10, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.089}, {'start': datetime.datetime(2021, 10, 24, 10, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 11, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.1}, {'start': datetime.datetime(2021, 10, 24, 11, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 12, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.099}, {'start': datetime.datetime(2021, 10, 24, 12, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 13, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.105}, {'start': datetime.datetime(2021, 10, 24, 13, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 14, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.088}, {'start': datetime.datetime(2021, 10, 24, 14, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': 
datetime.datetime(2021, 10, 24, 15, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.074}, {'start': datetime.datetime(2021, 10, 24, 15, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 16, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.072}, {'start': datetime.datetime(2021, 10, 24, 16, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 17, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.092}, {'start': datetime.datetime(2021, 10, 24, 17, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 18, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.111}, {'start': datetime.datetime(2021, 10, 24, 18, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 19, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.125}, {'start': datetime.datetime(2021, 10, 24, 19, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 20, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.179}, {'start': datetime.datetime(2021, 10, 24, 20, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 21, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.163}, {'start': datetime.datetime(2021, 10, 24, 21, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 22, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.116}, {'start': datetime.datetime(2021, 10, 24, 22, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 24, 23, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.09}, {'start': datetime.datetime(2021, 10, 24, 23, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.084}, {'start': datetime.datetime(2021, 10, 25, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 1, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.072}, {'start': datetime.datetime(2021, 10, 25, 1, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 2, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.086}, {'start': datetime.datetime(2021, 10, 25, 2, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.088}, {'start': datetime.datetime(2021, 10, 25, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 4, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.049}, {'start': datetime.datetime(2021, 10, 25, 4, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 5, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.051}, {'start': datetime.datetime(2021, 10, 25, 5, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 6, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.088}, {'start': datetime.datetime(2021, 10, 25, 6, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 7, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.096}, {'start': datetime.datetime(2021, 10, 25, 7, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': 
datetime.datetime(2021, 10, 25, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.117}, {'start': datetime.datetime(2021, 10, 25, 8, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 9, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.156}, {'start': datetime.datetime(2021, 10, 25, 9, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 10, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.156}, {'start': datetime.datetime(2021, 10, 25, 10, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 11, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.142}, {'start': datetime.datetime(2021, 10, 25, 11, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 12, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.134}, {'start': datetime.datetime(2021, 10, 25, 12, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 13, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.133}, {'start': datetime.datetime(2021, 10, 25, 13, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 14, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.133}, {'start': datetime.datetime(2021, 10, 25, 14, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 15, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.126}, {'start': datetime.datetime(2021, 10, 25, 15, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 16, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.133}, {'start': datetime.datetime(2021, 10, 25, 16, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 17, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.138}, {'start': datetime.datetime(2021, 10, 25, 17, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 18, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.154}, {'start': datetime.datetime(2021, 10, 25, 18, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 19, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.18}, {'start': datetime.datetime(2021, 10, 25, 19, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 20, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.247}, {'start': datetime.datetime(2021, 10, 25, 20, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 21, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.18}, {'start': datetime.datetime(2021, 10, 25, 21, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 22, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.156}, {'start': datetime.datetime(2021, 10, 25, 22, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 25, 23, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.122}, {'start': datetime.datetime(2021, 10, 25, 23, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'end': datetime.datetime(2021, 10, 26, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Tallinn')), 'value': 0.116}] def is_today(x): today = datetime.date.today() return today == x["start"].date() def 
filter_today(candidates): return filter(is_today, candidates) result = list(filter_today(data)) expected = 24 assert expected == len(result), f"Should be {expected}. Got {len(result)}" result ``` In the hass pyscript kernel I am getting 48 results, but in regular Python it produces 24.
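Since the discrepancy shows up with the lazy built-in `filter` object under the pyscript kernel, an equivalent formulation with a plain list comprehension avoids `filter` entirely; whether that sidesteps the kernel's behaviour is an assumption to verify.

```python
# Equivalent filtering without the lazy built-in filter(); a workaround sketch,
# not a confirmed fix for the pyscript kernel behaviour.
import datetime


def filter_today(candidates):
    today = datetime.date.today()
    return [item for item in candidates if item["start"].date() == today]


result = filter_today(data)  # `data` is the price list from the reproducer above
```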
1medium
Title: Option to not pretty print the response Body: I'm hitting `http://httpbin.org/headers` and am receiving this JSON back: ```json { "headers": { "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Encoding": "gzip, deflate", "Accept-Language": "en-US,en;q=0.5", "Host": "httpbin.org", "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0" } } ``` I'd like an option to receive the response back without pretty printing: ```json {"headers":{"Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8","Accept-Encoding":"gzip, deflate","Accept-Language":"en-US,en;q=0.5","Host":"httpbin.org","Upgrade-Insecure-Requests":"1","User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"}} ``` Maybe via a query parameter like `?pretty=false`?
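On the serving side, the difference between the two outputs is just whether the JSON is serialized with indentation; a minimal sketch of that distinction (the `?pretty=false` query parameter above is the requester's proposal, not an existing httpbin option):

```python
import json

payload = {"headers": {"Accept": "text/html", "Host": "httpbin.org"}}

pretty = json.dumps(payload, indent=2, sort_keys=True)   # current pretty-printed style
compact = json.dumps(payload, separators=(",", ":"))     # proposed compact style, no whitespace
```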
1medium
Title: [Feature] Will TypeScript with the Bun runtime be supported Body: It would be great if TypeScript were supported. One could then use libraries like Effect-TS to orchestrate the TypeScript code so it runs in parallel and concurrently.
1medium
Title: Base breadcrumb is always НАЧАЛО Body: Looks to have been hard-coded into the JS rather than using localization.
1medium
Title: '_xsrf' missing while logging in Body: <!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! --> ### Bug description Hello, I cloned the repository, signed in as admin, and got this error: 403 : Forbidden '_xsrf' argument missing from POST ![image](https://user-images.githubusercontent.com/94524043/233970886-845056ff-eb57-4694-90ff-40a8748caaa7.png) The login and password don't actually matter: if I register another user and use their credentials, or if I use wrong credentials, I still get this error. I launched the original JupyterHub image before and it works fine. A separate image with a single Jupyter notebook also works fine. Maybe the issue is that my OS is Windows. Is it possible to build the image on Windows? I appreciate any help.
1medium
Title: AKShare interface problem report | ak.stock_zh_a_hist() error Body: lib/python3.12/site-packages/akshare/stock_feature/stock_hist_em.py", line 1049, in stock_zh_a_hist "secid": f"{code_id_dict[symbol]}.{symbol}", ~~~~~~~~~~~~^^^^^^^^ KeyError: '300114' During handling of the above exception, another exception occurred: I have kept updating through several releases from 1.16.6 up to the latest 1.16.9, and the problem above still exists: the error is raised every time symbol 300114 is read. ak.stock_zh_a_hist( symbol=code, period=ASD.DATA_PERIOD, start_date=start_date.strftime('%Y%m%d'), end_date=end_date.strftime('%Y%m%d'), adjust=ASD.DATA_ADJUST )
1medium
Title: Support get_chunk_meta in RayExecutionContext Body: Currently `RayExecutionContext.get_chunk_meta` is not supported, which will make any operands relied on this API failed on tiling, such as when call `DataFrame.groupby`: ``` df = md.DataFrame(mt.random.rand(300, 4, chunk_size=100), columns=list("abcd")) df["a"], df["b"] = (df["a"] * 5).astype(int), (df["b"] * 2).astype(int) df.groupby(["a", "b"]).apply(lambda pdf: pdf.sum()).execute() ``` Will got following error: ``` ================================================================================== FAILURES ================================================================================== ________________________________________________________________________________ test_shuffle ________________________________________________________________________________ ray_start_regular_shared2 = RayContext(dashboard_url='127.0.0.1:8265', python_version='3.8.2', ray_version='1.12.0', ray_commit='f18fc31c756299095...127.0.0.1:55710', 'address': '127.0.0.1:55710', 'node_id': '38787319e06bc89f95d7600524069ed4dfba256068c917c261fe697f'}) create_cluster = (<mars.deploy.oscar.local.LocalClient object at 0x7fb22aaf38b0>, {}) @require_ray @pytest.mark.asyncio async def test_shuffle(ray_start_regular_shared2, create_cluster): df = md.DataFrame(mt.random.rand(300, 4, chunk_size=100), columns=list("abcd")) # `describe` contains multiple shuffle. df.describe().execute() arr = np.random.RandomState(0).rand(31, 27) t1 = mt.tensor(arr, chunk_size=10).reshape(27, 31) t1.op.extra_params["_reshape_with_shuffle"] = True np.testing.assert_almost_equal(arr.reshape(27, 31), t1.to_numpy()) np.testing.assert_equal(mt.bincount(mt.arange(5, 10)).to_numpy(), np.bincount(np.arange(5, 10))) # `RayExecutionContext.get_chunk_meta` not supported, skip dataframe.groupby df["a"], df["b"] = (df["a"] * 5).astype(int), (df["b"] * 2).astype(int) > df.groupby(["a", "b"]).apply(lambda pdf: pdf.sum()).execute() mars/deploy/oscar/tests/test_ray_dag.py:147: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ mars/core/entity/tileables.py:462: in execute result = self.data.execute(session=session, **kw) mars/core/entity/executable.py:144: in execute return execute(self, session=session, **kw) mars/deploy/oscar/session.py:1855: in execute return session.execute( mars/deploy/oscar/session.py:1649: in execute execution_info: ExecutionInfo = fut.result( ../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:439: in result return self.__get_result() ../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:388: in __get_result raise self._exception mars/deploy/oscar/session.py:1835: in _execute await execution_info mars/deploy/oscar/session.py:105: in wait return await self._aio_task mars/deploy/oscar/session.py:953: in _run_in_background raise task_result.error.with_traceback(task_result.traceback) mars/services/task/supervisor/processor.py:364: in run async for stage_args in self._iter_stage_chunk_graph(): mars/services/task/supervisor/processor.py:158: in _iter_stage_chunk_graph chunk_graph = await self._get_next_chunk_graph(chunk_graph_iter) mars/services/task/supervisor/processor.py:149: in _get_next_chunk_graph chunk_graph = await fut mars/lib/aio/_threads.py:36: in to_thread return await loop.run_in_executor(None, func_call) 
../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/thread.py:57: in run result = self.fn(*self.args, **self.kwargs) mars/services/task/supervisor/processor.py:144: in next_chunk_graph return next(chunk_graph_iter) mars/services/task/supervisor/preprocessor.py:194: in tile for chunk_graph in chunk_graph_builder.build(): mars/core/graph/builder/chunk.py:440: in build yield from self._build() mars/core/graph/builder/chunk.py:434: in _build graph = next(tile_iterator) mars/services/task/supervisor/preprocessor.py:74: in _iter_without_check to_update_tileables = self._iter() mars/core/graph/builder/chunk.py:317: in _iter self._tile( mars/core/graph/builder/chunk.py:211: in _tile need_process = next(tile_handler) mars/core/graph/builder/chunk.py:183: in _tile_handler tiled_tileables = yield from handler.tile(tiled_tileables) mars/core/entity/tileables.py:79: in tile tiled_result = yield from tile_handler(op) mars/dataframe/groupby/apply.py:151: in tile return [auto_merge_chunks(get_context(), ret)] mars/dataframe/utils.py:1333: in auto_merge_chunks metas = ctx.get_chunks_meta( mars/services/context.py:188: in get_chunks_meta return self._call(self._get_chunks_meta(data_keys, fields=fields, error=error)) mars/services/context.py:84: in _call return fut.result() ../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:439: in result return self.__get_result() ../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:388: in __get_result raise self._exception _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <mars.services.task.execution.ray.context.RayExecutionContext object at 0x7fb22b3485e0> data_keys = ['9f92dcd8196d32f25e43e33ba1f56e02_0', '223590f1093c414359f466c42a698006_0', 'dc80798f45b8ed8bb358a7b39b6d8170_0'], fields = ['memory_size'], error = 'ignore' async def _get_chunks_meta( self, data_keys: List[str], fields: List[str] = None, error: str = "raise" ) -> List[Dict]: # get chunks meta get_metas = [] for data_key in data_keys: meta = self._meta_api.get_chunk_meta.delay( data_key, fields=["bands"], error=error ) get_metas.append(meta) metas = await self._meta_api.get_chunk_meta.batch(*get_metas) api_to_keys_calls = defaultdict(lambda: (list(), list())) for data_key, meta in zip(data_keys, metas): > addr = meta["bands"][0][0] E TypeError: 'NoneType' object is not subscriptable mars/services/context.py:145: TypeError ``` We need to support get_chunk_meta for ray task backend.
1medium
Title: [Feature Request] Display the actual back-end endpoint of a request in the console Body: For Dash back-end requests, the browser console does not show the actual endpoint being called. For example, when a login request is made, the browser only shows generic endpoints such as `_dash-update-component` rather than the login endpoint. How can the actual endpoint of the request be displayed in the console?
1medium
Title: Enabling `trace on` creates erroneous traceback Body: <!--- Provide a general summary of the issue in the Title above --> <!--- If you have a question along the lines of "How do I do this Bash command in xonsh" please first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html If you don't find an answer there, please do open an issue! --> ## xonfig <details> ``` +------------------+-----------------+ | xonsh | 0.13.1 | | Python | 3.10.5 | | PLY | 3.11 | | have readline | True | | prompt toolkit | 3.0.30 | | shell type | prompt_toolkit | | history backend | json | | pygments | 2.13.0 | | on posix | True | | on linux | True | | distro | unknown | | on wsl | False | | on darwin | False | | on windows | False | | on cygwin | False | | on msys2 | False | | is superuser | False | | default encoding | utf-8 | | xonsh encoding | utf-8 | | encoding errors | surrogateescape | | xontrib | [] | | RC file | [] | +------------------+-----------------+ ``` </details> ## Expected Behavior <!--- Tell us what should happen --> No exception and traceback in the output ## Current Behavior <!--- Tell us what happens instead of the expected behavior --> <!--- If part of your bug report is a traceback, please first enter debug mode before triggering the error To enter debug mode, set the environment variable `XONSH_DEBUG=1` _before_ starting `xonsh`. On Linux and OSX, an easy way to to do this is to run `env XONSH_DEBUG=1 xonsh` --> When setting `trace on` an exception is raised and a traceback is produced ### Traceback (if applicable) <details> ``` Exception ignored in: <function _removeHandlerRef at 0x7f6872497f40> Traceback (most recent call last): File "/home/gn/anaconda3/envs/xonsh/lib/python3.10/logging/__init__.py", line 836, in _removeHandlerRef File "/home/gn/anaconda3/envs/xonsh/lib/python3.10/site-packages/xonsh/tracer.py", line 87, in trace TypeError: 'NoneType' object is not callable ``` </details> ## Steps to Reproduce <!--- Please try to write out a minimal reproducible snippet to trigger the bug, it will help us fix it! --> Any MWE with using `trace on` e.g. run the following `xonsh test.xsh` ```sh $cat test.xsh #!/usr/bin/env xonsh $XONSH_TRACE_SUBPROC = True trace on echo "OK" ``` ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
1medium
Title: Multiple encoded outputs into a single classification network Body: ![networksketch](https://user-images.githubusercontent.com/29553460/27308621-9efe609c-551d-11e7-81c7-0cbfc898f266.png) I have multiple data sets, distinguished by the feature length of their feature vectors. Regardless of the input feature size, I would like the network to classify the input into one of two classes. Hence, I would like to train an encoder for each data set (n encoders for n data sets) but pass the encoded outputs from all encoders into a single classification network, since the encoded outputs all have the same dimension. See the attached image for the network sketch. I have the code below so far, but 1. I am not really sure whether the current way of setting up the regression is valid, or whether I need to use a merge with some sort of 'mean' to average the classification results from the different data sets, and 2. I am also having trouble figuring out how to perform training in this situation. Any help will be greatly appreciated. ``` import h5py import numpy as np import tflearn STEPS_PER_EPOCH = 10 NUM_CLASSES = 2 train_Xs = [] train_Ys = [] batch_sizes = [] encodings = [] encode_hidden_1 = 500 classify_hidden_1 = 500 classify_hidden_2 = 100 for trainDataset in trainDatasets: train_file = h5py.File(trainDataset,'r') train_X = np.array(train_file['features']) train_Y = np.array(train_file['label']) train_Xs.append(train_X) # number of samples x number of features train_Ys.append(train_Y) # number of samples x 2 (two classes) nb_samples = train_X.shape[0] nb_features = train_X.shape[1] batch_size = int(nb_samples/STEPS_PER_EPOCH) # batch size is determined by the number of samples in each dataset batch_sizes.append(batch_size) encoder = tflearn.input_data(shape=[None, nb_features]) encoder = tflearn.fully_connected(encoder, encode_hidden_1) encodings.append(encoder) classifiers_1 = [] classifiers_2 = [] softmax_outputs = [] for encoding in encodings: classifier1 = tflearn.fully_connected(encoding, classify_hidden_1, activation='relu') classifiers_1.append(classifier1) classifier2 = tflearn.fully_connected(classifier1, classify_hidden_2, activation='relu') classifiers_2.append(classifier2) softmax = tflearn.fully_connected(classifier2, 2, activation='softmax') softmax_outputs.append(softmax) network = tflearn.regression(softmax_outputs, optimizer = 'momentum', loss='categorical_crossentropy', learning_rate=0.1) ```
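On question 1, one option is to merge the per-dataset softmax outputs into a single tensor before defining the regression; a hedged sketch assuming `tflearn.merge` offers a `'mean'` mode (the available merge modes should be checked against the installed tflearn version's `merge_ops` documentation):

```python
# Hedged sketch: average the per-encoder softmax outputs into one output tensor
# before the regression layer. Assumes tflearn.merge exists with a 'mean' mode.
merged = tflearn.merge(softmax_outputs, mode='mean')
network = tflearn.regression(merged, optimizer='momentum',
                             loss='categorical_crossentropy', learning_rate=0.1)
model = tflearn.DNN(network)
```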
2hard
Title: Database autogenerate migrations Body: * GINO version: 0.8.3 * Python version: 3.7.5 * asyncpg version: 0.19.0 * aiocontextvars version: 0.2.2 * PostgreSQL version: 10.10 ### Description How can a model's `__tablename__` be autogenerated? I've tried using `gino.declarative.declared_attr` to automatically generate the `__tablename__` attribute for a model, but got the error `KeyError: '<function BaseModel.__tablename__ at 0x7f21defb7950>'` ### What I Did I tried structuring the application with `starlette` here https://github.com/nsiregar/letsgo but was unable to autogenerate `__tablename__`.
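For reference, this is the pattern as it works in plain SQLAlchemy declarative models, which is presumably what was attempted here; whether GINO 0.8.3's `declared_attr` accepts the same usage is an assumption (the `KeyError` above suggests it may not):

```python
# SQLAlchemy-style pattern; `db = Gino()` as in the project setup. Treat as a
# sketch: GINO's custom declarative layer may handle declared_attr differently.
from gino.declarative import declared_attr


class BaseModel(db.Model):
    @declared_attr
    def __tablename__(cls):
        # derive the table name from the class name, e.g. UserProfile -> userprofile
        return cls.__name__.lower()
```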
1medium
Title: numpy version problem under Python 3.10.1 Body: This program requires a numpy version lower than 1.21, but Python 3.10 does not support numpy versions lower than 1.21. How can the program be run correctly under Python 3.10?
1medium
Title: Add a timer and more warnings/protections in imports Body: We should have some kind of timer to give users an idea of how long the import process is taking. If it goes beyond some period of time we should notify the user to contact a server administrator to run the import manually or something to that extent. Or we should come up with some way to have CTFd pause entirely until the import succeeds, maybe with a special config like `import_in_progress`.
1medium
Title: ldaseqmodel convergence Body: <!-- **IMPORTANT**: - Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports. - Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers. Github bug reports that do not include relevant information and context will be closed without an answer. Thanks! --> #### Problem description https://github.com/RaRe-Technologies/gensim/blob/742fb188dc6de03a42411510bf5b45e26574b328/gensim/models/ldaseqmodel.py#L303 This line in `ldaseqmodel.py` seems to prevent early termination of the algorithm. Setting `convergence` to 1 whenever the convergence criterion is met forces the loop to exhaust `em_max_iter`, so it cannot terminate earlier. #### Versions Please provide the output of: ```python import platform; print(platform.platform()) import sys; print("Python", sys.version) import struct; print("Bits", 8 * struct.calcsize("P")) import numpy; print("NumPy", numpy.__version__) import scipy; print("SciPy", scipy.__version__) import gensim; print("gensim", gensim.__version__) from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION) ``` gensim version 4.1.2
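The concern can be illustrated with a schematic EM loop (not the verbatim gensim code): if the converged branch re-assigns `convergence = 1.0`, the `while` condition can never become false before `em_max_iter` is reached.

```python
# Schematic of the EM loop shape under discussion (not the verbatim gensim code).
# The point: once the convergence test passes, re-assigning convergence = 1.0
# keeps the while-condition true, so the loop only stops at em_max_iter.
EM_CONVERGED = 1e-5
em_max_iter = 20
bounds = [-200.0, -150.0, -149.999, -149.999, -149.999]  # toy bound trajectory

bound, convergence, iteration = bounds[0], 1.0, 0
while iteration < em_max_iter and convergence > EM_CONVERGED:
    old_bound = bound
    bound = bounds[min(iteration + 1, len(bounds) - 1)]
    convergence = abs((bound - old_bound) / old_bound)
    if convergence < EM_CONVERGED:
        convergence = 1.0  # the questioned assignment: forces the loop to keep running
    iteration += 1

print(iteration)  # reaches em_max_iter (20) instead of stopping early (~3)
```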
1medium
Title: BUG: Error when using CyLP Body: After finally managing to successfully install CyLP, using it in phase_proc_lp (pyart.correct.phase_proc_lp(radar, 2.0, self_const = 12000.0, low_z=0.0, high_z=53.0, min_phidp=0.01, min_ncp=0.3, min_rhv=0.8, LP_solver='cylp_mp', proc=15)) does not work. The error seems to be "Error in `python': free(): invalid pointer: 0x00005597c77d6c98" A long list of messages and memory map is being printed out: [cylp_messages.txt](https://github.com/ARM-DOE/pyart/files/7589014/cylp_messages.txt) And then the script just hangs. I installed CyLP following these instructions https://github.com/coin-or/CyLP I tried also installing CyLP following these instructions provided in the Py-ART documentation https://arm-doe.github.io/pyart/setting_up_an_environment.html but unsuccessfully. I got what looked like compiling issues even after installing additional conda compilers. So the original CyLP installation instructions worked, but for some reason the phase_proc_lp function is not working still.
1medium
Title: store_parquet_metadata, path_ignore_suffix has conflicting types Body: *P.S. Please do not attach files as it's considered a security risk. Add code snippets directly in the message body as much as possible.* https://github.com/aws/aws-sdk-pandas/blob/main/awswrangler/s3/_write_parquet.py#L808 arg to store_parquet_metadata, path_ignore_suffix has conflicting types; [doc string](https://github.com/aws/aws-sdk-pandas/blob/main/awswrangler/s3/_write_parquet.py#L864) shows; Union[str, List[str], None] [typing](https://github.com/aws/aws-sdk-pandas/blob/main/awswrangler/s3/_write_parquet.py#L814) in the code shows; Optional[str] = None awswrangler v2 used to have Union[str, List[str], None] as the type. If the code is right, and doc string is stale, then how can we use several suffixes to ignore?
1medium
Title: [BUG] pygwalker failed to load the visual in Streamlit Body: **Describe the bug** After selecting a field on either the x or y axis, pygwalker showed the visual, but very quickly the selection was cleared, leaving a blank visual. **To Reproduce** Steps to reproduce the behavior: 1. Copy the pygwalker demo example for Streamlit 2. Run the code 3. Select any field onto an axis 4. See the error described above **Versions** - pygwalker version: pygwalker==0.4.9.4 - python version 3.12.4 - browser: latest Chrome - streamlit==1.37.1 <img width="1280" alt="image" src="https://github.com/user-attachments/assets/b2816582-ecb7-4b2d-a9cd-73e3fa88e07d">
1medium
Title: Filter Time 00:10:00 Body: Is it possible to filter on a time value like 00:10:00? **Example:** `+ [{'if': {'column_id': 'Duração no Status', 'filter': 'Duração no Status >= 00:10:00'}, 'backgroundColor': 'white', 'color': 'red', 'font-size': '1.1em'}]`
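A hedged way to get this effect in a Dash DataTable is to keep a numeric companion of the duration (e.g. seconds) and compare on that with `filter_query`, since numeric comparisons are the well-documented case; the visible column name below comes from the example, while the numeric column `duracao_segundos` and the exact syntax are assumptions to verify against the DataTable docs.

```python
# Sketch (assumption): compare on a hidden numeric "duration in seconds" column
# and keep the conditional styling on the visible HH:MM:SS column.
style_data_conditional = [
    {
        "if": {
            "filter_query": "{duracao_segundos} >= 600",  # 600 s == 00:10:00
            "column_id": "Duração no Status",
        },
        "backgroundColor": "white",
        "color": "red",
        "font-size": "1.1em",
    }
]
```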
1medium
Title: 配好环境之后尝试运行python tools/process_data.py --config demos/process_on_ray/configs/demo.yaml,遇到疑似卡住无任何日志输出的情况 Body: ### Before Asking 在提问之前 - [x] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully. 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引。 - [x] I have pulled the latest code of main branch to run again and the problem still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。 ### Search before asking 先搜索,再提问 - [x] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的问题。 ### Question 从源码进行安装了data-juicer(python==3.10.6, Ray==2.40.0, grpcio==1.71.0),当前设备是4*T4, 24Core, 512G。 ray start --head ray status python tools/process_data.py --config demos/process_on_ray/configs/demo.yaml 命令行界面在输出如下内容之后卡住: 2025-03-18 07:47:26 | INFO | data_juicer.core.ray_executor:56 - Initing Ray ... 2025-03-18 07:47:26,492 INFO worker.py:1636 -- Connecting to existing Ray cluster at address: 10.233.65.253:6379... 2025-03-18 07:47:26,504 INFO worker.py:1812 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265 log中也仅有如下内容: 2025-03-18 07:47:26.256 | INFO | data_juicer.config.config:config_backup:742 - Back up the input config file [/workspace/data-juicer/demos/process_on_ray/configs/demo.yaml] into the work_dir [/workspace/data-juicer/outputs/demo] 2025-03-18 07:47:26.277 | INFO | data_juicer.config.config:display_config:764 - Configuration table: 2025-03-18 07:47:26.477 | INFO | data_juicer.core.ray_executor:__init__:56 - Initing Ray ... 不知道是哪里的问题,十分困惑,还请解答。 ### Additional 额外信息 _No response_
1medium
Title: Python 3 incompatibility when using the map function Body: https://github.com/tflearn/tflearn/blob/master/tflearn/layers/core.py#L662 ` x = map(lambda t: tf.reshape(t, [-1, 1]+utils.get_incoming_shape(t)[1:]), x) ` causes a compatibility issue when using Python 3, because `map` returns a lazy iterator instead of a list. It should be changed to `x = list(map(lambda t: tf.reshape(t, [-1, 1]+utils.get_incoming_shape(t)[1:]), x))` Here is the exception I got: ... File "/anaconda/lib/python3.5/site-packages/tflearn/layers/core.py", line 654, in time_distributed return tf.concat(1, x) File "/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1077, in concat return identity(values[0], name=scope) File "/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1424, in identity result = _op_def_lib.apply_op("Identity", input=input, name=name) File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 493, in apply_op raise err File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 490, in apply_op preferred_dtype=default_dtype) File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 669, in convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function return constant(v, dtype=dtype, name=name) File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 165, in constant tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape)) File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 441, in make_tensor_proto tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values]) File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 441, in <listcomp> tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values]) File "/anaconda/lib/python3.5/site-packages/tensorflow/python/util/compat.py", line 65, in as_bytes (bytes_or_text,)) TypeError: Expected binary or unicode string, got <map object at 0x159ca0630>
0easy
Title: How to change the directory ".cache/huggingface/diffusers/models" to a file location of my choice Body:
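A hedged sketch of the two usual knobs: the per-call `cache_dir` argument on `from_pretrained`, and the `HF_HOME` environment variable that relocates the whole `~/.cache/huggingface` tree. The path and model id below are example values; behaviour is worth checking against the diffusers/huggingface_hub docs for the installed version.

```python
# Sketch; "/data/hf_cache" and the model id are example values, not requirements.
import os

# Option 2: relocate the whole ~/.cache/huggingface tree (set before any
# huggingface import, or export HF_HOME in the shell instead).
os.environ.setdefault("HF_HOME", "/data/hf_cache")

from diffusers import DiffusionPipeline

# Option 1: redirect just this download with the cache_dir argument.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    cache_dir="/data/hf_cache",
)
```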
3misc
Title: Mobile support? Body: I would love to use this, but on mobile the "add domain" field is off the screen; I have to go into landscape mode to view it, and then it looks really bad...
1medium
Title: When sys.argv is changed, pudb3 cannot enter the REPL Body: **Describe the bug** When sys.argv is changed, pudb3 cannot enter the REPL **To Reproduce** test.py ```python import sys argv = sys.argv sys.argv = [] print(1) print(2) print(3) sys.argv = argv print(4) print(5) print(6) ``` Run `pudb3 test.py`: while `sys.argv` is `[]`, pressing `!` cannot enter the REPL; once `sys.argv` is restored, pressing `!` can enter the REPL. **Expected behavior** Pressing `!` can enter the REPL. **Additional context** Can we back up `sys.argv` when the program starts, and temporarily restore it when `!` is pressed, to avoid this bug? ## Addendum: if some module in `sys.modules` (e.g. argparse) is changed, the same phenomenon also happens.
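The proposed backup/restore idea can be illustrated independently of pudb's internals; `start_repl` below is a hypothetical stand-in for however the debugger launches its shell, not pudb's actual code.

```python
# Sketch of the proposed idea, not pudb's implementation: snapshot sys.argv at
# debugger start-up and restore it only while the '!' REPL is active.
import sys

_saved_argv = list(sys.argv)  # taken once, at debugger start-up


def enter_repl(start_repl):
    """Temporarily restore sys.argv while the REPL runs."""
    current = sys.argv
    sys.argv = _saved_argv
    try:
        start_repl()            # hypothetical hook that launches the shell
    finally:
        sys.argv = current      # hand back whatever the debugged program set
```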
1medium
Title: ImportError: cannot import name 'ModelSchema' from 'ninja' Body: I have a project set up using django-ninja==0.13.2, Django 4.0 and django-ninja-auth. It used to work, but I haven't worked on it for a few months; now that I've come back and run it in the same venv, I'm getting this: `ImportError: cannot import name 'ModelSchema' from 'ninja'` Does anyone know why this could be?
1medium
Title: [FEATURE]: euclidean_l2 and cosine distance have identical ROC curves, so you could drop one of them in benchmarks. Body: ### Description Let u and v be unit vectors (i.e. you've already divided by the euclidean norm). Let n = len(u) Then the cosine distance is 1 - sum(u[i]*v[i] for i in range(n)). On the other hand, the square of the euclidean distance is sum((u[i] - v[i])**2 for i in range(n)) = sum(u[i]*u[i] + v[i]*v[i] - 2*u[i]*v[i] for i in range(n)) = sum(u[i]*u[i]) + sum(v[i]*v[i]) - 2*sum(u[i]*v[i]) = 2 - 2*sum(u[i]*v[i]), which is twice the cosine distance. So the two metrics provide the same information. ### Additional Info _No response_
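The identity can be checked numerically; a small verification sketch in generic NumPy (not DeepFace's own functions):

```python
# Numeric check of the identity: for L2-normalized vectors,
# squared euclidean distance == 2 * cosine distance, so the ranking (and ROC)
# induced by the two metrics is the same.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=128)
v = rng.normal(size=128)
u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)

cosine_distance = 1.0 - np.dot(u, v)
squared_euclidean = np.sum((u - v) ** 2)

assert np.isclose(squared_euclidean, 2.0 * cosine_distance)
```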
1medium
Title: EarlyStopping in the middle of an epoch Body: ### Description & Motivation I'm fitting a normalizing flow to learn the mapping between two embedding spaces. The first embedding space is sampled using the mapper of a pretrained stylegan and the second embedding space is derived by a pretrained covnet. I want to learn a mapper from the second embedding space back to the first one. Since the stylegan can produce infinite data, I'm using an iterable dataset across one single epoch that encompasses the entire training run. So, I want `EarlyStopping` to trigger in the middle of the epoch. Validation data isn't available. ### Pitch An option called `check_interval` should be added to `EarlyStopping`. If the value is a float, it is the fraction of an epoch between checks. If the value is an integer, it is the amount of training steps between checks. For the change to be non-breaking, its default should be `1.0`. ### Alternatives Currently, I'm passing the EarlyStopping callback to the LightningModule and manually calling the check at the end of each training batch: ```py def on_train_batch_end(self, outputs, batch, batch_idx): self.early_stopping_callback._run_early_stopping_check(self.trainer) ``` ### Additional context _No response_ cc @borda @carmocca @awaelchli
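A hedged sketch of the proposed `check_interval` semantics as a small subclass, reusing the same `_run_early_stopping_check` hook the workaround above already calls; that is a private Lightning API, so treat this as illustrative only (import from `pytorch_lightning.callbacks` on older versions).

```python
# Sketch of the proposed option as a subclass; relies on the private
# _run_early_stopping_check used in the workaround above.
from lightning.pytorch.callbacks import EarlyStopping


class MidEpochEarlyStopping(EarlyStopping):
    def __init__(self, *args, check_interval: int = 1000, **kwargs):
        super().__init__(*args, **kwargs)
        self.check_interval = check_interval  # training batches between checks

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        # run the usual early-stopping check mid-epoch, every check_interval batches;
        # assumes the monitored metric is logged on_step so it appears in callback_metrics
        if (batch_idx + 1) % self.check_interval == 0:
            self._run_early_stopping_check(trainer)
```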
1medium
Title: webui hyper-parameter Body: **Describe the issue**: This is how I define my own model and search space,It searches correctly for all parameters,However, in the hyperparameter curve of webui, the value of this parameter cannot be displayed. I want to know what the problem is and how to solve it? Looking forward to your reply, thanks very much! class MyModelSpace(ModelSpace): def __init__(self): super().__init__() input_size = 10 feature_1 = nni.choice('feature1', [64, 128, 256]) self.layer1 = MutableLinear(input_size, feature_1) self.dropout1 = MutableDropout(nni.choice('dropout1', [0.25, 0.5, 0.75])) # choose dropout rate from 0.25, 0.5 and 0.75 self.relu1 = LayerChoice([ ReLUWrapper(), TanhWrapper(), SigmoidWrapper(), ], label='relu1') self.skip_1 = MyModule(self.add_mutable(nni.choice('skip_connect_1', [0,1]))).chosen model_space = MyModelSpace() evaluator = FunctionalEvaluator(evaluate_model) exp = NasExperiment(model_space, evaluator, search_strategy) exp.config.max_trial_number = 10 exp.config.trial_concurrency = 2 exp.config.training_service.use_active_gpu = True exp.config.trial_gpu_number = 1 **Environment**: - NNI version: 3.0 - Training service (local|remote|pai|aml|etc): local - Client OS: - Server OS (for remote mode only): - Python version: 3.8 - PyTorch/TensorFlow version: 1.9 - Is conda/virtualenv/venv used?: conda - Is running in Docker?: no **Configuration**: - Experiment config (remember to remove secrets!): - Search space: **Log message**: - nnimanager.log:[2023-09-28 10:28:09] INFO (main) Start NNI manager [2023-09-28 10:28:09] INFO (RestServer) Starting REST server at port 8034, URL prefix: "/" [2023-09-28 10:28:09] INFO (RestServer) REST server started. [2023-09-28 10:28:09] INFO (NNIDataStore) Datastore initialization done [2023-09-28 10:28:09] INFO (NNIManager) Starting experiment: spg98lnc [2023-09-28 10:28:09] INFO (NNIManager) Setup training service... [2023-09-28 10:28:09] INFO (NNIManager) Setup tuner... 
[2023-09-28 10:28:09] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING [2023-09-28 10:28:10] INFO (NNIManager) Add event listeners [2023-09-28 10:28:10] INFO (LocalV3.local) Start [2023-09-28 10:28:10] INFO (NNIManager) NNIManager received command from dispatcher: ID, [2023-09-28 10:28:10] INFO (NNIManager) Updated search space [object Object] [2023-09-28 10:28:10] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameters": {"status": "frozen", "model_symbol": {"__nni_type__": "bytes:<large base64 cloudpickle payload elided>"}, "model_args": [], "model_kwargs": {}, "evaluator": {"__symbol__": "path:nni.nas.evaluator.functional.FunctionalEvaluator", "__kwargs__": {"function": {"__nni_type__": "bytes:<large base64 cloudpickle payload elided>" ..... [2023-09-28 10:28:21] INFO (NNIManager) Trial job OaXfE status changed from RUNNING to SUCCEEDED [2023-09-28 10:28:21] INFO (NNIManager) Trial job BXIkl status changed from RUNNING to SUCCEEDED .............. [2023-09-28 10:28:59] INFO (NNIManager) Change NNIManager status from: NO_MORE_TRIAL to: DONE [2023-09-28 10:28:59] INFO (NNIManager) Experiment done. [2023-09-28 10:28:59] INFO (ShutdownManager) Initiate shutdown: REST request [2023-09-28 10:28:59] INFO (RestServer) Stopping REST server. [2023-09-28 10:28:59] INFO (NNIManager) Change NNIManager status from: DONE to: STOPPING [2023-09-28 10:28:59] INFO (NNIManager) Stopping experiment, cleaning up ... [2023-09-28 10:28:59] INFO (TaskScheduler) Release whole experiment spg98lnc [2023-09-28 10:28:59] INFO (LocalV3.local) All trials stopped [2023-09-28 10:28:59] INFO (RestServer) REST server stopped. [2023-09-28 10:28:59] INFO (NNIManager) Change NNIManager status from: STOPPING to: STOPPED [2023-09-28 10:28:59] INFO (NNIManager) Experiment stopped. [2023-09-28 10:28:59] INFO (NNITensorboardManager) Forced stopping all tensorboard task. [2023-09-28 10:28:59] INFO (NNITensorboardManager) All tensorboard task stopped. [2023-09-28 10:28:59] INFO (NNITensorboardManager) Tensorboard manager stopped. [2023-09-28 10:28:59] INFO (ShutdownManager) Shutdown complete. - dispatcher.log: - nnictl stdout and stderr: <!-- Where can you find the log files: LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout --> **How to reproduce it?**:
1medium
Title: OTC fund fund_nav() call is missing net asset value and other information Body: The code is as follows ``` print(pro.fund_nav(ts_code="000171.OF")) ``` The result is as follows ``` ts_code ann_date end_date unit_nav accum_nav accum_div net_asset total_netasset adj_nav update_flag 0 000171.OF 20200815 20200814 1.959 1.959 None NaN NaN 1.959 0 1 000171.OF 20200814 20200813 1.953 1.953 None NaN NaN 1.953 0 2 000171.OF 20200813 20200812 1.953 1.953 None NaN NaN 1.953 0 3 000171.OF 20200812 20200811 1.961 1.961 None NaN NaN 1.961 0 4 000171.OF 20200811 20200810 1.964 1.964 None NaN NaN 1.964 0 ... ... ... ... ... ... ... ... ... ... ... 1696 000171.OF 20130917 20130916 1.007 1.007 None NaN NaN 1.007 0 1697 000171.OF 20130914 20130913 1.005 1.005 None 6.211282e+08 6.211282e+08 1.005 0 1698 000171.OF 20130907 20130906 1.004 1.004 None 6.205057e+08 6.205057e+08 1.004 0 1699 000171.OF 20130831 20130830 0.997 0.997 None 6.163435e+08 6.163435e+08 0.997 0 1700 000171.OF 20130824 20130823 1.000 1.000 None 6.182839e+08 6.182839e+08 1.000 0 ``` The most recent rows are missing the net_asset and total_netasset information. https://tushare.pro id: 386529
1medium
Title: uwsgi background callbacks progress hangs Body: dash 2.11.1 dash-core-components 2.0.0 dash-html-components 2.0.0 dash-table 5.0.0 OS: debian 12.0 uwsgi 2.0.21 chrome 114.0.5735.198 (Official Build) (64-bit) try to use uwsgi as app server: `uwsgi --http-socket :8080 --master --workers 4 -w dtest2:wsgi_app` and test background callbacks with progress ```python #!/usr/bin/python3 # -*- coding: utf-8 -*- import os import time from dash import Dash, html, dcc, Input, Output, callback, DiskcacheManager import diskcache import plotly.express as px import plotly.io as pio import pandas as pd dcache = diskcache.Cache("./cache") bcm = DiskcacheManager(dcache) app = Dash(__name__, title='dash test2', background_callback_manager=bcm) wsgi_app = app.server app.layout = html.Div([ html.Div( [ html.Div( [ html.P(id="paragraph1", children=["Button not clicked"]), html.Progress(id="progress_bar1", value="0"), ] ), html.Button(id="button_run1", children="Run Job!"), html.Button(id="button_cancel1", children="Cancel Running Job!"), ] ), html.Div( [ html.Div( [ html.P(id="paragraph2", children=["Button not clicked"]), html.Progress(id="progress_bar2", value="0"), ] ), html.Button(id="button_run2", children="Run Job!"), html.Button(id="button_cancel2", children="Cancel Running Job!"), ] ) ] ) def long_task(set_progress, n_clicks): total = 10 for i in range(total + 1): set_progress((str(i), str(total))) time.sleep(1) pid = os.getpid() return f"Clicked {n_clicks} times, pid {pid}" @callback( output=Output("paragraph1", "children"), inputs=Input("button_run1", "n_clicks"), running=[ (Output("button_run1", "disabled"), True, False), (Output("button_cancel1", "disabled"), False, True), ( Output("paragraph1", "style"), {"visibility": "hidden"}, {"visibility": "visible"}, ), ( Output("progress_bar1", "style"), {"visibility": "visible"}, {"visibility": "hidden"}, ), ( Output("progress_bar1", "value"), '0', '0', ), ], cancel=Input("button_cancel1", "n_clicks"), progress=[Output("progress_bar1", "value"), Output("progress_bar1", "max")], background=True, prevent_initial_call=True ) def long_task_calback1(set_progress, n_clicks): return long_task(set_progress, n_clicks) @callback( output=Output("paragraph2", "children"), inputs=Input("button_run2", "n_clicks"), running=[ (Output("button_run2", "disabled"), True, False), (Output("button_cancel2", "disabled"), False, True), ( Output("paragraph2", "style"), {"visibility": "hidden"}, {"visibility": "visible"}, ), ( Output("progress_bar2", "style"), {"visibility": "visible"}, {"visibility": "hidden"}, ), ( Output("progress_bar2", "value"), '0', '0', ), ], cancel=Input("button_cancel2", "n_clicks"), progress=[Output("progress_bar2", "value"), Output("progress_bar2", "max")], background=True, prevent_initial_call=True ) def long_task_calback2(set_progress, n_clicks): return long_task(set_progress, n_clicks) if __name__ == '__main__': app.run(debug=True) ``` look like background callbacks run in background as expected but no update progress performed and `http://127.0.0.1:8080/_dash-update-component` hangs in Pending state why uwsgi? lot more options than in gunicorn's with `gunicorn dtest2:wsgi_app -b :8081` everything works as expected not sure this is bug or feature
2hard