text (string, lengths 20-57.3k) | labels (class label, 4 classes)
---|---|
Title: [BUG] The learning rate scheduler is being ignored in the first optimization step.
Body: **Describe the bug**
It appears that no matter how the learning rate scheduler is configured, during the first training step a learning rate of `1e-3` is always used. My understanding is that this happens because `lr_scheduler.step()` is always called after `optimizer.step()` and there is no initialization for the initial step learning rate. I'm currently using `optimizer.set_lr(0)` explicitly to work around this issue. It can be quite detrimental to full fine-tuning of LLMs (as opposed to just LoRA/PEFT training) as it throws the model off completely in the first training step.
Is this intentional? If so, should this be fixed inside DeepSpeed, or are we supposed to just use `optimizer.set_lr(0)` as I'm currently doing? In either case, I think this should be documented somewhere, as it is somewhat surprising behavior.
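For reference, a minimal sketch of the workaround (hedged: the stand-in model and config path are assumptions, not from the original report):
```python
import deepspeed
import torch

model = torch.nn.Linear(8, 2)  # stand-in model, not the real fine-tuning target
model_engine, optimizer, _, lr_scheduler = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",  # assumed path to the config shown under "To Reproduce"
)

# Workaround: start the very first step from lr=0 so warmup begins at
# `warmup_min_lr` instead of the optimizer's default 1e-3.
optimizer.set_lr(0)
```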
**To Reproduce**
It's pretty easy to reproduce with any configuration you already have. I was using this when debugging and finding this out:
```
"optimizer": {
"type": "AdamW",
"params": {
"betas": [0.9, 0.98],
"eps": 1e-6,
"weight_decay": 3e-6,
},
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 1e-4,
"warmup_num_steps": 100,
"total_num_steps": 1000,
},
}
``` | 2hard
|
Title: Error when using session- or module-scoped fixtures and fixture unions.
Body: While trying to solve this: https://stackoverflow.com/questions/46909275/parametrizing-tests-depending-of-also-parametrized-values-in-pytest
I found out that my proposal with union fixtures was not working correctly. | 1medium
|
Title: Add ability to override the SocialLoginSerializer
Body: In my application, I override the RegisterSerializer to do some further validation (I require users to have an invite before signing up). I'd like to do the same thing with social logins, but currently there is no way to override the SocialLoginSerializer in my application settings the way I can with the RegisterSerializer.
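A hedged sketch of what this could look like, mirroring the existing register override (the module paths are hypothetical, and the second setting is exactly what this issue requests, so it does not exist yet):
```python
# settings.py: the register override that exists today
REST_AUTH_REGISTER_SERIALIZERS = {
    "REGISTER_SERIALIZER": "myapp.serializers.InviteRegisterSerializer",
}

# Hypothetical equivalent for social logins (requested by this issue)
REST_AUTH_SERIALIZERS = {
    "SOCIAL_LOGIN_SERIALIZER": "myapp.serializers.InviteSocialLoginSerializer",
}
```
| 1medium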
|
Title: Serialized relationship id should be string
Body: According to [JSON:API](https://jsonapi.org/format/#document-resource-object-identification), the `id` field should always be a string, but this isn't the case when fetching relationships through `SqlalchemyDataLayer`.
I suspect the problem originates here: https://github.com/miLibris/flask-rest-jsonapi/blob/b4cb5576b75ffaf6463abb8952edc8839b03463f/flask_rest_jsonapi/data_layers/alchemy.py#L256
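A hedged sketch of the kind of one-line fix meant here (attribute names are illustrative; the exact code at the linked line may differ):
```python
# Coerce the related object's primary key to str, per the JSON:API spec.
related_id = getattr(related_object, "id")  # illustrative lookup
relationship = {"type": related_type, "id": str(related_id)}
```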
@akira-dev I would be happy to contribute with a pull request if you consider this correct. | 1medium
|
Title: Bahdanau attention (attention mechanisms) TensorFlow notebook build fails in the Turkish version.
Body: Currently, the PR-63 (Fixing more typos) build fails only for the TensorFlow notebook. I could not figure out the reason. @AnirudhDagar, I would be glad if you could take a look. | 1medium
|
Title: A small suggestion about paths
Body: First of all, thank you for sharing this great work!
One small suggestion: when composing paths, you could use the os.path.join() function; that way there is no need to check which operating system is running, and the code looks cleaner.
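A minimal illustration of the suggestion (the path components are hypothetical):
```python
import os

# os.path.join picks the correct separator for the current OS automatically.
config_path = os.path.join("data", "models", "config.json")
```
| 0easy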
|
Title: Add support for new kinds of nerves: n-simplices and min-intersection.
Body: Now that @michiexile abstracted a nerve class, we can include support for other nerves. Most obvious are a min-intersection nerve and an n-simplices nerve. Though I believe we will also need a custom nerve for multi-mapper and we could experiment with a multi-nerve.
- [x] min-intersection
- [ ] n-simplices
## min-intersection
It would be nice to set a minimum number of points each cluster has to intersect on to be considered connected.
```
edges = nerve({"a": [1, 2, 3], "b": [2, 3, 4], "c": [4, 5, 6]}, min_intersection=2)
# -> ["a", "b"] in edges and ["b", "c"] not in edges
```
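A hedged sketch of the idea (illustrative only, not the actual kepler-mapper nerve API):
```python
from itertools import combinations

def nerve(clusters, min_intersection=1):
    # Connect two clusters only if they share at least
    # `min_intersection` member points.
    edges = []
    for (a, pts_a), (b, pts_b) in combinations(clusters.items(), 2):
        if len(set(pts_a) & set(pts_b)) >= min_intersection:
            edges.append([a, b])
    return edges
```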
## n-simplices
It would be nice to have a general n-simplex nerve that constructs simplices of order n or less.
Before building this, is there an established format for simplices? Are there any libraries that we could use?
Most promising simplicial complex libraries found in the wild:
- [pycomplex](https://github.com/EelcoHoogendoorn/pycomplex)
- [simplicial](https://github.com/simoninireland/simplicial)
I'd prefer not to reinvent the wheel but I think a strong python simplicial complex library could be useful to the community.
| 1medium
|
Title: Can the downloaded Baidu pre-trained model be used as a checkpoint to continue training?
Body: The PyTorch version of BERT provides a language-model fine-tuning method, that is, fine-tuning the pre-trained model at the language-model level, which can also be understood as continuing pre-training on top of the pre-trained model. For ERNIE I have only found instructions for downloading the model and fine-tuning it directly; I would like to know whether there is a similar language-model fine-tuning method. | 1medium
|
Title: Unable to get http://storage.insightface.ai/files/models/buffalo_l.zip
Body: I think http://storage.insightface.ai is down. Are there any alternative links from which I can download the model and place it in `/root/.insightface/models/` manually?
Code:
```
from insightface.app import FaceAnalysis
app = FaceAnalysis()
```
Output:
```
download_path: /root/.insightface/models/buffalo_l
Downloading /root/.insightface/models/buffalo_l.zip from http://storage.insightface.ai/files/models/buffalo_l.zip...
---------------------------------------------------------------------------
TimeoutError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/urllib3/connection.py in _new_conn(self)
158 conn = connection.create_connection(
--> 159 (self._dns_host, self.port), self.timeout, **extra_kw)
160
23 frames
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fc7b71ce910>: Failed to establish a new connection: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
MaxRetryError: HTTPConnectionPool(host='storage.insightface.ai', port=80): Max retries exceeded with url: /files/models/buffalo_l.zip (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc7b71ce910>: Failed to establish a new connection: [Errno 110] Connection timed out'))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
514 raise SSLError(e, request=request)
515
--> 516 raise ConnectionError(e, request=request)
517
518 except ClosedPoolError as e:
ConnectionError: HTTPConnectionPool(host='storage.insightface.ai', port=80): Max retries exceeded with url: /files/models/buffalo_l.zip (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc7b71ce910>: Failed to establish a new connection: [Errno 110] Connection timed out'))
``` | 1medium
|
Title: Is it safe to say the selection of thresholds will not affect the training process at all?
Body: Thanks so much for this great framework.
After reading the code, I feel that the threshold selection only comes into play during evaluate(), which runs after the training epoch. Can I say the choice of threshold has no impact on training? | 1medium
|
Title: On MacOS ARM some tests fail because some functions return different results
Body: From https://github.com/biolab/orange3/actions/runs/9317726517/job/25648607002
```
======================================================================
FAIL: test_max_features_reg (Orange.tests.test_random_forest.RandomForestTest.test_max_features_reg)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/runner/work/orange3/orange3/.tox/orange-released/lib/python3.11/site-packages/Orange/tests/test_random_forest.py", line 134, in test_max_features_reg
self.assertGreater(diff, 1.2)
AssertionError: 0.030000000000001137 not greater than 1.2
======================================================================
FAIL: test_info (Orange.widgets.evaluate.tests.test_owpermutationplot.TestOWPermutationPlot.test_info)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/runner/work/orange3/orange3/.tox/orange-released/lib/python3.11/site-packages/Orange/widgets/evaluate/tests/test_owpermutationplot.py", line 94, in test_info
self.assertIn("0.5021", self.widget._info.text())
AssertionError: '0.5021' not found in '\n<table width=100% align="center" style="font-size:11px">\n <tr style="background:#fefefe">\n <th style="background:transparent;padding: 2px 4px"></th>\n <th style="background:transparent;padding: 2px 4px">Corr = 0</th>\n <th style="background:transparent;padding: 2px 4px">Corr = 100</th>\n </tr>\n <tr style="background:#fefefe">\n <th style="padding: 2px 4px" align=right>Train</th>\n <td style="padding: 2px 4px" align=right>0.9980</td>\n <td style="padding: 2px 4px" align=right>0.9996</td>\n </tr>\n <tr style="background:#fefefe">\n <th style="padding: 2px 4px" align=right>CV</th>\n <td style="padding: 2px 4px" align=right>0.4978</td>\n <td style="padding: 2px 4px" align=right>0.8951</td>\n </tr>\n</table>\n '
``` | 2hard
|
Title: DOC: Pivot() example call incorrectly used and would give "error: duplicate index"
Body: ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
doc/source/user_guide/reshaping.rst
### Documentation problem
The table given as an example for pivot() is wrong and can't be used: it returns a "duplicate index" error because there are duplicate values in the column given for the "index" parameter.
<img width="772" alt="Image" src="https://github.com/user-attachments/assets/d7198715-b74e-4c35-88f8-5a01ee8eefe4" />
### Suggested fix for documentation
The "foo" column must contain unique values | 0easy
|
Title: Need a wheel for python 3.13
Body: When I am trying to install gensim with python 3.13, scipy is having trouble installing because it is trying to compile a new wheel but is unable to. | 1medium
|
Title: cose-bilkent does not work
Body: A few issues with this:
https://github.com/plotly/dash-cytoscape/blob/master/demos/usage-cose-bilkent-layout.py
throws an error about the conflict between serving locally and external scripts
You have set your config to `serve_locally=True` but A local version of https://cdn.rawgit.com/cytoscape/cytoscape.js-cose-bilkent/d810281d/cytoscape-cose-bilkent.js is not available.
If you added this file with `app.scripts.append_script` or `app.css.append_css`, use `external_scripts` or `external_stylesheets` instead.
See https://dash.plotly.com/external-resources
It also seems like 'close-bilkent' was built into the code somewhere instead of 'cose-bilkent', as an error pops up saying it is looking for 'close-bilkent':
Invalid argument `layout.name` passed into Cytoscape with ID "cytoscape".
Expected one of ["random","preset","circle","concentric","grid","breadthfirst","cose","close-bilkent","cola","euler","spread","dagre","klay"].
The demo works if you replace the external script loading with
cyto.load_extra_layouts()
and then replace all instances of 'close-bilkent' in the local cyto files with 'cose-bilkent', as sketched below.
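A hedged sketch of that workaround (the `elements` here are placeholders, not the demo's data):
```python
import dash_cytoscape as cyto

cyto.load_extra_layouts()  # bundles cose-bilkent locally; no external script needed

elements = [
    {"data": {"id": "a"}},
    {"data": {"id": "b"}},
    {"data": {"source": "a", "target": "b"}},
]

graph = cyto.Cytoscape(
    id="cytoscape",
    layout={"name": "cose-bilkent"},  # 'cose-bilkent', not 'close-bilkent'
    elements=elements,
)
```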
| 1medium
|
Title: Move /test folder
Body: Our `/test` folder has reached 24 MB, and we need to move it outside the main `bbot` folder so it's not included in the PyPI package. | 1medium
|
Title: Two warning tests fail
Body: Running `pytest` on 1.3.13 I get the following two failed tests:
```
_____________________________________________________________________________________________________________________ test_odd_num_levels_raises_warning ______________________________________________________________________________________________________________________
setup_param_file_with_groups = '/tmp/tmphtdhaumg'
def test_odd_num_levels_raises_warning(setup_param_file_with_groups):
parameter_file = setup_param_file_with_groups
problem = read_param_file(parameter_file)
with warnings.catch_warnings(record=True) as w:
# Cause all warnings to always be triggered.
warnings.simplefilter("always")
# Trigger a warning.
sample(problem, 10, num_levels=3)
# Verify some things
> assert len(w) == 1
E assert 2 == 1
E +2
E -1
tests/sample/morris/test_morris.py:55: AssertionError
_______________________________________________________________________________________________________________________ test_even_num_levels_no_warning _______________________________________________________________________________________________________________________
setup_param_file_with_groups = '/tmp/tmpiur667kp'
def test_even_num_levels_no_warning(setup_param_file_with_groups):
parameter_file = setup_param_file_with_groups
problem = read_param_file(parameter_file)
with warnings.catch_warnings(record=True) as w:
# Cause all warnings to always be triggered.
warnings.simplefilter("always")
# Trigger a warning.
sample(problem, 10, num_levels=4)
# Verify some things
> assert len(w) == 0
E assert 1 == 0
E +1
E -0
tests/sample/morris/test_morris.py:70: AssertionError
``` | 1medium
|
Title: [BUG] Opening and ending tag mismatch: meta line 3 and head
Body: - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
```py
from io import StringIO

from playwright.sync_api import sync_playwright

from rich import text
from rich.console import Console

# `install`, `fileName` and `outFile` are defined elsewhere in the full
# implementation linked below; `install` ensures the browser binary exists.
ansiText = "..."
width = 80
console = Console(width=width, record=True, file=StringIO())
richText = text.Text.from_ansi(ansiText)
console.print(richText)
console.height = len(richText.wrap(console, width=width))
console.save_html(fileName)
with sync_playwright() as p:
install(p.chromium)
browser = p.chromium.launch()
page = browser.new_page()
page.set_viewport_size({"width": console.width * 3, "height": console.height * 3})
page.goto(f"file:///{fileName}")
page.screenshot(path=outFile, omit_background=True)
browser.close()
```
Looks to be that the meta tag is not closed
<meta charset="UTF-8">

Full implementation at https://github.com/FHPythonUtils/AnsiToImg/blob/master/ansitoimg/render.py
**Platform**
<details>
<summary>Click to expand</summary>
```
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=122 ColorSystem.TRUECOLOR> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 9 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=122, height=9), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=122, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=9, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=122, height=9) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 122 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭── <class 'rich._windows.WindowsConsoleFeatures'> ───╮
│ Windows features available. │
│ │
│ ╭─────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=True, truecolor=True) │ │
│ ╰─────────────────────────────────────────────────╯ │
│ │
│ truecolor = True │
│ vt = True │
╰─────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': None, │
│ 'COLORTERM': 'truecolor', │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': 'vscode', │
│ 'COLUMNS': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Windows"
rich==12.6.0
```
</details>
| 1medium
|
Title: latest version of requests is not backwards compatible
Body:
## Expected Result
`requests.get(URL, headers=headers, params=payload)` returns the response.
## Actual Result
An error, because the new version of requests does not accept three parameters, only two. This causes every Python program that calls requests with three parameters to crash and need to be reworked.
## Reproduction Steps
```python
import requests
r = requests.get(URL, headers=headers, params=payload)
```
## System Information
$ python -m requests.help
```
{
"chardet": {
"version": "3.0.4"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "2.8"
},
"implementation": {
"name": "CPython",
"version": "3.8.10"
},
"platform": {
"release": "5.11.0-27-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.22.0"
},
"system_ssl": {
"version": "1010106f"
},
"urllib3": {
"version": "1.25.8"
},
"using_pyopenssl": false
}
```
| 1medium
|
Title: Specify template location for ploomber scaffold
Body: It would be nice to specify the template location, whether a URL or a local directory, when calling ploomber scaffold:
```
ploomber scaffold --from-template https://github.com/user/my-template
ploomber scaffold --from-template ~/custom_template
```
It'd also be nice to be able to set the default template for a user / all users (when using a system install of Python), like:
```
ploomber scaffold --set-template https://github.com/user/my-template
```
| 1medium
|
Title: Ai voice cloning
Body: | 2hard
|
Title: Could you provide the corpus format for training named entities with HMM and Viterbi?
Body:
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
  - [Home page docs](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer.
* I understand that the open-source community is a volunteer community built on shared interest and assumes no responsibility or obligation. I will be polite and thank everyone who helps me.
* [x] I put an x in the brackets to confirm the items above.
## Version
The current latest version is: 1.7.1
The version I am using is: 1.7.1
## My question
Hi hankcs, with your help I have become familiar with the perceptron workflow, so now I would like to try training new entities with HMM and Viterbi.
I followed the wiki page https://github.com/hankcs/HanLP/wiki/%E8%A7%92%E8%89%B2%E6%A0%87%E6%B3%A8%E5%91%BD%E5%90%8D%E5%AE%9E%E4%BD%93 (role tagging for named entity recognition) and wrote an analogous implementation,
but the corpus files referenced in TestNRDctionaryMaker.java, "data/dictionary/2014_dictionary.txt" and "D:\JavaProjects\CorpusToolBox\data\2014\", cannot be found anywhere. Could you provide the format of the corpus required there, so that I can prepare corpus for the new entities?
I downloaded the latest data archive from the releases and did not find the corresponding files, and the page referenced in https://github.com/hankcs/HanLP/issues/311 no longer exists either.
Thanks for your time!
| 1medium
|
Title: Add support for setting httpx client options
Body: ## Problem
While running the `postgresql` integration tests on a slow connection I ran into a lot of `httpx.ReadTimeout` errors.
## Suggested solution
We should either increase the timeout option or allow users to set it themselves, maybe something like:
```py
client = Client(
http={
'timeout': 10,
},
)
```
| 1medium
|
Title: Add possibility to write dictionary-style object
Body: It will be useful for backward compatibility:
- [v1 example](https://github.com/influxdata/influxdb-python#examples)
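For reference, the v1 dictionary-style point format from the linked example looks like this:
```python
# v1-style dictionary point (per the influxdb-python README linked above)
json_body = [
    {
        "measurement": "cpu_load_short",
        "tags": {"host": "server01", "region": "us-west"},
        "time": "2009-11-10T23:00:00Z",
        "fields": {"value": 0.64},
    }
]
```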
| 1medium
|
Title: assert_match does not fail assertion
Body: **Describe the bug**
`snapshot.assert_match(ANY)` always passes
**To Reproduce**
Steps to reproduce the behavior:
1. Replace with `snapshot.assert_match(None)`
https://github.com/tophat/syrupy/blob/a5c46b15942e4d75c9470b24f3e2bfc9fa661087/tests/test_snapshots.py#L14
2. Run command `inv test`
3. See that test does not fail
**Expected behavior**
Should fail with the same error as `assert None == snapshot`
| 1medium
|
Title: Make convert_to_parquet CLI command create script branch
Body: As proposed by @severo, maybe we should add this functionality as well to the CLI command to convert a script-dataset to Parquet. See: https://github.com/huggingface/datasets/pull/6795#discussion_r1562819168
> When providing support, we sometimes suggest that users store their script in a script branch. What do you think of this alternative to deleting the files? | 1medium
|
Title: Profiling for HF Filesystem shows there are easy performance gains to be made
Body: ### Describe the bug
# Let's make it faster
First, some evidence...

Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106 seconds long.
See? It's pretty slow.
What is resolve pattern doing?
```
resolve_pattern called with **/train/** and hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543
resolve_pattern took 20.815081119537354 seconds
```
Makes sense. How to improve it?
## Bigger project, biggest payoff
Databricks (and consequently, spark) store a compressed manifest file of the files contained in the remote filesystem.
Then, you download one tiny file, decompress it, and all the operations are local instead of this shenanigans.
It seems pretty straightforward to make dataset uploads compute a manifest and upload it alongside their data.
This would make resolution time so fast that nobody would ever think about it again.
It also means you either need to have the uploader compute it _every time_, or have a hook that computes it.
## Smaller project, immediate payoff: Be diligent in avoiding deepcopy
Revise the _ls_tree method to avoid deepcopy:
```
def _ls_tree(
    self,
    path: str,
    recursive: bool = False,
    refresh: bool = False,
    revision: Optional[str] = None,
    expand_info: bool = True,
):
    ..... omitted .....
    for path_info in tree:
        if isinstance(path_info, RepoFile):
            cache_path_info = {
                "name": root_path + "/" + path_info.path,
                "size": path_info.size,
                "type": "file",
                "blob_id": path_info.blob_id,
                "lfs": path_info.lfs,
                "last_commit": path_info.last_commit,
                "security": path_info.security,
            }
        else:
            cache_path_info = {
                "name": root_path + "/" + path_info.path,
                "size": 0,
                "type": "directory",
                "tree_id": path_info.tree_id,
                "last_commit": path_info.last_commit,
            }
        parent_path = self._parent(cache_path_info["name"])
        self.dircache.setdefault(parent_path, []).append(cache_path_info)
        out.append(cache_path_info)
    return copy.deepcopy(out)  # copy to not let users modify the dircache
```
Observe this deepcopy at the end. It is making a copy of a very simple data structure. We do not need to copy. We can simply generate the data structure twice instead. It will be much faster.
```
def _ls_tree(
    self,
    path: str,
    recursive: bool = False,
    refresh: bool = False,
    revision: Optional[str] = None,
    expand_info: bool = True,
):
    ..... omitted .....
    def make_cache_path_info(path_info):
        if isinstance(path_info, RepoFile):
            return {
                "name": root_path + "/" + path_info.path,
                "size": path_info.size,
                "type": "file",
                "blob_id": path_info.blob_id,
                "lfs": path_info.lfs,
                "last_commit": path_info.last_commit,
                "security": path_info.security,
            }
        else:
            return {
                "name": root_path + "/" + path_info.path,
                "size": 0,
                "type": "directory",
                "tree_id": path_info.tree_id,
                "last_commit": path_info.last_commit,
            }

    for path_info in tree:
        cache_path_info = make_cache_path_info(path_info)
        out_cache_path_info = make_cache_path_info(path_info)  # fresh copy so users can't mutate the dircache
        parent_path = self._parent(cache_path_info["name"])
        self.dircache.setdefault(parent_path, []).append(cache_path_info)
        out.append(out_cache_path_info)
    return out
```
Note there is no longer a deepcopy in this method. We have replaced it with generating the output twice. This is substantially faster. For me, the entire resolution went from 1100s to 360s.
## Medium project, medium payoff
After the above change, we have this profile:

Figure 2: x-axis is 355 seconds. Note that globbing and _ls_tree deep copy is gone. No surprise there. It's much faster now, but we still spend ~187seconds in get_fs_token_paths.
Well get_fs_token_paths is part of fsspec. We don't need to fix that because we can trust their developers to write high performance code. Probably the caller has misconfigured something. Let's take a look at the storage_options being provided to the filesystem that is constructed during this call.
Ah yes, streaming_download_manager::_prepare_single_hop_path_and_storage_options. We know streaming download manager is not compatible with async right now, but we really need this specific part of the code to be async. We're spending so much time checking isDir on the remote filesystem, it's a huge waste.
We can easily make the call 20-30x faster by using async, removing this performance bottleneck almost entirely (and reducing the total time of this part of the code to <30s). There is no reason for the isDir calls to block when streaming.
I'm not going to mess w/ this one myself; I didn't write the streaming impl, and I don't know how it works, but I know the isDir check can be async.
### Steps to reproduce the bug
```
with cProfile.Profile() as pr:
    pr.enable()
    # Begin Data
    if not os.path.exists(data_cache_dir):
        os.makedirs(data_cache_dir, exist_ok=True)
    training_dataset = load_dataset(training_dataset_name, split=training_split, cache_dir=data_cache_dir, streaming=True).take(training_slice)
    eval_dataset = load_dataset(eval_dataset_name, split=eval_split, cache_dir=data_cache_dir, streaming=True).take(eval_slice)
    # End Data
    pr.disable()
    pr.create_stats()
if not os.path.exists(profiling_path):
    os.makedirs(profiling_path, exist_ok=True)
pr.dump_stats(os.path.join(profiling_path, "cprofile.prof"))
```
run this code for "cerebras/SlimPajama-627B" and whatever other params
### Expected behavior
Something better.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | 1medium
|
Title: [CI] Update codecov configuration file
Body: I've closed #990 but forget to create an issue about the codecov configuration issue.
Apparently, we should have a `coverage.yml` file for the Codecov app/GitHub Action, but I'm not sure how this will interact with local coverage reports, which seem to use `.coveragerc`. This requires a little investigation.
The goal here is to have:
- a single config file (local reports and CI)
- an up-to-date configuration file (as recommended in codecov docs):
- as a consequence, we can more easily customize the codecov config (if we discuss we need to) | 1medium
|
Title: Clarify usage of yarn vs npm in contributing.md
Body: Right now, it's not clear in `contributing.md` whether yarn or npm should be used. Since `yarn` was chosen to be the package manager, we should more clearly indicate that in contributing. | 0easy
|
Title: Integrate the ITKWidgets
Body: ## 🚀 Feature
Integrate the itkwidgets to view 3D images.
### Motivation
3D visualization for medical images besides the slicer.
| 1medium
|
Title: How to access an Iframe from an external source without uncaught DOMexception?
Body: I own another website and want to embed HTML from it as iframes. I want to access some properties of the actual element (such as scroll height) to adjust the iframe in Dash.
But I get an `Uncaught DOMException: Blocked a frame with origin "http://localhost:8050" from accessing a cross-origin frame.`, which is to be expected. In Flask there's a way to whitelist other sites; is there a way to do this in Dash?
thank you! | 1medium
|
Title: management subcommand `manage_dkim_keys` breaks if it fails
Body: # Impacted versions
* Modoboa: 2.0.3
# Steps to reproduce
# Current behavior
Try this `$ python manage.py modo manage_dkim_keys` from an account that lacks write permissions for `dkim_keys_storage_dir`. It crashes, but it adds a `dkim_private_key_path` to the `admin_domain` table all the same, so that on subsequent runs, this command does nothing because the [query for missing values](https://github.com/modoboa/modoboa/blob/572e32f868c24c2f08c0d387620c58ef42ebb714/modoboa/admin/management/commands/subcommands/_manage_dkim_keys.py#L48) does not return this domain any more.
So on fixing the write permissions, or running it from a privileged account, nothing happens unless you manually edit the database and insert an empty string in the domain's `dkim_private_key_path`, after which a new `.pem` file is created as expected.
# Expected behavior
If for any reason the `.pem` file cannot be created, then the associated `dkim_private_key_path` field in the `admin_domain` table should be an empty string, not an invalid path (pointing to a file that does not exist). Data integrity, basically.
| 1medium
|
Title: [RFC] Support multidoc yaml files
Body: **Is your feature request related to a problem? Please describe.**
Sometimes it can be difficult or impossible to pass multiple files with config fragments. YAML supports multiple documents in one file, and `safe_load_all` from the PyYAML API loads them accordingly. It is a standard YAML feature; it would be nice to support it and make it usable in cases where passing one (composite) file is easier than passing many.
**Describe the solution you'd like**
Support `safe_load_all` as yaml loader.
**Describe alternatives you've considered**
Passing multiple files does the job; however, it isn't always straightforward.
**Additional context**
I have prepared a patch
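A minimal illustration of the standard feature in question (plain PyYAML, not dynaconf's API; the config keys are made up):
```python
import yaml

# Two YAML documents in one file, separated by `---`.
multidoc = """
database:
  host: localhost
---
logging:
  level: INFO
"""

fragments = list(yaml.safe_load_all(multidoc))
# -> [{'database': {'host': 'localhost'}}, {'logging': {'level': 'INFO'}}]
```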
| 1medium
|
Title: I suspect it's a fake interface
Body: I suspect it's a fake interface
 | 1medium
|
Title: I suspect Whole Word Masking (wwm) is really just a more efficient approach; wouldn't character-level masking, given more pre-training time and randomness, reach the same final result?
Body: @ymcui Thanks! | 3misc
|
Title: [Feat] Export grammar of plots
Body: It would be really great to be able to export and import a description of the graph for later reuse, like it is done in Vega.
This also relates to [[Feat] Force data types in code #70 ](https://github.com/Kanaries/pygwalker/issues/70).
Besides not having to setup predefined data types every time this would enable users of PyGWalker to export predefined setup of the plots in its entirety. | 2hard
|
Title: Running process killed abruptly
Body: Hi everyone, I was running PyOD, but halfway through, the process was apparently killed; it shows only the word "Killed". Does anyone know why this happened? Thanks a lot. | 3misc
|
Title: Not able to detect face
Body: * face_recognition version: Latest
* Python version: 3.7
* Operating System: Mac
### Description
I am not able to detect a face in the below picture can someone tell me why..and it's not only about this picture it's with many of them clicked with my ONE PLUS 6
 Can someone try and let me know why
| 1medium
|
Title: Make ResourceProtector extensible
Body: Only BearerToken is supported currently; the protector needs to be made extensible so that we can add more token types later. | 2hard
|
Title: Discrepancy between specified `5min` frequency and DeepAR model configuration
Body: # Subject: Discrepancy between specified 'freq' and DeepAR config 'freq'
Hi everyone,
I'm encountering an issue where I specified `freq='5min'` during the training of a DeepAR model using AutoGluon, but the final model configuration shows `freq='D'`. I'm trying to understand why this discrepancy exists and if it could be impacting my model.
## Details from Training Logs
Here are some key points from the training logs:
```plaintext
=================== System Info ===================
AutoGluon Version: 1.2
Python Version: 3.12.3
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
CPU Count: 16
GPU Count: 1
Memory Avail: 27.98 GB / 31.29 GB (89.4%)
Disk Space Avail: 437.20 GB / 953.26 GB (45.9%)
===================================================
Setting presets to: best_quality
Fitting with arguments:
{'enable_ensemble': True,
'eval_metric': MASE,
'excluded_model_types': ['RecursiveTabular', 'DirectTabular', 'TiDE'],
'freq': '5min',
'hyperparameters': 'default',
'known_covariates_names': ['minute_sin',
'minute_cos',
'hour_sin',
'hour_cos',
'day_of_week_sin',
'day_of_week_cos',
'is_weekend'],
'num_val_windows': 5,
'prediction_length': 30,
'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
'random_seed': 123,
'refit_every_n_windows': 2,
'refit_full': False,
'skip_model_selection': False,
'target': 'target',
'time_limit': 20000,
'verbosity': 2}
Data with frequency 'None' has been resampled to frequency '5min'.
The provided data has a large number of rows and several time series.
The data contains a target column and the following known covariates:
categorical: ['is_weekend']
continuous (float): ['minute_sin', 'minute_cos', 'hour_sin', 'hour_cos', 'day_of_week_sin', 'day_of_week_cos']
To learn how to fix incorrectly inferred types, please see documentation for TimeSeriesPredictor.fit
AutoGluon will gauge predictive performance using evaluation metric: 'MASE'
This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value.
===================================================
```
And here's the relevant part of the saved DeepAR model's config:
```json
{
"__kind__": "instance",
"args": [],
"class": "gluonts.torch.model.predictor.PyTorchPredictor",
"kwargs": {
"batch_size": 64,
"device": "auto",
"forecast_generator": {
"__kind__": "instance",
"args": [],
"class": "gluonts.model.forecast_generator.SampleForecastGenerator",
"kwargs": {}
},
"input_names": [
"feat_static_cat",
"feat_static_real",
"past_time_feat",
"past_target",
"past_observed_values",
"future_time_feat"
],
"input_transform": {
"__kind__": "instance",
"class": "gluonts.transform._base.Chain",
"kwargs": {
"transformations": [
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.field.RemoveFields",
"kwargs": {
"field_names": [
"feat_static_real"
]
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.field.SetField",
"kwargs": {
"output_field": "feat_static_cat",
"value": [
0
]
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.field.SetField",
"kwargs": {
"output_field": "feat_static_real",
"value": [
0.0
]
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.AsNumpyArray",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "builtins.int"
},
"expected_ndim": 1,
"field": "feat_static_cat"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.AsNumpyArray",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"expected_ndim": 1,
"field": "feat_static_real"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.AsNumpyArray",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"expected_ndim": 1,
"field": "target"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.feature.AddObservedValuesIndicator",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"imputation_method": {
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.feature.DummyValueImputation",
"kwargs": {
"dummy_value": 0.0
}
},
"output_field": "observed_values",
"target_field": "target"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.feature.AddTimeFeatures",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"output_field": "time_feat",
"pred_length": 30,
"start_field": "start",
"target_field": "target",
"time_features": [
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.minute_of_hour"
},
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.hour_of_day"
},
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.day_of_week"
},
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.day_of_month"
},
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.day_of_year"
}
]
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.feature.AddAgeFeature",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"log_scale": true,
"output_field": "feat_dynamic_age",
"pred_length": 30,
"target_field": "target"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.VstackFeatures",
"kwargs": {
"drop_inputs": true,
"h_stack": false,
"input_fields": [
"time_feat",
"feat_dynamic_age",
"feat_dynamic_real"
],
"output_field": "time_feat"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.AsNumpyArray",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"expected_ndim": 2,
"field": "time_feat"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.split.InstanceSplitter",
"kwargs": {
"dummy_value": 0.0,
"forecast_start_field": "forecast_start",
"future_length": 30,
"instance_sampler": {
"__kind__": "instance",
"class": "gluonts.transform.sampler.PredictionSplitSampler",
"kwargs": {
"allow_empty_interval": false,
"axis": -1,
"min_future": 0,
"min_past": 0
}
},
"is_pad_field": "is_pad",
"lead_time": 0,
"output_NTC": true,
"past_length": 1212,
"start_field": "start",
"target_field": "target",
"time_series_fields": [
"time_feat",
"observed_values"
]
}
}
]
}
},
"lead_time": 0,
"output_transform": null,
"prediction_length": 30,
"prediction_net": {
"__kind__": "instance",
"args": [],
"class": "gluonts.torch.model.deepar.lightning_module.DeepARLightningModule",
"kwargs": {
"lr": 0.001,
"model_kwargs": {
"cardinality": [
1
],
"context_length": 60,
"default_scale": null,
"distr_output": {
"__kind__": "instance",
"args": [],
"class": "gluonts.torch.distributions.studentT.StudentTOutput",
"kwargs": {
"beta": 0.0
}
},
"dropout_rate": 0.1,
"embedding_dimension": null,
"freq": "D",
"hidden_size": 40,
"lags_seq": [
1,
2,
3,
4,
5,
6,
7,
10,
11,
12,
13,
14,
22,
23,
24,
25,
26,
34,
35,
36,
37,
38,
287,
288,
289,
575,
576,
577,
863,
864,
865,
1151,
1152,
1153
],
"nonnegative_pred_samples": false,
"num_feat_dynamic_real": 14,
"num_feat_static_cat": 1,
"num_feat_static_real": 1,
"num_layers": 2,
"num_parallel_samples": 100,
"prediction_length": 30,
"scaling": true
},
"patience": 10,
"weight_decay": 1e-08
}
}
}
}
```
As you can see, the training arguments clearly state freq='5min', and the logs confirm that the data was resampled to '5min'. However, the freq within the model_kwargs of the trained DeepAR model is 'D'.
Could someone shed some light on why this might be happening? Is this expected behavior, or is there something I might be missing in how AutoGluon handles frequency with DeepAR? Could this mismatch potentially affect the model's accuracy or the interpretation of the results?
Any insights would be greatly appreciated. Thanks! | 1medium
|
Title: Face recognition
Body: | 2hard
|
Title: How to cleanly finish a Process with split-n-join tasks?
Body: Say I have a Process which has two optional View tasks to the finish:
```
--> Split --> optional_view_1 ---> Join ----> finish
      |                             ^
      |                             |
      +------> optional_view_2 -----+
```
Let's say both tasks are assigned, but a human logs in and completes optional_view_1. I want to ensure the Process finishes cleanly.
AFAICS, the Process gets into a "stuck" state due to optional_view_2 at the Join. Is that correct? I tried cancelling optional_view_2, but that had no effect. I'm looking for a programmatic approach if that makes any difference. I'm aware of #93, and so I think my question boils down to:
- how to finish the process cleanly (i.e. not cancel it) and without races
- from where (e.g. from inside the Join or a Handler after each View to cancel the other one?)
What is the correct procedure in this case? | 1medium
|
Title: Add zebra striping to rows in tables
Body: ### NetBox version
4.1.3
### Feature type
Change to existing functionality
### Triage priority
N/A
### Proposed functionality
Add zebra striping to rows in tables.
Add 5-10% difference in background colors between rows.
### Use case
In wide tables with a large number of columns, you can position yourself faster and read the desired line across the entire width.
### Database changes
_No response_
### External dependencies
_No response_ | 1medium
|
Title: UnicodeDecodeError: 'gbk' codec can't decode byte 0x80
Body: ### Describe the bug
My code:
```
interpreter.chat("Please summarize this article:https://about.fb.com/news/2023/08/code-llama-ai-for-coding/")
interpreter.chat()
```
It reports an error:
```
Exception in thread Thread-117 (handle_stream_output):
Traceback (most recent call last):
  File "C:\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "C:\Python311\Lib\threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\john\AppData\Roaming\Python\Python311\site-packages\interpreter\code_interpreters\subprocess_code_interpreter.py", line 121, in handle_stream_output
    for line in iter(stream.readline, ""):
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 870: illegal multibyte sequence
```

### Reproduce
```
interpreter.chat("Please summarize this article:https://about.fb.com/news/2023/08/code-llama-ai-for-coding/")
interpreter.chat()
```
### Expected behavior
..
### Screenshots
_No response_
### Open Interpreter version
0.1.11
### Python version
Python 3.11.4
### Operating System name and version
window10
### Additional context
_No response_ | 1medium
|
Title: reformat to fit awesome list format?
Body: See https://github.com/sindresorhus/awesome if you don't know what I'm talking about; this project seems like it would fit quite well if reformatted accordingly. | 3misc
|
Title: Fix type warnings in `playground/commands/run.py`
Body: * Missing `id` annotations for tables
* Use new join syntax
* Fix ipython import | 1medium
|
Title: How to train a paired dataset
Body: Hello, author.
My dataset is paired, and I want to train on it in pairs. I tried passing the parameter "--dataset_mode aligned", but I got the error message "AssertionError: ./datasets/underwater\train is not a valid directory". The layout of my dataset is shown in the following figure. Could you please tell me how to arrange the paired data for training?

| 1medium
|
Title: Is SQLModel naturally async aware ?
Body: ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
No code available
```
### Description
Can I use SQLModel inside my FastAPI async endpoint implementations without problems ? Apparently it works. But I'm not sure if it's destroying the asynchronicity.
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.5
### Additional Context
_No response_ | 1medium
|
Title: [Feature Request] Merge profile sessions from multiple runs to get statistical numbers like Average/p95/mean
Body: Hi,
Firstly I want to thank you for having this tool and I have benefited a lot from it to optimize our codebase!
A feature that has been on my mind for some time: **since, in real cases, different modules in the code may have different time variances, getting accurate profiling results requires running multiple times and looking at average/mean/p95 numbers for further optimization.**
Currently I don't see an option to do that in pyinstrument, and it would be great if it could support merging multiple profile sessions from multiple runs to produce statistics like average/p95/mean.
Thank you! | 1medium
|
Title: Call basic methods on `hparams` in a query
Body: ## 🚀 Feature : Call basic methods on `hparams` in a query ?
Having tracked runs with aim I can query runs with a term like the following:
```
( run.experiment == "AE_noisy" or run.experiment == "AE_smoothed" )
```
This can be shortened to: `run.experiment.startswith("AE_")`
The same option for tracked hyperparameters could be really useful. So instead of having to run:
```
( run.hparams.optimizer == "ADAM" or run.hparams.optimizer == "RADAM" )
```
I'd wish to run
```
run.hparams.optimizer.endswith("ADAM")
```
Currently this results in the error `query failed, 'Undefined' object is not callable`
### Motivation
This would allow more flexible and shorter queries (especially when there are a lot of suffixes). Happy to hear any feedback on whether this could be possible or is out of scope. Cheers!
| 1medium
|
Title: ImageField inside Maybe declaration no longer working since 3.1.0
Body: #### Description
In a factory that I defined for companies, I'm randomly generating a logo using a `Maybe` declaration. This used to work fine up to and including 3.0.1, but as of 3.1.0 it has different behaviour.
#### To Reproduce
##### Model / Factory code
Leaving out the other fields as they cannot be relevant to the problem.
```python
from factory import Faker, Maybe
from factory.django import DjangoModelFactory, ImageField
from ..models import Company
class CompanyFactory(DjangoModelFactory):
    logo_add = Faker("pybool")
    logo = Maybe(
        "logo_add",
        yes_declaration=ImageField(width=500, height=200, color=Faker("color")),
        no_declaration=None,
    )

    class Meta:
        model = Company
        exclude = ("logo_add",)
```
##### The issue
Up to and including 3.0.1 the behaviour - which is the desired behaviour as far as I'm concerned - was that I could generate companies that either had a logo or did not (about 50/50 since I'm just using "pybool" for the decider field). If they had a logo, the logo would be 500x200 with a random color.
Now that I use 3.1.0, the randomness of about half the companies having logos still works, but _all_ generated logos are now 100x100 and blue, which are simply defaults (although the [documentation](https://factoryboy.readthedocs.io/en/latest/orms.html?highlight=imagefield#factory.django.ImageField) says that "green" is actually the default), which is definitely something to fix :)
Perhaps I was misusing/misunderstanding this feature all along, but then I'd still like to know how to get the desired behaviour described.
| 1medium
|
Title: Enhancing TikTok-Api Integration for TikTok-to-YouTube Automation Projects
Body: **Title:** Enhancing TikTok-Api Integration for TikTok-to-YouTube Automation Projects
**Issue:**
We are developing a project, [tiktok-to-youtube-automation](https://github.com/scottsdevelopment/tiktok-to-youtube-automation), aimed at automating the process of downloading TikTok videos and uploading them to YouTube. In our pursuit of efficient and reliable solutions, we have explored various TikTok API wrappers, including [TikTok-Api](https://github.com/davidteather/TikTok-Api).
**Context:**
During our development, we encountered challenges with existing tools. For instance, the [tiktok-scraper](https://github.com/drawrowfly/tiktok-scraper) project has been discontinued, as noted in [this issue](https://github.com/drawrowfly/tiktok-scraper/issues/834). This has led us to seek alternative solutions for integrating TikTok functionalities into our automation workflow.
**Proposal:**
We are considering integrating [TikTok-Api](https://github.com/davidteather/TikTok-Api) into our project to handle TikTok video retrieval. Before proceeding, we would like to understand the current capabilities and limitations of TikTok-Api, especially concerning:
- **Video Downloading:** The ability to programmatically download TikTok videos without watermarks.
- **Rate Limiting:** Handling TikTok's rate limits to ensure stable operation.
- **Maintenance and Support:** The project's activity level and responsiveness to issues or updates.
**Request for Collaboration:**
We invite the maintainers and community of TikTok-Api to provide insights into these aspects. Additionally, we welcome suggestions for best practices when integrating TikTok-Api into automation projects similar to ours.
**Broader Community Engagement:**
We believe that a robust solution for TikTok to YouTube automation can benefit a wide range of users. By collaborating and sharing knowledge across projects, we can develop more resilient and feature-rich tools for the community.
**References:**
- [tiktok-to-youtube-automation Project](https://github.com/scottsdevelopment/tiktok-to-youtube-automation)
- [Discontinuation of tiktok-scraper](https://github.com/drawrowfly/tiktok-scraper/issues/834)
- [TikTok-Api Repository](https://github.com/davidteather/TikTok-Api)
We look forward to the possibility of integrating TikTok-Api into our project and contributing to the broader community's efforts in this domain. | 1medium
|
Title: Fitting autokeras with the EarlyStopping baseline parameter does not work
Body: ### Bug Description
When the EarlyStopping baseline parameter is triggered, autokeras crashes with the following error: TypeError: object of type 'NoneType' has no len()
### Bug Reproduction
Here is the colab:
https://colab.research.google.com/drive/1oqxaIaXb51qGaFSRBJtUL5yDZ77Udjx1?usp=sharing
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python:
- autokeras: 1.0.12
- keras-tuner: master
- scikit-learn:
- numpy:
- pandas:
- tensorflow:
| 1medium
|
Title: Internal: Blas GEMM launch failed when running classifier for URLs
Body: System information:
- os: Windows 11
- gpu: Nvidia GeForce RTX 3080 TI (12GB)
- Tensor flow: tensorflow-gpu v 1.14.0
- cuda: v 10.0 (but i have other version installed: 12.4 and 9.0 which dont have the required .dll file, all of them are in my PATH)
- python: 3.7 (for the purposes of protobuf)
The error:
```
2024-04-22 15:35:14.641625: E tensorflow/stream_executor/cuda/cuda_blas.cc:428] failed to run cuBLAS routine: CUBLAS_STATUS_EXECUTION_FAILED
ERROR:tensorflow:Error recorded from training_loop: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(4096, 2), b.shape=(2, 768), m=4096, n=768, k=2
[[node bert/embeddings/MatMul (defined at D:\Faks\UM-Mag 23-25\Drugi semester\JT\google-bert\modeling.py:487) ]]
[[loss/Mean/_4861]]
(1) Internal: Blas GEMM launch failed : a.shape=(4096, 2), b.shape=(2, 768), m=4096, n=768, k=2
[[node bert/embeddings/MatMul (defined at D:\Faks\UM-Mag 23-25\Drugi semester\JT\google-bert\modeling.py:487) ]]
0 successful operations.
0 derived errors ignored.
```
What I've tried:
I checked `nvidia-smi.exe` to see if something else was running on the GPU while training, but got the following result:
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 552.22 Driver Version: 552.22 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3080 Ti WDDM | 00000000:0A:00.0 Off | N/A |
| 0% 36C P8 24W / 350W | 1598MiB / 12288MiB | 3% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 9072 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 10652 C+G ...al\Discord\app-1.0.9042\Discord.exe N/A |
| 0 N/A N/A 10688 C+G ...ekyb3d8bbwe\PhoneExperienceHost.exe N/A |
| 0 N/A N/A 10820 C+G ...nt.CBS_cw5n1h2txyewy\SearchHost.exe N/A |
| 0 N/A N/A 10844 C+G ...2txyewy\StartMenuExperienceHost.exe N/A |
| 0 N/A N/A 14572 C+G ...\cef\cef.win7x64\steamwebhelper.exe N/A |
| 0 N/A N/A 14744 C+G ...GeForce Experience\NVIDIA Share.exe N/A |
| 0 N/A N/A 14824 C+G ...1.0_x64__8wekyb3d8bbwe\Video.UI.exe N/A |
| 0 N/A N/A 15072 C+G ...t.LockApp_cw5n1h2txyewy\LockApp.exe N/A |
| 0 N/A N/A 17164 C+G ...CBS_cw5n1h2txyewy\TextInputHost.exe N/A |
| 0 N/A N/A 18100 C+G ...les\Microsoft OneDrive\OneDrive.exe N/A |
| 0 N/A N/A 18936 C+G ...5n1h2txyewy\ShellExperienceHost.exe N/A |
| 0 N/A N/A 19308 C+G ...air\Corsair iCUE5 Software\iCUE.exe N/A |
| 0 N/A N/A 19848 C+G ...crosoft\Edge\Application\msedge.exe N/A |
| 0 N/A N/A 23840 C+G ..._x64__kzf8qxf38zg5c\Skype\Skype.exe N/A |
| 0 N/A N/A 24040 C+G ...lf\0.248.120.19\OverwolfBrowser.exe N/A |
| 0 N/A N/A 25276 C+G ...on\123.0.2420.97\msedgewebview2.exe N/A |
| 0 N/A N/A 25604 C+G ...ejd91yc\AdobeNotificationClient.exe N/A |
| 0 N/A N/A 25636 C+G ...509_x64__8wekyb3d8bbwe\ms-teams.exe N/A |
| 0 N/A N/A 25920 C+G ...ktop\EA Desktop\EACefSubProcess.exe N/A |
| 0 N/A N/A 25972 C+G ...\GOG Galaxy\GalaxyClient Helper.exe N/A |
| 0 N/A N/A 26180 C+G ...EA Desktop\EA Desktop\EADesktop.exe N/A |
| 0 N/A N/A 28536 C+G ...cks-services\BlueStacksServices.exe N/A |
| 0 N/A N/A 29332 C+G ...aam7r\AcrobatNotificationClient.exe N/A |
| 0 N/A N/A 31120 C+G ...on\123.0.2420.97\msedgewebview2.exe N/A |
| 0 N/A N/A 31468 C+G ..._x64__kzf8qxf38zg5c\Skype\Skype.exe N/A |
| 0 N/A N/A 32340 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
| 0 N/A N/A 33104 C+G ...on\HEX\Creative Cloud UI Helper.exe N/A |
| 0 N/A N/A 33208 C+G ...on\123.0.2420.97\msedgewebview2.exe N/A |
| 0 N/A N/A 35396 C+G ...wekyb3d8bbwe\XboxGameBarWidgets.exe N/A |
| 0 N/A N/A 39264 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
+-----------------------------------------------------------------------------------------+
```
Then I tried googling for other similar issues and found [this](https://stackoverflow.com/questions/43990046/tensorflow-blas-gemm-launch-failed), and when following [this answer's](https://stackoverflow.com/a/65523597) instructions, adding the memory-growth lines to `modeling.py` after the imports, I received the same error.
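For reference, these are the kind of memory-growth lines meant above (a sketch based on the linked answer; the exact snippet there may differ):
```python
import tensorflow as tf

# Ask TF to allocate GPU memory on demand instead of grabbing it all upfront.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```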
I didn't find any other possible solutions and I'm unsure as to what I'm doing wrong. Did I add the memory-growth lines in the wrong file, or did I go about solving the issue completely wrong? Any help is appreciated.
I am running the 3.7 kernel in a virtual environment and the data I am feeding the model is properly formatted. I am using the BERT base uncased model downloaded from this repository. | 2hard
|
Title: model_store: FileExistsError: [Errno 17] File exists: '/root/.mxnet/models'
Body: Upon running the maskrcnn example, I hit the above error.
Following stack trace:
```
[1,22]<stdout>:Downloading /root/.mxnet/models/resnet50_v1b-0ecdba34.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet50_v1b-0ecdba34.zip...
[1,37]<stdout>:Downloading /root/.mxnet/models/resnet50_v1b-0ecdba34.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet50_v1b-0ecdba34.zip...
[1,38]<stdout>:Traceback (most recent call last):
[1,38]<stdout>: File "train_mask_rcnn.py", line 698, in <module>
[1,38]<stdout>: per_device_batch_size=args.batch_size // num_gpus, **kwargs)
[1,38]<stdout>: File "/usr/local/lib/python3.7/site-packages/gluoncv/model_zoo/model_zoo.py", line 403, in get_model
[1,38]<stdout>: net = _models[name](**kwargs)
[1,38]<stdout>: File "/usr/local/lib/python3.7/site-packages/gluoncv/model_zoo/rcnn/mask_rcnn/predefined_models.py", line 97, in mask_rcnn_fpn_resnet50_v1b_coco
[1,38]<stdout>: base_network = resnet50_v1b(pretrained=pretrained_base, dilated=False, use_global_stats=True)
[1,38]<stdout>: File "/usr/local/lib/python3.7/site-packages/gluoncv/model_zoo/resnetv1b.py", line 367, in resnet50_v1b
[1,38]<stdout>: tag=pretrained, root=root), ctx=ctx)
[1,38]<stdout>: File "/usr/local/lib/python3.7/site-packages/gluoncv/model_zoo/model_store.py", line 274, in get_model_file
[1,38]<stdout>: os.makedirs(root)
[1,38]<stdout>: File "/usr/local/lib/python3.7/os.py", line 223, in makedirs
[1,38]<stdout>: mkdir(name, mode)
[1,38]<stdout>:FileExistsError: [Errno 17] File exists: '/root/.mxnet/models'
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 38 in communicator MPI COMMUNICATOR 5 DUP FROM 0
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
```
Looks like multiple ranks are trying to create the folder simultaneously.
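A sketch of a possible fix, making the directory creation race-safe in `model_store.py`:
```python
import os

# exist_ok=True avoids the race where several MPI ranks create the
# directory at the same time between the existence check and mkdir.
os.makedirs(root, exist_ok=True)
```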
Note: This error is hit intermittently [not reproducible 100% of the time] | 1medium
|
Title: Trax Tutorials
Body: Hey,
I see that there are no tutorial notebooks for **[Trax](https://github.com/google/trax)** implementations in this repository yet. Trax is an _end-to-end_ library for deep learning that focuses on clear code and speed. It is actively used and maintained in the **Google Brain team**.
I would like to add such tutorial notebooks in Trax
| 0easy
|
Title: API Endpoint /represent fails with 400 for array of img URLs
Body: ### Description
Using an array of `img` items with the API endpoint `represent` results in the following error:
```
{
"error": "Exception while representing: img must be numpy array or str but it is <class 'list'>"
}
```
#### Steps to Reproduce
1. Pull repo
2. Build the image
`docker build -t deepface -f Dockerfile .`
3. Run the image
`docker run deepface`
4. Attempt to use the represent endpoint with an array of images i.e.:
```
{
"img": ["imgUrl1", "imgUrl2"]
}
```
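For now I work around it client-side by sending one request per image (a sketch; the base URL is a placeholder for wherever the container is exposed):
```python
import requests

BASE_URL = "http://localhost:5005"  # placeholder for the running container
img_urls = ["imgUrl1", "imgUrl2"]

# Workaround: one /represent call per image instead of a list in one call
results = [
    requests.post(f"{BASE_URL}/represent", json={"img": url}).json()
    for url in img_urls
]
```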
### Expected Behavior
The API is able to parse the list into a numpy array and complete the request. | 1medium
|
Title: How to login with cookie?
Body: Hi, I created a login form page and I want to log in with a cookie. How do I do it? I tested it, but it doesn't work.
```
auth_backends = [
CookieAuthentication(secret=SECRET, lifetime_seconds=3600),
# JWTAuthentication(secret=SECRET, lifetime_seconds=3600),
]
fastapi_users = FastAPIUsers(
user_db, auth_backends, User, UserCreate, UserUpdate, UserDB, SECRET,
)
app.include_router(fastapi_users.router, prefix="/users", tags=["users"])
templates = Jinja2Templates(directory='templates')
...
@app.get("/")
async def read_root(user: User = Depends(fastapi_users.get_current_active_user)):
return {"Hello": f"{user.email}"}
@app.route("/login", methods=['GET'])
async def login(request):
return templates.TemplateResponse('login.html', {'request': request})
```
template
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Login</title>
</head>
<body>
<h1>Login</h1>
<form method="post" action="/users/login/cookie">
<input name="username" autocomplete="off">
<input name="password" autocomplete="off">
<button>submit</button>
</form>
</body>
</html>
```
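For comparison, a minimal sketch of the cookie round-trip I expected (using `requests`; the endpoint path is taken from the form action above):
```python
import requests

s = requests.Session()
# The cookie backend should set the auth cookie on this response
s.post(
    "http://127.0.0.1:8000/users/login/cookie",
    data={"username": "[email protected]", "password": "secret"},
)
# The session re-sends the cookie, so this should return the greeting
print(s.get("http://127.0.0.1:8000/").json())
```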
I access the login page, input the username and password, and submit; the response is null. Then I access the homepage "http://127.0.0.1:8000", but it still displays `{"detail":"Unauthorized"}` | 1medium
|
Title: Got an unexpected keyword argument 'filters'
Body: The `filters` argument shows up in GraphiQL, but I still get an error. It worked fine before upgrading to v3.


Related code:
<https://github.com/he0119/smart-home/commit/9922c788a2d79761bd6c100c3bd4b13c31cfb4d6#diff-af3602ede1befa32df28d961add5b48aaf07992465fb6d0f1dbb50e4a0568cdbR79-R85>
```python
@gql.django.type(models.Device, filters=DeviceFilter, order=DeviceOrder)
class Device(relay.Node):
name: auto
device_type: auto
location: auto
created_at: auto
edited_at: auto
is_online: auto
online_at: auto
offline_at: auto
token: auto
# FIXME: Device.autowatering_data() got an unexpected keyword argument 'filters'
@gql.django.connection(
gql.django.ListConnectionWithTotalCount[AutowateringData],
filters=AutowateringDataFilter,
order=AutowateringDataOrder,
)
def autowatering_data(self, info) -> Iterable[models.AutowateringData]:
return models.AutowateringData.objects.all()
``` | 1medium
|
Title: catplot with redundant hue assignment creates empty legend with title
Body: ```python
sns.catplot(tips, x="day", y="total_bill", hue="day", col="sex", row="smoker", kind="box", height=3)
```

Setting `legend=False` works around it, but with the default `legend='auto'` the legend should be disabled due to the redundancy. | 1medium
|
Title: Add scope_name field when doing bulk ipam.prefixes imports
Body: ### NetBox version
v4.2.2
### Feature type
Change to existing functionality
### Proposed functionality
Add `scope_name` in addition or as alternative field to `scope_id` when doing builk `ipam.prefix` (and others) imports.
### Use case
Before updating from v4.1 to v4.2 we noticed the breaking change about the removal of the `site` field which got replaced by a combination of `scope_type` and `scope_id`. I don't question the need for the multipe scope types, but to have a better migration when doing bulk imports it would be nice to have a `scope_name` field in addition to `scope_id`. Of course you can only have one field filled (id or name).
Sure this would cause an id-lookup when scope_name is used (like in v4.1) but it would help the transition (we would just have to add scope_type=dcim.site and rename the csv field site to scope_name and continue instead of using numeric ids during mass imports.
### Database changes
Not that I know of
### External dependencies
None | 1medium
|
Title: `xr.open_zarr` is 3x slower than `zarr.open`, even at scale
Body: ### What is your issue?
I'm doing some benchmarks on Xarray + Zarr vs. some other formats, and I get quite a surprising result — in a very simple array, xarray is adding a lot of overhead to reading a Zarr array.
Here's a quick script — super simple, just a single chunk. It's 800MB of data — so not some tiny array where reading a metadata json file or allocating an index is going to throw the results.
```python
import numpy as np
import zarr
import xarray as xr
import dask
print(zarr.__version__, xr.__version__, dask.__version__)
(
xr.DataArray(np.random.rand(10000, 10000), name="foo")
.to_dataset()
.chunk(None)
.to_zarr("test.zarr", mode="w")
)
%timeit xr.open_zarr("test.zarr").compute()
%timeit zarr.open("test.zarr")["foo"][:]
```
```
2.17.2 2024.5.1.dev37+gce196d56 2024.5.2
551 ms ± 15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
183 ms ± 2.93 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
So:
- 551ms for xarray
- 183ms for zarr
Having a quick look with `py-spy` suggests there might be some thread contention, but not sure how much is really contention vs. idle threads waiting.
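A variant worth timing (a sketch): `chunks=None` bypasses dask entirely, which would separate dask-graph overhead from xarray's own decoding cost:
```python
# chunks=None skips dask and loads through xarray's lazy backend arrays;
# .values forces the actual read.
%timeit xr.open_zarr("test.zarr", chunks=None)["foo"].values
```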
---
Making the array 10x bigger (with 10 chunks) reduces the relative difference, but it's still fairly large:
```
2.17.2 2024.5.1.dev37+gce196d56 2024.5.2
6.88 s ± 353 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
4.15 s ± 264 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
---
Any thoughts on what might be happening? Is the benchmark at least correct? | 1medium
|
Title: Add support for running DFS using multiprocessing
Body: - We support running DFS via multiprocessing (https://docs.python.org/3/library/multiprocessing.html) | 1medium
|
Title: Fillna does not work if fields_group is not None
Body: ## 🐛 Bug Description
<!-- A clear and concise description of what the bug is. -->
The Fillna processor does not work if fields_group is not None since assigning values to df.values changes nothing.
## To Reproduce
Use any model and specify fields_group for Fillna processor.
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
No nan after calling Fillna.
## Additional Notes
<!-- Add any other information about the problem here. -->
Same as the issue here: https://github.com/microsoft/qlib/issues/1307#issuecomment-1785284039.
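A sketch of the kind of fix I mean: write back through column indexing instead of `df.values` (`get_group_columns` is, if I read the code right, qlib's helper for resolving a fields group):
```python
# Writing back via df[cols] mutates the DataFrame; assigning into
# df.values does not, which is why fields_group currently has no effect.
cols = get_group_columns(df, self.fields_group)
df[cols] = df[cols].fillna(self.fill_value)
```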
| 1medium
|
Title: 'NoneType' object has no attribute 'description'
Body: Firstly, thanks for this great library.
Now, in:
https://github.com/axnsan12/drf-yasg/blob/2ef7cfbfe369a55e8b68e574bf20fe32d40cac38/src/drf_yasg/inspectors/query.py#L49
If we can't find out which type it is, we default to string.
However, later, we try to ask the schema description:
https://github.com/axnsan12/drf-yasg/blob/2ef7cfbfe369a55e8b68e574bf20fe32d40cac38/src/drf_yasg/inspectors/query.py#L51
My field that is getting passed in, is from https://github.com/django-money/django-money, has no schema:
``` python
ipdb> field
Field(name=u'currency', required=False, location='query', schema=None, description=u'', type=None, example=None)
ipdb> type(field.schema)
<type 'NoneType'>
ipdb> field.schema.description
*** AttributeError: 'NoneType' object has no attribute 'description'
```
Not sure what to do exactly, but it seems that we should make a `field.schema is None` check. A minimal sketch of such a guard (a hypothetical patch spot in the query inspector):
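```python
# Hypothetical guard: fall back to the field's own description when no
# schema object is attached (as with djmoney's currency field above).
description = field.schema.description if field.schema is not None else field.description
```
| 1medium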
|
Title: relationship_proxy documentation doesn't work with `_: dataclasses.KW_ONLY`
Body: ### Describe the bug
Consider the example [here](https://docs.sqlalchemy.org/en/20/orm/extensions/associationproxy.html#simplifying-association-objects):
If you add `MappedAsDataclass` and `_: dataclasses.KW_ONLY` to `UserKeywordAssociation` the example stops working because the implicit creator passes the argument as positional.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.31
### DBAPI (i.e. the database driver)
asyncpg
### Database Vendor and Major Version
postgres 16
### Python Version
3.12
### Operating system
Linux
### To Reproduce
```python
from __future__ import annotations
import dataclasses
from typing import List
from typing import Optional
from sqlalchemy import ForeignKey
from sqlalchemy import String
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.associationproxy import AssociationProxy
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import MappedAsDataclass
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
class Base(DeclarativeBase, MappedAsDataclass):
pass
class User(Base):
__tablename__ = "user"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(String(64))
user_keyword_associations: Mapped[List[UserKeywordAssociation]] = relationship(
back_populates="user",
cascade="all, delete-orphan",
)
# association proxy of "user_keyword_associations" collection
# to "keyword" attribute
keywords: AssociationProxy[List[Keyword]] = association_proxy(
"user_keyword_associations",
"keyword",
)
def __init__(self, name: str):
self.name = name
class UserKeywordAssociation(Base):
__tablename__ = "user_keyword"
_: dataclasses.KW_ONLY
user_id: Mapped[int] = mapped_column(ForeignKey("user.id"), primary_key=True)
keyword_id: Mapped[int] = mapped_column(ForeignKey("keyword.id"), primary_key=True)
special_key: Mapped[Optional[str]] = mapped_column(String(50))
user: Mapped[User] = relationship(back_populates="user_keyword_associations")
keyword: Mapped[Keyword] = relationship()
class Keyword(Base):
__tablename__ = "keyword"
id: Mapped[int] = mapped_column(primary_key=True)
keyword: Mapped[str] = mapped_column("keyword", String(64))
def __init__(self, keyword: str):
self.keyword = keyword
def __repr__(self) -> str:
return f"Keyword({self.keyword!r})"
user = User("log")
for kw in (Keyword("new_from_blammo"), Keyword("its_big")):
user.keywords.append(kw) # boom
print(user.keywords)
```
### Error
```
/Users/tamird/Library/Caches/pypoetry/virtualenvs/common-hf-Ms37h-py3.12/lib/python3.12/site-packages/sqlalchemy/ext/associationproxy.py:1505: in append
item = self._create(value)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _create(self, value: _T) -> Any:
> return self.creator(value)
E TypeError: __init__() takes 1 positional argument but 2 were given
```
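A workaround that seems to fix it for me (a sketch): pass an explicit `creator` so the proxy constructs the association object with a keyword argument:
```python
# With KW_ONLY the implicit creator passes the value positionally, so
# supply a keyword-based creator instead.
keywords: AssociationProxy[List[Keyword]] = association_proxy(
    "user_keyword_associations",
    "keyword",
    creator=lambda kw: UserKeywordAssociation(keyword=kw),
)
```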
### Additional context
_No response_ | 1medium
|
Title: [Bug]: MODEL_ZOO is not reused in subprocesses
Body: Since `model_key` (the hash of partial function) changes in mapped processes, the preloaded models are never truly reused.
https://github.com/modelscope/data-juicer/blob/aaa404a5b12c9ef87ebf54a7bf38c7b4bcbfd0f1/data_juicer/utils/model_utils.py#L546-L549 | 1medium
|
Title: id missing in the payload for the endpoint POST /accounts
Body: In the response for GET /\__api\__,
for the endpoint POST /accounts, the ObjectSchema( body) is
<img width="1711" alt="Screenshot 2022-04-15 at 2 07 07 PM" src="https://user-images.githubusercontent.com/26853764/163551763-7f9821ef-0586-4fb6-bc3b-0e84423ffcb1.png">
Using this body gives the following error,
```
{
"code": 400,
"errno": 107,
"error": "Invalid parameters",
"message": "data.id in body: Required",
"details": [
{
"location": "body",
"name": "data.id",
"description": "data.id in body: Required"
}
]
}
```
The "id" field is missing in the body of the endpoint POST /accounts in the OpenAPI Spec.
Correct payload should be,
```
{
"data": {
"password": "string",
"id": "username",
"additionalProp1": "string",
"additionalProp2": "string",
"additionalProp3": "string"
},
"permissions": {
"read": [
"string"
],
"write": [
"string"
]
}
}
``` | 1medium
|
Title: In tf1.13.1 version, bert performs downstream tasks, how to run bert on multiple GPUs?
Body: As written on GitHub: "Yes, all of the code in this repository works out-of-the-box with CPU, GPU, and Cloud TPU. However, GPU training is single-GPU only."
When fine-tuning BERT on downstream tasks, can it train on multiple GPUs? If not, when will multi-GPU training for downstream tasks be supported? When training BERT on very large, long texts, an error is reported and efficiency is low. | 1medium
|
Title: Can't find "index"
Body: Hi! I'm running into some trouble running sweetviz.
My code is `report = sw.analyze(df_a)`
But Python raised the following KeyError:
`"None of ['index'] are in the columns"`
I checked my column names and tried resetting the index as well; however, Python still raises the error.
Hoping for your help, if convenient!
The full error message:
`---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[57], line 1
----> 1 report = sw.analyze(df_a)
File ~\AppData\Roaming\Python\Python39\site-packages\sweetviz\sv_public.py:12, in analyze(source, target_feat, feat_cfg, pairwise_analysis)
8 def analyze(source: Union[pd.DataFrame, Tuple[pd.DataFrame, str]],
9 target_feat: str = None,
10 feat_cfg: FeatureConfig = None,
11 pairwise_analysis: str = 'auto'):
---> 12 report = sweetviz.DataframeReport(source, target_feat, None,
13 pairwise_analysis, feat_cfg)
14 return report
File ~\AppData\Roaming\Python\Python39\site-packages\sweetviz\dataframe_report.py:256, in DataframeReport.__init__(self, source, target_feature_name, compare, pairwise_analysis, fc)
253 for f in features_to_process:
254 # start = time.perf_counter()
255 self.progress_bar.set_description_str(f"Feature: {f.source.name}")
--> 256 self._features[f.source.name] = sa.analyze_feature_to_dictionary(f)
257 self.progress_bar.update(1)
258 # print(f"DONE FEATURE------> {f.source.name}"
259 # f" {(time.perf_counter() - start):.2f} {self._features[f.source.name]['type']}")
260 # self.progress_bar.set_description_str('[FEATURES DONE]')
261 # self.progress_bar.close()
262
263 # Wrap up summary
File ~\AppData\Roaming\Python\Python39\site-packages\sweetviz\series_analyzer.py:92, in analyze_feature_to_dictionary(to_process)
89 | 1medium
|
Title: Add conda installation instructions
Body: | 0easy
|
Title: Shapefile gives warning with GSHHS feature
Body: Hi,
shapefile gives me a warning when I save a figure which has GSHHS features in it. Thought I'd ask it here:
MWE:
```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy

fig = plt.figure()
proj = ccrs.Mercator(central_longitude=0)
ax = plt.axes(projection=proj)
ax.add_feature(cartopy.feature.GSHHSFeature(levels=(1, 2), linewidth=2.0))
print("before")
fig.savefig("test.png")
print("after")
```
output:
before
```
/home/max/.local/lib/python3.8/site-packages/shapefile.py:385: UserWarning: Shapefile shape has invalid polygon: no exterior rings found (must have clockwise orientation); interpreting holes as exteriors.
warnings.warn('Shapefile shape has invalid polygon: no exterior rings found (must have clockwise orientation); interpreting holes as exteriors.')
```
after
What is the cause of this and how can it be resolved?
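In the meantime I silence it like this (a sketch; `filterwarnings` matches the start of the message text):
```python
import warnings

# Suppress just this shapefile polygon-orientation warning
warnings.filterwarnings(
    "ignore",
    message="Shapefile shape has invalid polygon",
)
```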
OS: Ubuntu 20.04
Cartopy: 0.18.0
pyshp: 2.1.2
Thanks in advance. | 1medium
|
Title: adjust font size under heatmap_track.yticks
Body: Great package!
I want to know how to adjust the font size of the `heatmap_track.yticks` labels.
I followed the example https://moshi4.github.io/pyCirclize/phylogenetic_tree/#2-3-with-heatmap
```
from pycirclize import Circos
from pycirclize.utils import load_example_tree_file, ColorCycler
import numpy as np
np.random.seed(0)
tree_file = load_example_tree_file("large_example.nwk")
circos, tv = Circos.initialize_from_tree(
tree_file,
start=-350,
end=0,
r_lim=(10, 80),
leaf_label_size=5,
leaf_label_rmargin=21,
line_kws=dict(color="lightgrey", lw=1),
)
# Define group-species dict for tree annotation
# In this example, set minimum species list to specify group's MRCA node
group_name2species_list = dict(
Monotremata=["Tachyglossus_aculeatus", "Ornithorhynchus_anatinus"],
Marsupialia=["Monodelphis_domestica", "Vombatus_ursinus"],
Xenarthra=["Choloepus_didactylus", "Dasypus_novemcinctus"],
Afrotheria=["Trichechus_manatus", "Chrysochloris_asiatica"],
Euarchontes=["Galeopterus_variegatus", "Theropithecus_gelada"],
Glires=["Oryctolagus_cuniculus", "Microtus_oregoni"],
Laurasiatheria=["Talpa_occidentalis", "Mirounga_leonina"],
)
# Set tree line color
ColorCycler.set_cmap("Set2")
for species_list in group_name2species_list.values():
tv.set_node_line_props(species_list, color=ColorCycler())
# Plot heatmap
sector = circos.sectors[0]
heatmap_track = sector.add_track((80, 100))
matrix_data = np.random.randint(0, 100, (5, tv.leaf_num))
heatmap_track.heatmap(matrix_data, cmap="viridis")
heatmap_track.yticks([0.5, 1.5, 2.5, 3.5, 4.5], list("EDCBA"), vmax=5, tick_length=0, fontsize=10)
fig = circos.plotfig()
```
but got
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-56-ce35496d538a>](https://localhost:8080/#) in <cell line: 0>()
39 matrix_data = np.random.randint(0, 100, (5, tv.leaf_num))
40 heatmap_track.heatmap(matrix_data, cmap="viridis")
---> 41 heatmap_track.yticks([0.5, 1.5, 2.5, 3.5, 4.5], list("EDCBA"), vmax=5, tick_length=0, fontsize=10)
42
43 fig = circos.plotfig()
TypeError: Track.yticks() got an unexpected keyword argument 'fontsize'
``` | 0easy
|
Title: Please work on browser fingerprints
Body: Please add bypass options for
canvas fingerprinting
audio fingerprinting
and also timezone integration on browser level
I'd like to work with you to implement these features.
| 2hard
|
Title: I want a parameter configuration that reproduces the results
Body: Using the operating instructions you provided and the parameter settings in the documentation, I could not reproduce the results,
and the parameter settings in the article are even less effective.
Could you provide a parameter configuration that achieves the results from the article, and add it to the documentation?
```
Evaluating market1501 (source)
Extracting features from query set ...
Done, obtained 3368-by-512 matrix
Extracting features from gallery set ...
Done, obtained 15913-by-512 matrix
Speed: 0.0299 sec/batch
Computing distance matrix with metric=euclidean ...
Computing CMC and mAP ...
** Results **
mAP: 78.0%
CMC curve
Rank-1 : 91.3%
Rank-5 : 96.8%
Rank-10 : 97.9%
Rank-20 : 98.8%
Checkpoint saved to "log/osnet_x1_0-softmax-market1501/model/model.pth.tar-64"
```
has converged
```
Evaluating market1501 (source)
Extracting features from query set ...
Done, obtained 3368-by-512 matrix
Extracting features from gallery set ...
Done, obtained 15913-by-512 matrix
Speed: 0.0299 sec/batch
Computing distance matrix with metric=euclidean ...
Computing CMC and mAP ...
** Results **
mAP: 78.1%
CMC curve
Rank-1 : 91.3%
Rank-5 : 96.8%
Rank-10 : 97.9%
Rank-20 : 98.8%
Checkpoint saved to "log/osnet_x1_0-softmax-market1501/model/model.pth.tar-100"
```
```
Total params: 2,578,879
Trainable params: 2,578,879
Non-trainable params: 0
Input size (MB): 0.38
Forward/backward pass size (MB): 282.45
Params size (MB): 9.84
Estimated Total Size (MB): 292.66
Loading checkpoint from "******************************/model.pth.tar-100"
Loaded model weights
Loaded optimizer
Last epoch = 100
Last rank1 = 91.3%
dist_metric='cosine'
Evaluating dukemtmcreid (target)
Extracting features from query set ...
Done, obtained 2228-by-512 matrix
Extracting features from gallery set ...
Done, obtained 17661-by-512 matrix
Speed: 0.0303 sec/batch
Computing distance matrix with metric=cosine ...
Computing CMC and mAP ...
** Results **
mAP: 24.3%
CMC curve
Rank-1 : 41.7%
Rank-5 : 57.7%
Rank-10 : 63.3%
Rank-20 : 69.0%
```
| 1medium
|
Title: BUG: Single Index of Tuples as Output on Tuple Groupings
Body: ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
df = pandas.DataFrame({"A" : [1,2,3,1,2,3], "B": [4,5,6,4,5,6], "C" : [7,8,9,7,8,9]})
df = df.set_index(['A', 'B'])
df.groupby(lambda x : (x[0], x[1])).aggregate('sum')
```
### Issue Description
This issue appears to be similar to #24786 except that here a lambda is used to create the tuples. The [User Guide for Pandas](https://pandas.pydata.org/docs/user_guide/groupby.html) states that
> The result of the aggregation will have the group names as the new index. In the case of multiple keys, the result is a MultiIndex by default.
Thus I was expecting the tuples to be automatically combined to form a ```MultiIndex``` as opposed to an ```Index``` with tuples as indices. Internally, the ```Index.map()``` [call](https://github.com/pandas-dev/pandas/blob/8a5344742c5165b2595f7ccca9e17d5eff7f7886/pandas/core/groupby/grouper.py#L511) converts the produced list of tuples to a ```MultiIndex```, but when it [produces the aggregation index](https://github.com/pandas-dev/pandas/blob/8a5344742c5165b2595f7ccca9e17d5eff7f7886/pandas/core/groupby/ops.py#L753), it appears to produce a single ``Index``` instead.
I wasn't sure if this was intended behaviour, or a bug; I apologize if it is the former!
### Expected Behavior
```python
df = pandas.DataFrame({"A" : [1,2,3,1,2,3], "B": [4,5,6,4,5,6], "C" : [7,8,9,7,8,9]})
df = df.set_index(['A', 'B'])
df.groupby(['A', 'B']).aggregate('sum')
```
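For now I rebuild the index explicitly as a workaround (sketch):
```python
result = df.groupby(lambda x: (x[0], x[1])).aggregate('sum')
# Convert the Index of tuples into the MultiIndex I expected
result.index = pd.MultiIndex.from_tuples(result.index, names=df.index.names)
```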
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.1
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.18363
machine : AMD64
processor : AMD64 Family 21 Model 96 Stepping 1, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_Canada.1252
pandas : 2.2.3
numpy : 2.2.1
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None</details>
| 1medium
|
Title: Quality problems with H264 Codec and Server example
Body: Hey, I have the following setup:
- The [aiortc server](https://github.com/aiortc/aiortc/blob/main/examples/server/server.py) from the aiortc examples is running on a Windows machine
- An Android phone is connecting to the server (Using Google's Java WebRTC implementation)
This works perfectly fine as long as I'm using a VP8 Encoder/Decoder. Using the H264 Codec, the video stream sent to the aiortc server also looks fine (Tested this by recording it). However, the video stream sent back to the Android system is very bad quality-wise (low PFS, bad image). Looking at the Sender Statistics, the server receives millions of bytes, but only sends back a couple thousand.
A similar issue also happens with the original server example, using the following setup:
- [Aiortc server](https://github.com/aiortc/aiortc/blob/main/examples/server/server.py) from original example, runs on Windows
- [Javascript client](https://github.com/aiortc/aiortc/blob/main/examples/server/client.js) from original example (Using Firefox/Chrome on Windows/Android using Virtual Camera/Phone Camera -> all same result)
Here, the quality seems fine originally, but degrades within 30-60 seconds of connection, eventually losing all colors and getting ultra blurry. This also does not happen with VP8, only with H264.
Could it be that the aiortc H264 Codec is not compatible with other implementations? | 1medium
|
Title: Add linewidth parameter to plot method
Body: #### Description
It seems that this parameter has a significant effect on the output
## Linewidth: 1.2 | Spot Size: 0.1

## Linewidth: 1.2 | Spot Size: 10

## Linewidth: 12 | Spot Size: 0.1

## Linewidth: 12 | Spot Size: 10

| 1medium
|
Title: Barplot
Body: The barplot's maximum value shows almost half of the real data's maximum, but matplotlib's bar fits the data well.
Try comparing this:
> import seaborn as sns
> tips = sns.load_dataset('tips')
> sns.barplot(y=tips['total_bill'], x=tips['day'])
and
> import matplotlib.pyplot as plt
> plt.bar(height=tips['total_bill'], x=tips['day']) | 1medium
|
Title: Feature Request
Body: ### Missing functionality
As a frequent user of ydata_profiling, I am encountering the issue below.
In the given dataset, for numeric columns we have to exclude empty cells while calculating 'sum'; when empty cells are present, the value of 'sum' comes out as 'NaN'. If we replace empty cells with '0' it impacts the 'min' value, and if we replace them with some other value it impacts the data type of the corresponding column.
### Proposed feature
Exclude empty or null cells of numeric columns while calculating 'sum', so that the value of 'sum' does not come out as 'NaN'.
### Alternatives considered
The logic below in describe_numeric_spark.py is where 'sum' is calculated; please correct me if I am wrong:
```python
@describe_numeric_1d.register
def describe_numeric_1d_spark(
    config: Settings, df: DataFrame, summary: dict
) -> Tuple[Settings, DataFrame, dict]:
    """Describe a boolean series.
    Args:
        series: The Series to describe.
        summary: The dict containing the series description so far.
    Returns:
        A dict containing calculated series description values.
    """
    stats = numeric_stats_spark(df, summary)
    summary["min"] = stats["min"]
    summary["max"] = stats["max"]
    summary["mean"] = stats["mean"]
    summary["std"] = stats["std"]
    summary["variance"] = stats["variance"]
    summary["skewness"] = stats["skewness"]
    summary["kurtosis"] = stats["kurtosis"]
    summary["sum"] = stats["sum"]
```
### Additional context
We are building a wheel file from our code, installing it on a Databricks cluster, and trying to do exploratory data analysis of the given source dataset in CSV format | 1medium
|
Title: new version numpy from text-generation-webui
Body: ### Describe the bug
one month ago all working fine...
btw you have the best real OFFLINE voice extention for obadooga ;)
### To Reproduce
loadthe extention
### Expected behavior
_No response_
### Logs
```shell
12:58:44-519624 INFO Loading the extension "coqui_tts"
12:58:45-869491 ERROR Failed to load the extension "coqui_tts".
Traceback (most recent call last):
File "e:\text-generation-webui\modules\extensions.py", line 37, in load_extensions
extension = importlib.import_module(f"extensions.{name}.script")
File "e:\text-generation-webui\installer_files\env\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "e:\text-generation-webui\extensions\coqui_tts\script.py", line 10, in <module>
from TTS.api import TTS
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\api.py", line 9, in <module>
from TTS.cs_api import CS_API
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\cs_api.py", line 12, in <module>
from TTS.utils.audio.numpy_transforms import save_wav
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\utils\audio\__init__.py", line 1, in <module> from TTS.utils.audio.processor import AudioProcessor
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\utils\audio\processor.py", line 10, in <module>
from TTS.utils.audio.numpy_transforms import (
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\utils\audio\numpy_transforms.py", line 8, in <module>
from librosa import magphase, pyin
File "e:\text-generation-webui\installer_files\env\lib\site-packages\lazy_loader\__init__.py", line 78, in __getattr__ attr = getattr(submod, name)
File "e:\text-generation-webui\installer_files\env\lib\site-packages\lazy_loader\__init__.py", line 77, in __getattr__ submod = importlib.import_module(submod_path)
File "e:\text-generation-webui\installer_files\env\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "e:\text-generation-webui\installer_files\env\lib\site-packages\librosa\core\spectrum.py", line 12, in <module>
from numba import jit
File "e:\text-generation-webui\installer_files\env\lib\site-packages\numba\__init__.py", line 55, in <module>
_ensure_critical_deps()
File "e:\text-generation-webui\installer_files\env\lib\site-packages\numba\__init__.py", line 42, in _ensure_critical_deps
raise ImportError("Numba needs NumPy 1.24 or less")
ImportError: Numba needs NumPy 1.24 or less
Running on local URL: http://127.0.0.1:7861
```
### Environment
```shell
-
```
### Additional context
_No response_ | 1medium
|
Title: Add MindSpore support.
Body: Dear maintainers,
We are developing a MindSpore-adapted version of d2l-zh. However, we found that the English and Chinese versions are very different; could we just follow the newest version for development? If possible, can you create a new repository like 'd2l-jax-colab'? | 1medium
|
Title: Keybind to close Workflow
Body: ### Feature Idea
Dear ComfyUI Team,
I would like to request the addition of a keyboard shortcut to quickly close the current workspace (workflow). This feature would improve efficiency and user experience by providing a faster way to manage workflows without relying solely on the mouse.
Thank you for considering this suggestion!
Best regards,
### Existing Solutions
_No response_
### Other
_No response_ | 0easy
|
Title: [Bug]: UserWarning on skipping serialisation of PostGradPassManager
Body: ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.7
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 16 2024, 04:38:48) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5317 CPU @ 3.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 1
Core(s) per socket: 12
Socket(s): 1
Stepping: 6
BogoMIPS: 6002.58
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush acpi mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves umip pku ospke gfni vaes vpclmulqdq rdpid md_clear flush_l1d arch_capabilities
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 576 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 15 MiB (12 instances)
L3 cache: 216 MiB (12 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.2.post1+cu124torch2.5
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.1
[pip3] sentence-transformers==3.2.1
[pip3] torch==2.6.0
[pip3] torchao==0.9.0
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.21.0
[pip3] transformers==4.49.0
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==3.2.0
[pip3] tritonclient==2.51.0
[pip3] vector-quantize-pytorch==1.21.2
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.3.dev758+g489b7938
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X 0-11 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
CUDA_PATH=/nix/store/skdw4l72lgrj628l0arj1m3ynlzfksi8-cuda-merged-12.4
LD_LIBRARY_PATH=/workspace/vllm/.venv/lib/python3.11/site-packages/cv2/../../lib64:/nix/store/lmyyfaz2amcs2an1f6m9h263151jiajy-cuda-merged-12.4/lib
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
</details>
### 🐛 Describe the bug
From time to time, I have started seeing this serialization warning when running `vllm serve` with v1:
```prolog
/workspace/project/.venv/lib/python3.12/site-packages/torch/utils/_config_module.py:189: UserWarning: Skipping serialization of post_grad_custom_post_pass value <vllm.compilation.pass_manager.PostGradPassManager object at 0x7f23314f7c20>
warnings.warn(f"Skipping serialization of {k} value {v}")
```
Not sure how impactful this is, or whether we can ignore it.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 1medium
|
Title: Hipchat is dropping XMPP, we need to port the backend fully to their API.
Body: In the past it was not possible, but now it should be since they are dropping XMPP. Re-evaluate, and replace the XMPP calls with native API ones. | 2hard
|
Title: LORA
Body: An implementation of LoRA and other fine-tuning techniques would be nice. | 2hard
|
Title: falcon.testing.* is not (re-)exported
Body: Now that falcon 4 with type hints has been released 🎉 , we've enabled type checking using mypy as well for the falcon namespace for our codebase.
Unfortunately, `falcon.testing.__init__` does not reexport the names it imports, which causes complaints by mypy, for example:
```
tests/test_subscription_data.py:13: error: Name "falcon.testing.TestClient" is not defined [name-defined]
```
(PR will follow)
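For reference, a minimal sketch of the kind of fix I have in mind (PEP 484 redundant-alias re-exports; exact module paths assumed):
```python
# falcon/testing/__init__.py: explicit re-exports so type checkers
# treat the imported names as public.
from falcon.testing.client import TestClient as TestClient
from falcon.testing.helpers import create_environ as create_environ
```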
| 1medium
|
Title: errorbar wont be plotted if using 'col'
Body: Using the following command
sns.catplot(data=df, x=x, y=y, hue=hue, kind='bar', height=6, aspect=1)
**will display error bars**
sns.catplot(data=df, x=x, y=y, hue=hue, **col=col,** kind='bar', height=6, aspect=1) or with some variations like adding **errorbar=('ci', 98)**
**will not display error bars** | 1medium
|
Title: [BUG]
Body: **描述出现的错误**
大佬帮忙看一下是不是py的版本有问题。 module not found。
**bug复现**
点击build.bat

在build.bat加入--exclude-module _bootlocale后build成功

但是运行时报错

**桌面(请填写以下信息):**
-操作系统:windows10 64bit
-vpn代理
关闭
-python版本
Python 3.10.8
pyinstaller 3.6
**附文**
在此处添加有关此问题的文字。
| 1medium
|
Title: Restrict dash version to be less than 1.21.0
Body: For the current codebase, restrict the dash version to address #349 | 1medium
|
Title: Error during WebSocket handshake: Unexpected response code: 400
Body: I have a Django Socket.IO server and a ReactJS client.
When I run both the client and the server, I get the error `WebSocket connection to 'ws://localhost:8000/socket.io/?EIO=3&transport=websocket' failed: Error during WebSocket handshake: Unexpected response code: 400` on the client side and `"GET /socket.io/?EIO=3&transport=websocket HTTP/1.1" 400 11` on the server side.
socket.io-client version: 2.3.0
django version: 3.1.3 | 1medium
|
Title: Using a timestamp column for facet_col or facet_row gives a KeyError
Body: Using a column which is a `datetime64[ns]` (e.g. what `pd.to_datetime()` or `pd.date_range()` returns) for `facet_col` or `facet_row` gives a key error.
```python
import pandas as pd
import numpy as np
import plotly.express as px
df = pd.DataFrame({'Cost': np.random.normal(size=20), 'Date': pd.date_range(start='2023-04-20', periods=20)})
px.histogram(df, x='Cost', facet_col='Date')
```
This gives `KeyError: Timestamp('2023-04-20 00:00:00')`, as does `facet_row='Date'`.
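A sketch of the workaround, converting the facet column before plotting:
```python
# Stringifying the dates sidesteps the Timestamp KeyError in faceting
px.histogram(df.assign(Date=df['Date'].astype(str)), x='Cost', facet_col='Date')
```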
Converting the timestamp column to a string or a `datetime.date` (e.g. via `df['Date'].dt.date`, as in the sketch above) is a temporary workaround. | 1medium
|
Title: How to link data inside topic model to original training data without preprocessed?
Body: I use BERTopic with Chinese text.
So I must split the text into tokens, add a space between every 2 tokens, remove stopwords and some special symbols, and then feed the processed text into BERTopic.
The problem is: how can I link each document back to the original training data, before preprocessing? If I use the documents stored in the model, they are tokens without stopwords etc., which is weird.
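A minimal sketch of what I mean (assuming `fit_transform` keeps the input order, so topic assignments can be zipped with the original texts):
```python
# processed_docs[i] was derived from original_docs[i], so indexes line up
topics, probs = topic_model.fit_transform(processed_docs)
doc_to_topic = list(zip(original_docs, topics))
```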
Is there any better way than transforming every text again? Does anybody have better ideas? | 1medium
|
Title: Add a custom URL type instead of rely on `httpx.URL`
Body: - We use the `httpx.URL` in the [ProxyConfiguration](https://github.com/apify/crawlee-python/blob/master/src/crawlee/proxy_configuration.py), in the tests and maybe in other places as well.
- It kind of shot us into the foot in #618.
- ~I suggest replacing it with our custom data type (either the Pydantic model or data class).~
- As Honza said, we could probably utilize some 3rd party library, [yarl](https://pypi.org/project/yarl/) seems like a good option.
- Also please find other occurrences of the `httpx.URL` in the Crawlee and replace them with our custom type.
- This applies also to tests: `httpbin: URL`
- Keep in mind, that URLs in the `Request` model need to be serialized as strings. | 1medium
|
Title: IPython 9 `logfile` causes crash
Body: Running `ipython` with `--logfile` or `--logappend` causes a crash in `ipython>=9`
e.g. `ipython --logfile=log.txt`
This is failing due to the following error:
```py
File "/usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py", line 817, in init_logstart
self.magic('logstart %s' % self.logfile)
^^^^^^^^^^
AttributeError: 'TerminalInteractiveShell' object has no attribute 'magic'
```
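A sketch of what I would expect the fix to look like (assuming `run_line_magic` is the intended replacement for the removed `magic()` helper):
```python
# Hypothetical patch to InteractiveShell.init_logstart in IPython 9
def init_logstart(self):
    """Initialize logging in case it was requested at the command line."""
    if self.logappend:
        self.run_line_magic("logstart", "%s append" % self.logappend)
    elif self.logfile:
        self.run_line_magic("logstart", self.logfile)
    elif self.logstart:
        self.run_line_magic("logstart", "")
```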
---
Using docker for reproducibility
`docker pull python@sha256:385ccb8304f6330738a6d9e6fa0bd7608e006da7e15bc52b33b0398e1ba4a15b`
(digest matches current `latest` tag)
Installing `ipython==9.0.1` and running with `--logfile=log.txt` with verbose crash:
```sh
docker run --rm python@sha256:385ccb8304f6330738a6d9e6fa0bd7608e006da7e15bc52b33b0398e1ba4a15b \
sh -c \
'pip install -qq ipython
ipython --logfile=log.txt --BaseIPythonApplication.verbose_crash=True
cat /root/.ipython/Crash_report_ipython.txt'
```
<details>
```
---------------------------------------------------------------------------
---------------------------------------------------------------------------
AttributeError Python 3.13.2: /usr/local/bin/python3.13
Thu Mar 6 19:37:39 2025
A problem occurred executing Python code. Here is the sequence of function
calls leading up to the error, with the most recent (innermost) call last.
File /usr/local/bin/ipython:8
1 #!/usr/local/bin/python3.13
2 # -*- coding: utf-8 -*-
3 import re
4 import sys
5 from IPython import start_ipython
6 if __name__ == '__main__':
7 sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
----> 8 sys.exit(start_ipython())
File /usr/local/lib/python3.13/site-packages/IPython/__init__.py:139, in start_ipython(argv=None, **kwargs={})
113 def start_ipython(argv=None, **kwargs):
114 """Launch a normal IPython instance (as opposed to embedded)
115
116 `IPython.embed()` puts a shell in a particular calling scope,
(...) 136 allowing configuration of the instance (see :ref:`terminal_options`).
137 """
138 from IPython.terminal.ipapp import launch_new_instance
--> 139 return launch_new_instance(argv=argv, **kwargs)
launch_new_instance = <bound method Application.launch_instance of <class 'IPython.terminal.ipapp.TerminalIPythonApp'>>
argv = None
kwargs = {}
File /usr/local/lib/python3.13/site-packages/traitlets/config/application.py:1074, in Application.launch_instance(cls=<class 'IPython.terminal.ipapp.TerminalIPythonApp'>, argv=None, **kwargs={})
1067 @classmethod
1068 def launch_instance(cls, argv: ArgvType = None, **kwargs: t.Any) -> None:
1069 """Launch a global instance of this Application
1070
1071 If a global instance already exists, this reinitializes and starts it
1072 """
1073 app = cls.instance(**kwargs)
-> 1074 app.initialize(argv)
app = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
argv = None 1075 app.start()
File /usr/local/lib/python3.13/site-packages/traitlets/config/application.py:118, in catch_config_error.<locals>.inner(app=<IPython.terminal.ipapp.TerminalIPythonApp object>, *args=(None,), **kwargs={})
115 @functools.wraps(method)
116 def inner(app: Application, *args: t.Any, **kwargs: t.Any) -> t.Any:
117 try:
--> 118 return method(app, *args, **kwargs)
method = <function TerminalIPythonApp.initialize at 0xffffa318f1a0>
app = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
args = (None,)
kwargs = {} 119 except (TraitError, ArgumentError) as e:
120 app.log.fatal("Bad config encountered during initialization: %s", e)
121 app.log.debug("Config at the time: %s", app.config)
122 app.exit(1)
File /usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py:286, in TerminalIPythonApp.initialize(self=<IPython.terminal.ipapp.TerminalIPythonApp object>, argv=None)
274 @catch_config_error
275 def initialize(self, argv=None):
276 """Do actions after construct, but before starting the app."""
277 super(TerminalIPythonApp, self).initialize(argv)
278 if self.subapp is not None:
279 # don't bother initializing further, starting subapp
280 return
281 # print(self.extra_args)
282 if self.extra_args and not self.something_to_run:
283 self.file_to_run = self.extra_args[0]
284 self.init_path()
285 # create the shell
--> 286 self.init_shell()
self = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620> 287 # and draw the banner
288 self.init_banner()
289 # Now a variety of things that happen after the banner is printed.
290 self.init_gui_pylab()
291 self.init_extensions()
292 self.init_code()
File /usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py:300, in TerminalIPythonApp.init_shell(self=<IPython.terminal.ipapp.TerminalIPythonApp object>)
294 def init_shell(self):
295 """initialize the InteractiveShell instance"""
296 # Create an InteractiveShell instance.
297 # shell.display_banner should always be False for the terminal
298 # based app, because we call shell.show_banner() by hand below
299 # so the banner shows *before* all extension loading stuff.
--> 300 self.shell = self.interactive_shell_class.instance(parent=self,
self = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620> 301 profile_dir=self.profile_dir,
302 ipython_dir=self.ipython_dir, user_ns=self.user_ns)
303 self.shell.configurables.append(self)
File /usr/local/lib/python3.13/site-packages/traitlets/config/configurable.py:583, in SingletonConfigurable.instance(cls=<class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'>, *args=(), **kwargs={'ipython_dir': '/root/.ipython', 'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>, 'profile_dir': <IPython.core.profiledir.ProfileDir object>, 'user_ns': None})
553 @classmethod
554 def instance(cls: type[CT], *args: t.Any, **kwargs: t.Any) -> CT:
555 """Returns a global instance of this class.
556
557 This method create a new instance if none have previously been created
(...) 579 True
580 """
581 # Create and save the instance
582 if cls._instance is None:
--> 583 inst = cls(*args, **kwargs)
cls = <class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'>
args = ()
kwargs = {'parent': <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>, 'profile_dir': <IPython.core.profiledir.ProfileDir object at 0xffffa354bcb0>, 'ipython_dir': '/root/.ipython', 'user_ns': None} 584 # Now make sure that the instance will also be returned by
585 # parent classes' _instance attribute.
586 for subclass in cls._walk_mro():
587 subclass._instance = inst
589 if isinstance(cls._instance, cls):
590 return cls._instance
591 else:
592 raise MultipleInstanceError(
593 f"An incompatible sibling of '{cls.__name__}' is already instantiated"
594 f" as singleton: {type(cls._instance).__name__}"
595 )
File /usr/local/lib/python3.13/site-packages/IPython/terminal/interactiveshell.py:977, in TerminalInteractiveShell.__init__(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>, *args=(), **kwargs={'ipython_dir': '/root/.ipython', 'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>, 'profile_dir': <IPython.core.profiledir.ProfileDir object>, 'user_ns': None})
976 def __init__(self, *args, **kwargs) -> None:
--> 977 super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590>
args = ()
kwargs = {'parent': <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>, 'profile_dir': <IPython.core.profiledir.ProfileDir object at 0xffffa354bcb0>, 'ipython_dir': '/root/.ipython', 'user_ns': None}
TerminalInteractiveShell = <class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'> 978 self._set_autosuggestions(self.autosuggestions_provider)
979 self.init_prompt_toolkit_cli()
980 self.init_term_title()
981 self.keep_running = True
982 self._set_formatter(self.autoformatter)
File /usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py:650, in InteractiveShell.__init__(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>, ipython_dir='/root/.ipython', profile_dir=<IPython.core.profiledir.ProfileDir object>, user_module=None, user_ns=None, custom_exceptions=((), None), **kwargs={'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>})
632 self.init_logger()
633 self.init_builtins()
635 # The following was in post_config_initialization
636 self.raw_input_original = input
637 self.init_completer()
638 # TODO: init_io() needs to happen before init_traceback handlers
639 # because the traceback handlers hardcode the stdout/stderr streams.
640 # This logic in in debugger.Pdb and should eventually be changed.
641 self.init_io()
642 self.init_traceback_handlers(custom_exceptions)
643 self.init_prompts()
644 self.init_display_formatter()
645 self.init_display_pub()
646 self.init_data_pub()
647 self.init_displayhook()
648 self.init_magics()
649 self.init_alias()
--> 650 self.init_logstart()
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590> 651 self.init_pdb()
652 self.init_extension_manager()
653 self.init_payload()
654 self.events.trigger('shell_initialized', self)
655 atexit.register(self.atexit_operations)
657 # The trio runner is used for running Trio in the foreground thread. It
658 # is different from `_trio_runner(async_fn)` in `async_helpers.py`
659 # which calls `trio.run()` for every cell. This runner runs all cells
660 # inside a single Trio event loop. If used, it is set from
661 # `ipykernel.kernelapp`.
662 self.trio_runner = None
File /usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py:817, in InteractiveShell.init_logstart(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>)
811 def init_logstart(self):
812 """Initialize logging in case it was requested at the command line.
813 """
814 if self.logappend:
815 self.magic('logstart %s append' % self.logappend)
816 elif self.logfile:
--> 817 self.magic('logstart %s' % self.logfile)
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590> 818 elif self.logstart:
819 self.magic('logstart')
AttributeError: 'TerminalInteractiveShell' object has no attribute 'magic'
**********************************************************************
Oops, ipython crashed. We do our best to make it stable, but...
A crash report was automatically generated with the following information:
- A verbatim copy of the crash traceback.
- A copy of your input history during this session.
- Data on your current ipython configuration.
It was left in the file named:
'/root/.ipython/Crash_report_ipython.txt'
If you can email this file to the developers, the information in it will help
them in understanding and correcting the problem.
You can mail it to: The IPython Development Team at [email protected]
with the subject 'ipython Crash Report'.
If you want to do it now, the following command will work (under Unix):
mail -s 'ipython Crash Report' [email protected] < /root/.ipython/Crash_report_ipython.txt
In your email, please also include information about:
- The operating system under which the crash happened: Linux, macOS, Windows,
other, and which exact version (for example: Ubuntu 16.04.3, macOS 10.13.2,
Windows 10 Pro), and whether it is 32-bit or 64-bit;
- How ipython was installed: using pip or conda, from GitHub, as part of
a Docker container, or other, providing more detail if possible;
- How to reproduce the crash: what exact sequence of instructions can one
input to get the same crash? Ideally, find a minimal yet complete sequence
of instructions that yields the crash.
To ensure accurate tracking of this issue, please file a report about it at:
https://github.com/ipython/ipython/issues
Hit <Enter> to quit (your terminal may close):Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/IPython/core/application.py", line 288, in excepthook
return self.crash_handler(etype, evalue, tb)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/IPython/core/crashhandler.py", line 206, in __call__
builtin_mod.input("Hit <Enter> to quit (your terminal may close):")
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
EOFError: EOF when reading a line
Original exception was:
Traceback (most recent call last):
File "/usr/local/bin/ipython", line 8, in <module>
sys.exit(start_ipython())
~~~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/IPython/__init__.py", line 139, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/usr/local/lib/python3.13/site-packages/traitlets/config/application.py", line 1074, in launch_instance
app.initialize(argv)
~~~~~~~~~~~~~~^^^^^^
File "/usr/local/lib/python3.13/site-packages/traitlets/config/application.py", line 118, in inner
return method(app, *args, **kwargs)
File "/usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py", line 286, in initialize
self.init_shell()
~~~~~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py", line 300, in init_shell
self.shell = self.interactive_shell_class.instance(parent=self,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
profile_dir=self.profile_dir,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ipython_dir=self.ipython_dir, user_ns=self.user_ns)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/traitlets/config/configurable.py", line 583, in instance
inst = cls(*args, **kwargs)
File "/usr/local/lib/python3.13/site-packages/IPython/terminal/interactiveshell.py", line 977, in __init__
super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py", line 650, in __init__
self.init_logstart()
~~~~~~~~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py", line 817, in init_logstart
self.magic('logstart %s' % self.logfile)
^^^^^^^^^^
AttributeError: 'TerminalInteractiveShell' object has no attribute 'magic'
***************************************************************************
IPython post-mortem report
{'commit_hash': 'd64897cf0',
'commit_source': 'installation',
'default_encoding': 'utf-8',
'ipython_path': '/usr/local/lib/python3.13/site-packages/IPython',
'ipython_version': '9.0.1',
'os_name': 'posix',
'platform': 'Linux-6.10.14-linuxkit-aarch64-with-glibc2.36',
'sys_executable': '/usr/local/bin/python3.13',
'sys_platform': 'linux',
'sys_version': '3.13.2 (main, Feb 25 2025, 21:31:02) [GCC 12.2.0]'}
***************************************************************************
Application name: ipython
Current user configuration structure:
{'BaseIPythonApplication': {'verbose_crash': True},
'TerminalInteractiveShell': {'logfile': 'log.txt'}}
***************************************************************************
Crash traceback:
---------------------------------------------------------------------------
---------------------------------------------------------------------------
AttributeError Python 3.13.2: /usr/local/bin/python3.13
Thu Mar 6 19:37:39 2025
A problem occurred executing Python code. Here is the sequence of function
calls leading up to the error, with the most recent (innermost) call last.
File /usr/local/bin/ipython:8
1 #!/usr/local/bin/python3.13
2 # -*- coding: utf-8 -*-
3 import re
4 import sys
5 from IPython import start_ipython
6 if __name__ == '__main__':
7 sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
----> 8 sys.exit(start_ipython())
File /usr/local/lib/python3.13/site-packages/IPython/__init__.py:139, in start_ipython(argv=None, **kwargs={})
113 def start_ipython(argv=None, **kwargs):
114 """Launch a normal IPython instance (as opposed to embedded)
115
116 `IPython.embed()` puts a shell in a particular calling scope,
(...) 136 allowing configuration of the instance (see :ref:`terminal_options`).
137 """
138 from IPython.terminal.ipapp import launch_new_instance
--> 139 return launch_new_instance(argv=argv, **kwargs)
launch_new_instance = <bound method Application.launch_instance of <class 'IPython.terminal.ipapp.TerminalIPythonApp'>>
argv = None
kwargs = {}
File /usr/local/lib/python3.13/site-packages/traitlets/config/application.py:1074, in Application.launch_instance(cls=<class 'IPython.terminal.ipapp.TerminalIPythonApp'>, argv=None, **kwargs={})
1067 @classmethod
1068 def launch_instance(cls, argv: ArgvType = None, **kwargs: t.Any) -> None:
1069 """Launch a global instance of this Application
1070
1071 If a global instance already exists, this reinitializes and starts it
1072 """
1073 app = cls.instance(**kwargs)
-> 1074 app.initialize(argv)
app = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
argv = None
 1075 app.start()
File /usr/local/lib/python3.13/site-packages/traitlets/config/application.py:118, in catch_config_error.<locals>.inner(app=<IPython.terminal.ipapp.TerminalIPythonApp object>, *args=(None,), **kwargs={})
115 @functools.wraps(method)
116 def inner(app: Application, *args: t.Any, **kwargs: t.Any) -> t.Any:
117 try:
--> 118 return method(app, *args, **kwargs)
method = <function TerminalIPythonApp.initialize at 0xffffa318f1a0>
app = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
args = (None,)
kwargs = {}
 119 except (TraitError, ArgumentError) as e:
120 app.log.fatal("Bad config encountered during initialization: %s", e)
121 app.log.debug("Config at the time: %s", app.config)
122 app.exit(1)
File /usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py:286, in TerminalIPythonApp.initialize(self=<IPython.terminal.ipapp.TerminalIPythonApp object>, argv=None)
274 @catch_config_error
275 def initialize(self, argv=None):
276 """Do actions after construct, but before starting the app."""
277 super(TerminalIPythonApp, self).initialize(argv)
278 if self.subapp is not None:
279 # don't bother initializing further, starting subapp
280 return
281 # print(self.extra_args)
282 if self.extra_args and not self.something_to_run:
283 self.file_to_run = self.extra_args[0]
284 self.init_path()
285 # create the shell
--> 286 self.init_shell()
self = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
 287 # and draw the banner
288 self.init_banner()
289 # Now a variety of things that happen after the banner is printed.
290 self.init_gui_pylab()
291 self.init_extensions()
292 self.init_code()
File /usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py:300, in TerminalIPythonApp.init_shell(self=<IPython.terminal.ipapp.TerminalIPythonApp object>)
294 def init_shell(self):
295 """initialize the InteractiveShell instance"""
296 # Create an InteractiveShell instance.
297 # shell.display_banner should always be False for the terminal
298 # based app, because we call shell.show_banner() by hand below
299 # so the banner shows *before* all extension loading stuff.
--> 300 self.shell = self.interactive_shell_class.instance(parent=self,
self = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
 301 profile_dir=self.profile_dir,
302 ipython_dir=self.ipython_dir, user_ns=self.user_ns)
303 self.shell.configurables.append(self)
File /usr/local/lib/python3.13/site-packages/traitlets/config/configurable.py:583, in SingletonConfigurable.instance(cls=<class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'>, *args=(), **kwargs={'ipython_dir': '/root/.ipython', 'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>, 'profile_dir': <IPython.core.profiledir.ProfileDir object>, 'user_ns': None})
553 @classmethod
554 def instance(cls: type[CT], *args: t.Any, **kwargs: t.Any) -> CT:
555 """Returns a global instance of this class.
556
557 This method create a new instance if none have previously been created
(...) 579 True
580 """
581 # Create and save the instance
582 if cls._instance is None:
--> 583 inst = cls(*args, **kwargs)
cls = <class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'>
args = ()
kwargs = {'parent': <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>, 'profile_dir': <IPython.core.profiledir.ProfileDir object at 0xffffa354bcb0>, 'ipython_dir': '/root/.ipython', 'user_ns': None}
 584 # Now make sure that the instance will also be returned by
585 # parent classes' _instance attribute.
586 for subclass in cls._walk_mro():
587 subclass._instance = inst
589 if isinstance(cls._instance, cls):
590 return cls._instance
591 else:
592 raise MultipleInstanceError(
593 f"An incompatible sibling of '{cls.__name__}' is already instantiated"
594 f" as singleton: {type(cls._instance).__name__}"
595 )
File /usr/local/lib/python3.13/site-packages/IPython/terminal/interactiveshell.py:977, in TerminalInteractiveShell.__init__(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>, *args=(), **kwargs={'ipython_dir': '/root/.ipython', 'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>, 'profile_dir': <IPython.core.profiledir.ProfileDir object>, 'user_ns': None})
976 def __init__(self, *args, **kwargs) -> None:
--> 977 super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590>
args = ()
kwargs = {'parent': <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>, 'profile_dir': <IPython.core.profiledir.ProfileDir object at 0xffffa354bcb0>, 'ipython_dir': '/root/.ipython', 'user_ns': None}
TerminalInteractiveShell = <class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'>
 978 self._set_autosuggestions(self.autosuggestions_provider)
979 self.init_prompt_toolkit_cli()
980 self.init_term_title()
981 self.keep_running = True
982 self._set_formatter(self.autoformatter)
File /usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py:650, in InteractiveShell.__init__(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>, ipython_dir='/root/.ipython', profile_dir=<IPython.core.profiledir.ProfileDir object>, user_module=None, user_ns=None, custom_exceptions=((), None), **kwargs={'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>})
632 self.init_logger()
633 self.init_builtins()
635 # The following was in post_config_initialization
636 self.raw_input_original = input
637 self.init_completer()
638 # TODO: init_io() needs to happen before init_traceback handlers
639 # because the traceback handlers hardcode the stdout/stderr streams.
640 # This logic in in debugger.Pdb and should eventually be changed.
641 self.init_io()
642 self.init_traceback_handlers(custom_exceptions)
643 self.init_prompts()
644 self.init_display_formatter()
645 self.init_display_pub()
646 self.init_data_pub()
647 self.init_displayhook()
648 self.init_magics()
649 self.init_alias()
--> 650 self.init_logstart()
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590>
 651 self.init_pdb()
652 self.init_extension_manager()
653 self.init_payload()
654 self.events.trigger('shell_initialized', self)
655 atexit.register(self.atexit_operations)
657 # The trio runner is used for running Trio in the foreground thread. It
658 # is different from `_trio_runner(async_fn)` in `async_helpers.py`
659 # which calls `trio.run()` for every cell. This runner runs all cells
660 # inside a single Trio event loop. If used, it is set from
661 # `ipykernel.kernelapp`.
662 self.trio_runner = None
File /usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py:817, in InteractiveShell.init_logstart(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>)
811 def init_logstart(self):
812 """Initialize logging in case it was requested at the command line.
813 """
814 if self.logappend:
815 self.magic('logstart %s append' % self.logappend)
816 elif self.logfile:
--> 817 self.magic('logstart %s' % self.logfile)
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590>
 818 elif self.logstart:
819 self.magic('logstart')
AttributeError: 'TerminalInteractiveShell' object has no attribute 'magic'
***************************************************************************
History of session input:
```
</details> | 1medium
|
Title: parametrize_with_cases: map case variables by name rather than position (dict vs list)
Body: Is it possible to use `parametrize_with_cases` so that the parameters of the decorated function are mapped from a dictionary of case data, rather than a list of case data? I'm using the `cases` parameter to generate the case data.
Below is an example that doesn't work (but wish it did):
```python
import pytest_cases
def _get_cases():
'''
Return a list of test cases to ensure that the addition operator works as expected
'''
return [
# test case 1
{
"value1": 1,
"value2": 2,
"expected": 3,
},
# test case 2
{
"value1": 10,
"value2": 20,
"expected": 30,
},
]
@pytest_cases.parametrize(case=_get_cases())
def get_cases(case):
return case
@pytest_cases.parametrize_with_cases('value1, value2, expected', cases=get_cases)
def test_addition(value1, value2, expected):
assert value1 + value2 == expected
```
Resulting errors...
```
FAILED debug_pytest_cases.py::test_addition[get_cases-case={'value1': 1, 'value2': 2, 'expected': 3}] - Exception: Unable to unpack parameter value to a tuple: dict_values([1, 2, 3])
FAILED debug_pytest_cases.py::test_addition[get_cases-case={'value1': 10, 'value2': 20, 'expected': 30}] - Exception: Unable to unpack parameter value to a tuple: dict_values([10, 20, 30])
```
This can be "fixed", by having each case be a list (rather than a dict)....
```python
@pytest_cases.parametrize(case=_get_cases())
def get_cases(case):
return case.values() # return the case as list of values only, e.g. [1,2,3]
```
... but obviously we lose the name mapping, resulting in the case variables getting improperly mapped into the test function...
```
========================================= FAILURES =========================================
__________ test_addition[get_cases-case={'expected': 3, 'value2': 2, 'value1': 1}] __________
value1 = 3, value2 = 2, expected = 1
@pytest_cases.parametrize_with_cases('value1, value2, expected', cases=get_cases)
def test_addition(value1, value2, expected):
> assert value1 + value2 == expected
E assert (3 + 2) == 1
```
So instead of simply converting the case data into a list of values (via `case.values()`), we could convert it to a list of tuples via `case.items()`, e.g. `[("expected", 3), ("value1", 1), ("value2", 2)]`
But now we need to unpack these values in our test function...
```python
@pytest_cases.parametrize_with_cases('case_data', cases=get_cases)
def test_addition(case_data):
# convert case_data back into its original dictionary form
case_data = dict(case_data)
# unpack variables from dict. brittle and ugly :(
value1, value2, expected = [case_data[k] for k in ['value1', 'value2', 'expected']]
assert value1 + value2 == expected
```
This certainly works (I've been using it for a couple of years), but it has a few drawbacks:
- Every test function always needs the same boilerplate code (unpacking the case data into discrete variables)
- The unpacking logic is brittle. It requires the parameters on both sides of `=` to mirror one another (including their order).
In conclusion, I think it would be notably simpler and more convenient to map case data (a dictionary of values) directly to the function parameters (by name) so that no unpacking is necessary (as shown in the top/original example).
I realize that there's certainly some caveats with this approach (e.g. parameter name clashes, illegal parameter names, etc), but I imagine this wouldn't be the first time such issues/compromises needed to be considered.
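In the meantime, one workaround I can imagine (an untested sketch, using `collections.namedtuple` purely for illustration) is to keep the name-to-value mapping inside each case object, so positional unpacking can never drift out of sync with the field names:
```python
from collections import namedtuple

import pytest_cases

# Each case carries its field names, so the tuple order always matches
# the argnames used in the test signature.
AdditionCase = namedtuple("AdditionCase", ["value1", "value2", "expected"])

def _get_cases():
    return [
        AdditionCase(value1=1, value2=2, expected=3),
        AdditionCase(value1=10, value2=20, expected=30),
    ]

@pytest_cases.parametrize(case=_get_cases())
def get_cases(case):
    return case  # a namedtuple unpacks like a plain tuple, in field order

@pytest_cases.parametrize_with_cases('value1, value2, expected', cases=get_cases)
def test_addition(value1, value2, expected):
    assert value1 + value2 == expected
```
Since a namedtuple is a tuple subclass, it should unpack the same way `case.values()` did above, just with a guaranteed order.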
... OR perhaps this functionality already exists and I just need to RTFM :)
Thank you! | 2hard
|
Title: Document table callbacks and configs
Body: ## Checklist
- [ ] on_changelog_event
- [ ] on_recover
| 1medium
|
Title: Option to keep output style
Body: It would be nice to have an option to keep the way output is displayed (standard, scrolled, or collapsed), since this is more a matter of formatting than of the outputs themselves (I enable scrolling when I know the output will be large but I still want to display it).
|
Title: how to deploy mercury to k8s cluster?
Body: | 1medium
|
Title: orange associate can't be loaded
Body: <!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
I have added an add-in called Associate.
Although all seems wel after installation- meaning that all the necessary source files seem in place,
the log shows a problem in loading Associate and some other things
```
2023-09-29 01:29:44,099:INFO:orangecanvas.registry.discovery: Could not import 'orangecontrib.associate.widgets.owassociate'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/associate/widgets/owassociate.py", line 9, in <module>
from AnyQt.QtWidgets import QTableView, qApp, QGridLayout, QLabel
ImportError: cannot import name 'qApp' from 'AnyQt.QtWidgets' (/home/frankc/orange/lib/python3.11/site-packages/AnyQt/QtWidgets.py)
2023-09-29 01:29:44,100:INFO:orangecanvas.registry.discovery: Could not import 'orangecontrib.associate.widgets.owitemsets'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/associate/widgets/owitemsets.py", line 9, in <module>
from AnyQt.QtWidgets import QTreeWidget, QTreeWidgetItem, qApp
ImportError: cannot import name 'qApp' from 'AnyQt.QtWidgets' (/home/frankc/orange/lib/python3.11/site-packages/AnyQt/QtWidgets.py)
2023-09-29 01:29:44,144:INFO:orangecanvas.registry.discovery: Could not import 'orangecontrib.educational.widgets.ow1ka'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/educational/widgets/ow1ka.py", line 24, in <module>
from Orange.widgets.utils.webview import wait
ImportError: cannot import name 'wait' from 'Orange.widgets.utils.webview' (/home/frankc/orange/lib/python3.11/site-packages/Orange/widgets/utils/webview.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/educational/widgets/ow1ka.py", line 27, in <module>
from AnyQt.QtWidgets import qApp
ImportError: cannot import name 'qApp' from 'AnyQt.QtWidgets' (/home/frankc/orange/lib/python3.11/site-packages/AnyQt/QtWidgets.py)
2023-09-29 01:29:44,146:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/explain/widgets/owexplainfeaturebase.py'.
2023-09-29 01:29:44,150:WARNING:root: No module named 'tempeh': LawSchoolGPADataset will be unavailable. To install, run:
pip install 'aif360[LawSchoolGPA]'
2023-09-29 01:29:45,819:WARNING:root: No module named 'fairlearn': ExponentiatedGradientReduction will be unavailable. To install, run:
pip install 'aif360[Reductions]'
2023-09-29 01:29:45,820:WARNING:root: No module named 'fairlearn': GridSearchReduction will be unavailable. To install, run:
pip install 'aif360[Reductions]'
2023-09-29 01:29:45,820:WARNING:root: No module named 'fairlearn': GridSearchReduction will be unavailable. To install, run:
pip install 'aif360[Reductions]'
2023-09-29 01:29:45,841:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/fairness/widgets/utils.py'.
2023-09-29 01:29:45,842:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/geo/widgets/plotutils.py'.
2023-09-29 01:29:45,891:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/survival_analysis/widgets/data.py'.
2023-09-29 01:29:46,036:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableCategory'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableCategory.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,037:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableContext'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableContext.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,038:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableConvert'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableConvert.py", line 27, in <module>
from PyQt5.QtWidgets import QMessageBox, QApplication, QFileDialog
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,039:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableCooccurrence'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableCooccurrence.py", line 30, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,040:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableCount'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableCount.py", line 28, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,041:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableDisplay'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableDisplay.py", line 28, in <module>
from PyQt5.QtWidgets import QTextBrowser, QFileDialog, QMessageBox, QApplication
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,042:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableExtractXML'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableExtractXML.py", line 25, in <module>
from PyQt5.QtGui import QFont
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,044:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableInterchange'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableInterchange.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,045:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableIntersect'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableIntersect.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,046:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableLength'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableLength.py", line 28, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,047:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableMerge'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableMerge.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,048:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableMessage'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableMessage.py", line 26, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,049:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextablePreprocess'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextablePreprocess.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,050:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableRecode'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableRecode.py", line 25, in <module>
from PyQt5.QtGui import QFont
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,051:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableSegment'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableSegment.py", line 25, in <module>
from PyQt5.QtGui import QFont
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,052:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableSelect'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableSelect.py", line 31, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,053:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableTextField'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableTextField.py", line 28, in <module>
from PyQt5.QtWidgets import QPlainTextEdit
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,053:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableTextFiles'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableTextFiles.py", line 30, in <module>
from PyQt5.QtCore import QTimer
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,054:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableTreetagger'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableTreetagger.py", line 32, in <module>
from PyQt5.QtWidgets import QFileDialog, QMessageBox
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,055:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableURLs'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableURLs.py", line 31, in <module>
from PyQt5.QtCore import QTimer
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,056:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableVariety'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableVariety.py", line 28, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,058:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.TextableUtils'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,328:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/timeseries/widgets/_owmodel.py'.
2023-09-29 01:29:46,328:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/timeseries/widgets/_rangeslider.py'.
2023-09-29 01:29:46,328:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/timeseries/widgets/owperiodbase.py'.
2023-09-29 01:29:46,329:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/timeseries/widgets/utils.py'.
2023-09-29 01:29:46,334:INFO:orangecanvas.main: Adding search path '/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/styles/orange' for prefix, 'canvas_icons'
```
The log repeatedly shows that `qApp` cannot be imported.
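If it helps with triage: the `qApp` global was removed in Qt 6-based bindings, so on an AnyQt build backed by PyQt6 this import fails. A minimal sketch of the kind of local patch that works around it in the add-on source (untested; assuming the add-on only uses `qApp` to reach the running application instance):
```python
# Before (fails under Qt 6 bindings, where the qApp global was removed):
#     from AnyQt.QtWidgets import QTableView, qApp, QGridLayout, QLabel
# After:
from AnyQt.QtWidgets import QTableView, QApplication, QGridLayout, QLabel

qApp = QApplication.instance()  # recover the running application object
```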
**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system: Manjaro
- Orange version: 3.36.1
- How you installed Orange: through pip in a virtual environment.
| 1medium
|
Title: User might think that HiveContext creation is taking longer than it is
Body: The last cell's output says "Creating HiveContext as 'sqlContext'" but does not tell the user when the context has been created and their code is now running.
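For example, a sketch of the extra confirmation the kernel could print (illustrative only; the message wording and surrounding code are assumptions, not the actual kernel source):
```python
from pyspark.sql import HiveContext  # legacy API, matching Spark 1.x kernels

print("Creating HiveContext as 'sqlContext'")
sqlContext = HiveContext(sc)  # 'sc' is the SparkContext the kernel provides
print("HiveContext created as 'sqlContext'; now running your code")
```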
| 0easy
|