Title: Update the kubernetes object's status with server-side apply
Body: /kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
I'd like to replace the client-side apply with the server-side apply when the controller updates any statuses, like https://github.com/kubeflow/katib/blob/fc858d15dd41ff69166a2020efa200199063f9ba/pkg/controller.v1beta1/experiment/experiment_controller_status.go#L27.
Server-side apply allows us to avoid update errors due to conflicts, which would reduce confusion for users.
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
---
<!-- Don't delete this message to encourage users to support your issue! -->
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| 0easy
|
Title: Validator is not used when passed to `create_model` and having the same name as a field
Body: ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Hi!
I noticed that the validator is not used or called if it has the same name as the field (or under some other conditions, I'm not sure).
I think a warning or an error would be really useful, or at least something specified in the docs about this.
### Example Code
```Python
from pydantic import create_model, field_validator
def bar(cls, v):
    print("bar_called", v)
    return v
validator = field_validator("bar", mode="before")(bar)
# OPTION1: validator is not called
validators = {"bar": validator}
# OPTION2: validator is called
# validators = {"bar_validator": validator}
Foo = create_model("Foo", bar=(str, ...), __validators__=validators)
foo = Foo(bar="bar")
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: D:\project\.venv\Lib\site-packages\pydantic
python version: 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]
platform: Windows-10-10.0.19045-SP0
related packages: fastapi-0.112.1 mypy-1.11.1 pydantic-settings-2.4.0 typing_extensions-4.12.2
commit: unknown
```
| 0easy
|
Title: accessing user_prompt passed to run
Body: Is there a specific reason why when invoking `run`, the user's message ( `user_prompt`) is not available in the `RunContext` for prompts despite being added to the messages array during execution?
## Current Behavior
The `user_prompt` is added to messages in the code:
```python
run_context = RunContext(deps, 0, [], None, model_used, usage or result.Usage())
messages = await self._prepare_messages(user_prompt, message_history, run_context)
run_context.messages = messages
```
However, when trying to access messages in a prompt, it's empty as it's set after the system prompts are executed (i.e., after `_prepare_messages`):
```python
@test_agent.system_prompt
def print_ctx(ctx: RunContext[str]) -> str:
    print(f"Messages is: {ctx.messages}")
    return ""

result = await test_agent.run("Hey, World!")
# Output:
# Messages is: []
```
## Current Workaround
The only current solution I can think of to access the `user_prompt` requires creating a custom context class and passing the message redundantly:
```python
from dataclasses import dataclass
from typing import Optional

from pydantic_ai import Agent, RunContext


@dataclass
class PromptContext:
    user_prompt: str
    other_stuff: Optional[str] = None


test_agent = Agent(
    'openai:gpt-4',
    deps_type=PromptContext
)


@test_agent.system_prompt
def dynamic_system_prompt(ctx: RunContext[PromptContext]) -> str:
    print(f"User asked: {ctx.deps.user_prompt}")
    return f"The user asked: {ctx.deps.user_prompt}"


# When running:
await test_agent.run(
    "Hi there",
    deps=PromptContext(user_prompt="hi there")
)
```
I think it would be useful to have access to the message within the `RunContext` passed to prompts, eliminating the need to redundantly pass the message through custom context objects. Any suggestions for a more elegant solution would be appreciated.
| 0easy
|
Title: Allow using tb['some_object'] or tb.get('some_object') instead of tb.ref('some_object')
Body: It's more Pythonic, easier to remember, and makes it easy to migrate in-notebook tests to a script.
Here's my use case:
When I'm working on a notebook, I like to write tests inline first, and then move it to a `test.py` file:
```python
# notebook.ipynb
def add(x, y):
    return x + y

def test_add():
    assert add(2, 3) == 5
```
This allows me to test the function instantly without having to leave Jupyter, open a terminal, create a new script, etc., etc. This is especially useful when I'm using Binder, Colab, or Kaggle notebooks, and opening up a terminal or a new script isn't that straightforward.
Later, I move the test to a `test.py` file:
```python
# test.py
def test_add(tb):
    add = tb.ref('add')
    assert add(2, 3) == 5
```
This requires an additional step of adding the `tb` argument and accessing `add` using `tb.ref('add')`. This change has to be made for every single test I write and every time I update a test in my notebook.
It'd be nice if I could use `tb` like a dictionary. Then, I can write the tests within my notebook in this fashion:
```python
# notebook.ipynb
def add(x, y):
    return x + y

def test_add(tb):
    add = tb['add']  # or tb.get('add') for safety
    assert add(2, 3) == 5

# A simple in-notebook test runner
def run_test(test):
    return test(globals())
```
Within the notebook I can run the tests by simply passing `globals()` as the value for `tb` and then I can easily migrate the tests to a script without making any code changes.
```python
# test.py
def test_add(tb):
    add = tb['add']  # or tb.get('add') for safety
    assert add(2, 3) == 5

# Look ma, no changes!
```
| 0easy
|
Title: Notebook describing the ask and tell API
Body: Now that #234 has been merged, it would be nice to complement the code with a notebook illustrating how and when to use this API. | 0easy
|
Title: Squeeze momentum indicator (with Lazybear) wrong results.
Body: I follow Binance data and LazyBear's squeeze indicator on TradingView, but the SQZ_20_2.0_20_1.5_LB values I get do not match TradingView, and the SQZ_OFF and SQZ_ON values are wrong too.
My code :
```python
import time

import pandas as pd
import pandas_ta as ta  # registers the DataFrame .ta accessor

ohlcvsqueeze = account_binance.fetch_ohlcv(symbol, timeframe)
if len(ohlcvsqueeze):
    dfsqz = pd.DataFrame(ohlcvsqueeze, columns=['time', 'open', 'high', 'low', 'close', 'volume'])
    dfsqz['time'] = pd.to_datetime(dfsqz['time'], unit='ms')
    squeeze = dfsqz.ta.squeeze(bb_length=20, bb_std=2.0, kc_length=20, kc_scalar=1.5, lazybear=True, use_tr=True)
    dfsqz = pd.concat([dfsqz, squeeze], axis=1)
    time.sleep(1)
```
| 0easy
|
Title: Add sklearn's HistGradientBoosting
Body: | 0easy
|
Title: Using GaussianProcessRegressor fails for default values
Body: ```python
from skopt import Optimizer
from skopt.benchmarks import branin
from skopt.learning import GaussianProcessRegressor

dimensions = [(-5.0, 10.0), (0.0, 15.0)]
gpr = GaussianProcessRegressor()
optimizer = Optimizer(
    dimensions=dimensions,
    base_estimator=gpr,
    n_random_starts=1,
    acq_func="EI",
    random_state=0)

x_cand = optimizer.ask()
optimizer.tell(x_cand, branin(x_cand))
# AttributeError: 'Product' object has no attribute 'gradient_x'
``` | 0easy
|
Title: Add FAQ with answers to the most common questions to the documentation
Body: It would be helpful to include a more immediate link to where to get help and a FAQ on the readme and in the documentation.
- **Q.** I've already got results from my Monte Carlo simulation - which technique can I use?
**A.** DMIM
- **Q.** I get this error when installing SALib 1.3 with Python 2.7
**A.** SALib 1.3 onwards does not support Python 2.7
- **Q.** Can you help me with implementing my sensitivity analysis?
**A.** Check out the examples [here](https://github.com/SALib/SALib/tree/master/examples) | 0easy
|
Title: Don't include credentials in requirements lock file
Body: Easy way to accidentally commit credentials.
I think adding `--no-emit-find-links` to the pip-tools compile command in tasks might work.
|
Title: `CallbackData` usage example
Body: ### aiogram version
3.x
### Problem
Let's assume I'm a newbie in aiogram. I see its awesome feature called "`CallbackQuery.data` parsing" and I don't know how to use it properly. I try to read the docs and still encounter some problems, so I need some code to compare with mine.
### Possible solution
Port and adapt `callback_data_factory` example from 2.x to 3.x. Optionally improve it with new features.
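For reference, a minimal sketch of what the ported 3.x example could look like (the `CallbackData`/`F` usage follows my understanding of the 3.x API; the handler, keyboard, and payload names are made up):
```python
from aiogram import F, Router
from aiogram.filters.callback_data import CallbackData
from aiogram.types import CallbackQuery, InlineKeyboardButton, InlineKeyboardMarkup

router = Router()


class NumberCallback(CallbackData, prefix="num"):
    value: int


def build_keyboard() -> InlineKeyboardMarkup:
    # Each button packs its payload into the callback_data string.
    buttons = [
        [InlineKeyboardButton(text=str(i), callback_data=NumberCallback(value=i).pack())]
        for i in range(1, 4)
    ]
    return InlineKeyboardMarkup(inline_keyboard=buttons)


@router.callback_query(NumberCallback.filter(F.value == 2))
async def picked_two(query: CallbackQuery, callback_data: NumberCallback) -> None:
    # The parsed payload arrives as a typed CallbackData instance.
    await query.answer(f"You picked {callback_data.value}")
```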
### Alternatives
_No response_
### Code example
_No response_
### Additional information
This issue is more about adding an example to `examples/` directory of repo rather than the example request itself. I'm already using `CallbackData` class at its 100% potential in my projects, so I, personally, don't need it. | 0easy
|
Title: On mobile, the three colored buttons below the "Must-read instructions" at the bottom of the admin panel do not scale responsively, causing the whole page to lose focus
Body: 1. For bug reports, please describe the minimal steps to reproduce.
2. For general questions: 99% of the answers are in the help docs, please read https://kmfaka.baklib-free.com/ carefully.
3. For new feature/concept submissions: please describe them in text or annotate a screenshot. | 0easy
|
Title: Fix comments about return value in my_compare_type
Body: **Migrated issue, originally created by Mauricio Farías ([@negas](https://github.com/negas))**
The comments are opposite to the code.
my_compare_type(context, inspected_column, metadata_column, inspected_type, metadata_type) should return False if the types are different, True if not.
It's an issue in the comments, not the code. I fixed it but I cannot push it.
| 0easy
|
Title: Can you use Pydantic Field Aliasing with Pandera / PydanticModel schema definitions?
Body: #### How to use Pydantic Field Alias with pandera
I am processing a CSV and I am trying to use Pandera to validate the data. The names in the CSV header row are not what I want the names in my model to be. I haven't figured out how to achieve field aliasing. Any suggestions?
Here is a snippet that reproduces the error I am getting.
```python
import io

import pydantic
import pandas as pd
import pandera as pa
from pandera.engines.pandas_engine import PydanticModel


class AliasedRecord(pydantic.BaseModel):
    name: str = pydantic.Field(alias="Name")
    amt_in_local: float = pydantic.Field(alias="Amount in local currency")


class AliasDFSchema(pa.DataFrameModel):
    """Pandera schema using the pydantic model."""

    class Config:
        """Config with dataframe-level data type."""

        dtype = PydanticModel(AliasedRecord)
        strict = True
        coerce = True  # this is required, otherwise a SchemaInitError is raised


# Direct Pydantic Model Validation
ar_m = AliasedRecord.model_validate({"Name": "Foo", "Amount in local currency": 1.32})
print(f"My Model is: {ar_m}")

# Now try validating a DataFrame
# Generate data similar to the source CSV
f = io.StringIO('Name,Amount in local currency\nfoo,1.32\nbar,3.34')
df1 = pd.read_csv(f)
validated_df = AliasDFSchema(df1)
```
#### Output
The successful Model:
```text
My Model is: name='Foo' amt_in_local=1.32
```
The DataFrame / Pandera error ...
```
... bunch of stuff removed for brevity
SchemaError: column 'Name' not in DataFrameSchema {}
```
df1 is correctly created
<img width="297" alt="Screenshot 2023-10-18 at 18 33 30" src="https://github.com/unionai-oss/pandera/assets/6108038/c5d9fbb3-fff7-41af-874c-c275ffd13627">
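One untested direction for a workaround (not a confirmed pandera feature) is to rename the raw CSV headers to the model's field names using Pydantic v2's alias map, possibly together with `populate_by_name=True` on the model:
```python
# Build {"Name": "name", "Amount in local currency": "amt_in_local"} from the model.
alias_to_field = {
    field.alias or name: name
    for name, field in AliasedRecord.model_fields.items()
}
renamed_df = df1.rename(columns=alias_to_field)
validated_df = AliasDFSchema(renamed_df)
```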
| 0easy
|
Title: [DOCS] Spark
Body: Refresh for modern practices -- code style, arrow | 0easy
|
Title: Support disabling mo.ui.chat input
Body: ### Description
I was hoping to make my chatbot disabled until the user entered an api key in another input but didn't see any parameter to disable the input.
### Suggested solution
Support a `disabled` parameter on the chat input.
When set to true, it can create a non-interactive overlay over the input.
HTML now supports the `inert` attribute to disable interactions inside an element. Inert elements can be styled to achieve the disabled styling.
See: https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/inert
### Alternative
Currently using mo.stop to not show the cell at all.
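A rough sketch of that workaround (the toy echo model and the exact widget parameters are assumptions):
```python
import marimo as mo

api_key = mo.ui.text(label="API key")

# In the cell that renders the chat: halt until a key has been entered,
# which keeps the chat hidden rather than disabled.
mo.stop(not api_key.value, mo.md("Enter an API key to enable the chat."))


def echo_model(messages, config):
    # Placeholder model -- a real chatbot would call the provider with api_key.value.
    return messages[-1].content


mo.ui.chat(echo_model)
```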
### Additional context
Here's the notebook where I want to use this:
https://aszenz.github.io/kashmiri-chatbot/
By not showing the input at all, I have a feeling people won't understand what the chatbot looks like. | 0easy
|
Title: Custom Replicate Model Block
Body: A block that allows you to use **any** [replicate](https://replicate.com/) model by supplying the model name and whatever inputs it needs. | 0easy
|
Title: add test to determine if WoERatioCategoricalEncoder returns an error when the probability in the denominator is 0
Body: At the moment the test is commented out:
https://github.com/solegalli/feature_engine/blob/master/tests/test_encoding/test_woe_encoder.py#L112
the aim is to test this bit of code [here](https://github.com/solegalli/feature_engine/blob/master/feature_engine/encoding/woe.py#L205)
I think the test should work; I commented it out because I changed the error to a warning. But now we have decided to go back to raising an error.
So in short, the aim is to corroborate that the commented-out test checks the intended bit of code and, if yes, uncomment it; otherwise, replace it with a suitable test.
| 0easy
|
Title: Labor Investment metric API
Body: The canonical definition is here: https://chaoss.community/?p=3559 | 0easy
|
Title: SSL channel Indicator (Semaphore Signal Level channel)
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
latest
**Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
I want to use SSL channel Indicator (Semaphore Signal Level channel)
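For reference, a rough pandas sketch of the usual SSL channel definition (SMA of highs/lows selected by a close-price crossover flag); the column names, output labels, and default length are assumptions, not pandas-ta API:
```python
import numpy as np
import pandas as pd


def ssl_channel(df: pd.DataFrame, length: int = 10) -> pd.DataFrame:
    """Rough SSL channel: SMAs of highs and lows, selected by a crossover flag."""
    sma_high = df["high"].rolling(length).mean()
    sma_low = df["low"].rolling(length).mean()
    # +1 while close is above the SMA of highs, -1 while below the SMA of lows,
    # carrying the last signal forward in between.
    hlv = pd.Series(
        np.where(df["close"] > sma_high, 1, np.where(df["close"] < sma_low, -1, np.nan)),
        index=df.index,
    ).ffill()
    ssl_down = np.where(hlv < 0, sma_high, sma_low)
    ssl_up = np.where(hlv < 0, sma_low, sma_high)
    return pd.DataFrame({"SSL_DOWN": ssl_down, "SSL_UP": ssl_up}, index=df.index)
```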
**Describe alternatives you've considered**
I don't have any idea whether there are alternatives to the SSL channel indicator.
**Additional context**
The indicator is available on Trading View, but not on exchanges like binance
Thanks for using Pandas TA!
| 0easy
|
Title: Accessing matches in regular expression globbing
Body: In regular expression globbing (or even in regular globbing), it would be nice if we could access the Match object (https://docs.python.org/3/library/re.html#match-objects) corresponding to each resulting match.
For example, replacing this:
```xsh
import re

for filepath in `data/(.*)/(.*)/.*\.png`:
    match = re.match(r'data/(.*)/(.*)/.*\.png', filepath)
    dir1, dir2 = match.groups()
```
with something like:
```xsh
for match in m`data/(.*)/(.*)/.*\.png`:
    dir1, dir2 = match.groups()
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Importing ridgeplot fails with scipy 1.14.1 with "symbol not found in flat namespace '_npy_cabs'"
Body: This is more of an FYI for visibility than an issue that needs action, but importing ridgeplot fails when using scipy 1.14.1 ("symbol not found in flat namespace '_npy_cabs'")
Issue has been raised with scipy maintainers: https://github.com/scipy/scipy/issues/21434
I'm on Mac OS 14.6.1, Python 3.10.14. Downgrading scipy solved the issue:
```
pip install --upgrade scipy==1.14.0
```
| 0easy
|
Title: Request Full Guide
Body: Hi,
Any Full guide for this?
Thanks in advanced | 0easy
|
Title: Error: 'Failed to Complete Task in Maximum Steps' Not Visible in Agent History List
Body: ### Bug Description
"When my agent fails to complete a task within the defined maximum number of steps, it stops with an error message saying, 'Failed to complete task in maximum steps.' However, when I check the Agent History List, I see that the list is empty. It seems that after throwing this error, we are not saving the issue in the agent history.
### Reproduction Steps
Add an error entry in self.history whenever we hit the else block.
### Code Sample
```python
async def run(self, max_steps: int = 100) -> AgentHistoryList:
    """Execute the task with maximum number of steps"""
    try:
        self._log_agent_run()

        # Execute initial actions if provided
        if self.initial_actions:
            result = await self.controller.multi_act(self.initial_actions, self.browser_context, check_for_new_elements=False)
            self._last_result = result

        for step in range(max_steps):
            if self._too_many_failures():
                break

            # Check control flags before each step
            if not await self._handle_control_flags():
                break

            await self.step()

            if self.history.is_done():
                if self.validate_output and step < max_steps - 1:
                    if not await self._validate_output():
                        continue

                logger.info('✅ Task completed successfully')
                if self.register_done_callback:
                    self.register_done_callback(self.history)
                break
        else:
            logger.info('❌ Failed to complete task in maximum steps')

        return self.history

    finally:
        self.telemetry.capture(
            AgentEndTelemetryEvent(
                agent_id=self.agent_id,
                success=self.history.is_done(),
                steps=self.n_steps,
                max_steps_reached=self.n_steps >= max_steps,
                errors=self.history.errors(),
            )
        )

        if not self.injected_browser_context:
            await self.browser_context.close()

        if not self.injected_browser and self.browser:
            await self.browser.close()

        if self.generate_gif:
            output_path: str = 'agent_history.gif'
            if isinstance(self.generate_gif, str):
                output_path = self.generate_gif

            self.create_history_gif(output_path=output_path)
```
### Version
main
### LLM Model
GPT-4
### Operating System
macOs 15.2
### Relevant Log Output
```shell
``` | 0easy
|
Title: Docstring for AllenNlpTestCase is out-of-date
Body: https://github.com/allenai/allennlp/blob/ae7cf85b8f755f086341a087aeca1ba7c1df3769/allennlp/common/testing/test_case.py#L14
This docstring suggests that the TestCase class should subclass `unittest.TestCase` but it does not do so. Should this be updated or is this intentional? | 0easy
|
Title: Implement env vars properly
Body: Stop declaring the env vars in the scanapi.yaml file and start actually reading these variables from the environment. | 0easy
|
Title: Don't set `CurlOpt.CAINFO` when `verify=False`
Body: | 0easy
|
Title: Test coverage metric API
Body: The canonical definition is here: https://chaoss.community/?p=3957 | 0easy
|
Title: Deprecate `SHORTEST` mode being default with `FOR IN ZIP` loops
Body: RF 6.1 made it possible to configure what to do if lengths of lists iterated using `FOR IN ZIP` are different (#4682). The old default behavior to silently ignore items in longer lists (i.e. `SHORTEST` mode) was preserved, but we plan to require lengths to match (i.e. `STRICT` mode) by default in the future. Before changing the default, we should deprecate the old behavior. What needs to be done is to check that list lengths are same also if mode isn't set. If lengths don't match, there should be a deprecation warning instead of a failure, though. | 0easy
|
Title: [Docs] Improve DPSK docs in dark mode
Body: ### Checklist
- [x] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 2. Please use English, otherwise it will be closed.
### Motivation
<img width="1393" alt="Image" src="https://github.com/user-attachments/assets/39d60ef8-c7fa-42e0-9961-5bd9c082209f" />
I used HTML to write these docs and it looks bad in dark mode. Could someone fix it here?
https://github.com/sgl-project/sglang/blob/main/docs/references/deepseek.md
### Related resources
_No response_ | 0easy
|
Title: Add `PYTHONWARNDEFAULTENCODING` to pipeline
Body: Given this comment https://github.com/pydantic/bump-pydantic/pull/85#issuecomment-1632808494, we want to add `PYTHONWARNDEFAULTENCODING` on the pipeline. Check more about it on https://docs.python.org/3/library/io.html#opt-in-encodingwarning. | 0easy
|
Title: Docstring of the Datalab.find_issues `issue_types` argument should link to the guide
Body: The docstring here: https://docs.cleanlab.ai/master/cleanlab/datalab/datalab.html#cleanlab.datalab.datalab.Datalab.find_issues
for the `issue_types` argument should link to the Issue Types guide: https://docs.cleanlab.ai/stable/cleanlab/datalab/guide/index.html
(in the proper internal linking format).
So that users know what are possible values for this arg. | 0easy
|
Title: Gensim's FastText model reads in unsupported modes from Facebook's FastText
Body: In gensim/models/fasttext.py:
```python
model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervised) possibly creating inconsistent gensim model likely to fail later. Displaying
    # clear error/warning with explanatory message would be far better - even if there might be some reason
    # to continue with the load - such as providing read-only access to word-vectors trained those ways. (See:
    # https://github.com/facebookresearch/fastText/blob/2cc7f54ac034ae320a9af784b8145c50cc68965c/src/args.h#L19
    # for FB FT mode definitions.)
    hs=int(m.loss == 1),
    sg=int(m.model == 2),
    bucket=m.bucket,
    min_count=m.min_count,
    sample=m.t,
    min_n=m.minn,
    max_n=m.maxn,
)
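
# The FIXME above asks for an explicit guard; a rough sketch of what it could look
# like (supported-mode constants inferred from the linked FB args.h; assumes the
# module-level `logger` used elsewhere in fasttext.py):
_SUPPORTED_LOSSES = {1, 2}   # hs, ns
_SUPPORTED_MODELS = {1, 2}   # cbow, skipgram
if m.loss not in _SUPPORTED_LOSSES or m.model not in _SUPPORTED_MODELS:
    logger.warning(
        "FB FastText model uses an unsupported mode (loss=%s, model=%s); "
        "the loaded gensim model may be inconsistent.", m.loss, m.model,
    )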
``` | 0easy
|
Title: Ability to show/toggle all student names in formgrader
Body: Follow up on:
https://github.com/jupyter/nbgrader/issues/575
... which led to:
https://github.com/jupyter/nbgrader/pull/617
I would highly appreciate the possibility to **toggle the visibility of all student names--with one click**. We do grading anonymously and I think it's a good default.
However, when students want to discuss individual issues **after they've received their grades/feedback**, I currently have to toggle up to 200 submission names to find the formgrader version of their submission.
Thanks for any suggestions!
| 0easy
|
Title: ValueError: cannot specify integer `bins` when input data contains infinity [BUG]
Body: Trying to open a dataframe from a .csv that contains numpy.inf values fails with this error:
```
import lux, pandas
df = pandas.read_csv('filepath')
```
```
C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\IPython\core\formatters.py:918: UserWarning:
Unexpected error in rendering Lux widget and recommendations. Falling back to Pandas display.
Please report the following issue on Github: https://github.com/lux-org/lux/issues
C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\lux\core\frame.py:632: UserWarning:Traceback (most recent call last):
File "C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\lux\core\frame.py", line 594, in _ipython_display_
self.maintain_recs()
File "C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\lux\core\frame.py", line 436, in maintain_recs
custom_action_collection = custom_actions(rec_df)
File "C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\lux\action\custom.py", line 76, in custom_actions
recommendation = lux.config.actions[action_name].action(ldf)
File "C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\lux\action\correlation.py", line 50, in correlation
vlist = VisList(intent, ldf)
File "C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\lux\vis\VisList.py", line 43, in __init__
self.refresh_source(self._source)
File "C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\lux\vis\VisList.py", line 336, in refresh_source
lux.config.executor.execute(self._collection, ldf, approx=approx)
File "C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\lux\executor\PandasExecutor.py", line 146, in execute
PandasExecutor.execute_2D_binning(vis)
File "C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\lux\executor\PandasExecutor.py", line 387, in execute_2D_binning
vis._vis_data["xBin"] = pd.cut(vis._vis_data[x_attr], bins=lux.config.heatmap_bin_size)
File "C:\Users\???\anaconda\envs\base_up_to_date\lib\site-packages\pandas\core\reshape\tile.py", line 244, in cut
"cannot specify integer `bins` when input data contains infinity"
ValueError: cannot specify integer `bins` when input data contains infinity
``` | 0easy
|
Title: Switch to strict static type checking
Body: #306 adds static type checking; it would be nice to switch over to mypy strict mode (`--strict`) once we have full coverage with type annotations. | 0easy
|
Title: Duplicated function/method
Body: This function already exists on the object, why add an identical one inside the function?
https://github.com/kennethreitz/responder/blob/78b5bef879380146d0bf14b35e0fd618d9694971/responder/api.py#L645-L647
**Original definition:**
https://github.com/kennethreitz/responder/blob/78b5bef879380146d0bf14b35e0fd618d9694971/responder/api.py#L624-L627 | 0easy
|
Title: [MNT] Update `ruff` version bound for `pre-commit` checks
Body: ### Describe the issue
Recent versions of `ruff` cause failures on our CI due to a few outdated notebooks
See https://github.com/aeon-toolkit/aeon/actions/runs/10445173697/job/28920730227?pr=1986
### Suggest a potential alternative/fix
Fix the issues in these notebooks and upgrade the bound
### Additional context
_No response_ | 0easy
|
Title: Comprehensive benchmarking of AutoGPTQ; Triton vs CUDA; vs old 'ooba' GfL
Body: Over on @Laaza's text-gen-ui PR there has been discussion of inference performance.
In particular, people are concerned that recent versions of GPTQ-for-LLaMa have performed worse than the older versions. This is one reason why many people still use ooba's fork of GfL.
In order to help people understand the improvements AutoGPTQ has made, and also to help @PanQiWei and @qwopqwop200 look at potential performance differences in AutoGPTQ, I have compiled a list of benchmarks.
I have compared AutoGPTQ CUDA, AutoGPTQ Triton and the old GfL ooba fork with CUDA.
I've compared act-order/desc_act vs not, with and without streaming in text-gen-UI, and with/without the `fused_attn` and `fused_mlp` parameters.
I've not done absolutely every possible permutation of all those params. I only ran a few tests with streaming enabled in text-gen-ui as it always performs much worse. But I think I have enough here to get a good overview.
I will be posting these figures in the text-gen-ui PR thread as well.
**Other benchmarks to do**
To these results we could also add the latest GPTQ-for-LLaMa Triton and CUDA figures. I already benchmarked that yesterday and it performed the same as or worse than AutoGPTQ. But later today I will run those benchmarks again and add them to the spreadsheet in this new format.
I would also like to test with some weaker/smaller GPUs, to see how performance might vary with less GPU processing available. And I'd also like to test on some larger models, to see if there is any difference in performance delta with varying model sizes.
# Benchmarks of: ooba CUDA; AutoGPTQ CUDA; AutoGPTQ Triton
## Implementations tested
* AutoGPTQ as of https://github.com/PanQiWei/AutoGPTQ/pull/43, using text-gen-ui from [LaaZa's AutoGPTQ PR]( https://github.com/LaaZa/text-generation-webui/tree/AutoGPTQ)
* GPTQ-for-LLaMa using [ooba's GfL fork (oobabooga/GPTQ-for-LLaMa)](https://github.com/oobabooga/GPTQ-for-LLaMa), using text-gen-ui from `main`, commit: `875da16b7bc2a676d1a9d389bf22ee4579722073`.
## Test system
* Ubuntu 20.04, with Docker https://hub.docker.com/repository/docker/thebloke/runpod-pytorch-new/general
* CUDA toolkit 11.6
* NVidia 4090 24GB
* [WizardLM 7B 128g](https://huggingface.co/TheBloke/wizardLM-7B-GPTQ)
## Test method
* All benchmarking done in text-gen-ui, using the output time and token/s reported by it.
* text-gen-ui restarted between each test.
* Output limit = 512 tokens.
* 'Default' paramater set used + 'ban eos token' set so as to always get 512 tokens returned
* Run one test and discard results as warm up, then record 4 results.
## Results spreadsheet and overview charts
### Spreadsheet
[Google sheets with results and charts](https://docs.google.com/spreadsheets/d/1_VWHD-5jgVq5_nlP_0KJeKLsWDkZrZNBFdGR7YRMuME/edit?usp=sharing)
### Charts


A chart showing streaming performance is in the spreadsheet.
## Description of results
### AutoGPTQ vs 'ooba' CUDA with --no-stream
* AutoGPTQ CUDA outperforms GfL 'ooba' CUDA by 15% on a no-act-order model
* AutoGPTQ CUDA outperforms GfL 'ooba' CUDA by 10% on an act-order model (comparing AutoGPTQ on act-order model to GfL on no-act-order model)
* AutoGPTQ Triton is 5% slower than GfL 'ooba' CUDA
* AutoGPTQ Triton is 20% slower than AutoGPTQ CUDA
### AutoGPTQ vs 'ooba' CUDA with streaming (no-act-order model)
* AutoGPTQ CUDA outperforms GfL 'ooba' CUDA by 12%
* AutoGPTQ CUDA outperforms AutoGPTQ Triton by 13%
* AutoGPTQ Triton outperforms GfL 'ooba' CUDA by 15%
* Interesting that with streaming on, Triton does better than GfL CUDA.
### desc_act models vs non-desc_act models
* AutoGPTQ CUDA is 4.5% slower on a desc_act model vs not
* AutoGPTQ Triton has no performance difference between desc_act model vs not
* AutoGPTQ CUDA records significantly higher GPU usage % on desc_act models
* 80% usage with desc_act + fused_attn vs 30% with no-desc_act model
* This might be a problem on weaker cards? That needs testing.
* AutoGPTQ Triton has only a few % extra GPU usage with desc_act models.
### fused_attn and fused_mlp
* AutoGPTQ CUDA: fused_attn increases performance by 20%
* This seems to account for nearly all the performance difference between AutoGPTQ CUDA and ooba GfL CUDA
* AutoGPTQ Triton: fused_mlp on its own increases performance by 15%
* AutoGPTQ Triton: fused_attn on its own increases performance by 26%
* AutoGPTQ Triton: fused_mlp and fused_attn together increases performance over no/no by 48%
### Slow loading time with AutoGPTQ Triton
* AutoGPTQ Triton takes *significantly* longer to load a model vs CUDA
* I haven't yet recorded benchmarks for this, but from looking through my logs I see:
* CUDA: 2 -3 seconds
* Triton: 40-45 seconds (with or without fused_mlp)
* Triton + fused_attn: 55 - 90 seconds
* I'll add this to the benchmark table later.
## Results table
|Implementation|Method|Streaming|Model type |fused_attn|fused_mlp|GPU usage max %|VRAM max after 512 tok|Avg token/s|Run 1 tokens/s|Run 2 tokens/s|Run 3 tokens/s|Run 4 tokens/s|
|--------------|------|---------|------------|----------|---------|---------------|----------------------|-----------|--------------|--------------|--------------|--------------|
|ooba GfL |CUDA |No |no-act-order|N/A |N/A |25% |5837 |23.84 |23.86 |23.91 |23.70 |23.90 |
|AutoGPTQ |CUDA |No |no-act-order|No |N/A |24% |6711 |22.66 |22.63 |22.63 |22.78 |22.61 |
|AutoGPTQ |CUDA |No |no-act-order|Yes |N/A |28% |6849 |27.22 |27.23 |27.33 |27.33 |27.00 |
|AutoGPTQ |Triton|No |no-act-order|No |No |27% |6055 |15.25 |15.25 |15.25 |15.29 |15.20 |
|AutoGPTQ |Triton|No |no-act-order|No |Yes |30% |6691 |17.48 |17.52 |17.51 |17.43 |17.47 |
|AutoGPTQ |Triton|No |no-act-order|Yes |No |30% |6013 |19.33 |19.37 |19.42 |19.29 |19.24 |
|AutoGPTQ |Triton|No |no-act-order|Yes |Yes |34% |6649 |22.58 |22.19 |22.67 |22.70 |22.75 |
|AutoGPTQ |CUDA |No |act-order |No |N/A |64% |6059 |20.35 |20.38 |20.42 |20.31 |20.30 |
|AutoGPTQ |CUDA |No |act-order |Yes |N/A |80% |6079 |26.02 |26.12 |26.15 |26.18 |25.61 |
|AutoGPTQ |Triton|No |act-order |No |No |30% |6057 |15.39 |15.47 |15.30 |15.35 |15.42 |
|AutoGPTQ |Triton|No |act-order |No |yes |33% |6691 |17.48 |17.54 |17.53 |17.38 |17.48 |
|AutoGPTQ |Triton|No |act-order |Yes |No |33% |6013 |19.55 |19.56 |19.59 |19.51 |19.55 |
|AutoGPTQ |Triton|No |act-order |Yes |yes |38% |6649 |22.86 |22.86 |23.01 |22.98 |22.57 |
|ooba GfL |CUDA |Yes |no-act-order|N/A |N/A |17% |5837 |14.93 |14.86 |14.77 |15.05 |15.05 |
|AutoGPTQ |CUDA |Yes |no-act-order|No |N/A |20% |6711 |16.85 |16.94 |16.84 |16.87 |16.76 |
|AutoGPTQ |CUDA |Yes |no-act-order|Yes |no |22% |6849 |19.55 |19.87 |19.40 |19.50 |19.41 |
|AutoGPTQ |Triton|Yes |no-act-order|Yes |Yes |27% |6429 |17.19 |17.08 |17.38 |17.15 |17.14 |
|AutoGPTQ |Triton|Yes |act-order |No |No |25% |6055 |12.43 |12.39 |12.46 |12.40 |12.47 |
|AutoGPTQ |Triton|Yes |act-order |Yes |Yes |33% |6649 |17.19 |17.03 |17.03 |17.26 |17.42 |
## Benchmark logs
<details>
<summary>Benchmark logs in full</summary>
### ooba GPTQ-for-LLaMA CUDA no streaming (`--no-stream`). no-act-order model. no fused_attn
#### Command
```
python server.py --model wiz-no-act --wbits 4 --groupsize 128 --model_type llama --listen --no-stream
```
#### Benchmark
GPU usage max: 25%, VRAM idle: 6037, VRAM after 512 tokens: 5837
```
Output generated in 21.46 seconds (23.86 tokens/s, 512 tokens, context 16, seed 104167586)
Output generated in 21.41 seconds (23.91 tokens/s, 512 tokens, context 16, seed 448558865)
Output generated in 21.60 seconds (23.70 tokens/s, 512 tokens, context 16, seed 816202521)
Output generated in 21.42 seconds (23.90 tokens/s, 512 tokens, context 16, seed 63649370)
```
### ooba GPTQ-for-LLaMA CUDA with streaming. no-act-order model. no fused_attn
#### Command
```
python server.py --model wiz-no-act --wbits 4 --groupsize 128 --model_type llama --listen
```
#### Benchmark
GPU usage max: 17%, VRAM idle: 5247, VRAM after 512 tokens: 5837
```
Output generated in 34.39 seconds (14.86 tokens/s, 511 tokens, context 16, seed 572742302)
Output generated in 34.60 seconds (14.77 tokens/s, 511 tokens, context 16, seed 677465334)
Output generated in 33.95 seconds (15.05 tokens/s, 511 tokens, context 16, seed 1685629937)
Output generated in 33.95 seconds (15.05 tokens/s, 511 tokens, context 16, seed 1445023832)
```
### AutoGPTQ CUDA no streaming (`--no-stream`). no-act-order model. fused_attn enabled
#### Command
```
python server.py --model wiz-no-act --autogptq --listen --quant_attn --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max: 28%, VRAM idle: 6849, VRAM after 512 tokens: 6849
```
Output generated in 18.81 seconds (27.23 tokens/s, 512 tokens, context 16, seed 1130150188)
Output generated in 18.74 seconds (27.33 tokens/s, 512 tokens, context 16, seed 939013757)
Output generated in 18.73 seconds (27.33 tokens/s, 512 tokens, context 16, seed 1724107769)
Output generated in 18.97 seconds (27.00 tokens/s, 512 tokens, context 16, seed 54252597)
```
### AutoGPTQ CUDA with streaming. no-act-order model. fused_attn enabled
#### Command
```
python server.py --model wiz-no-act --autogptq --listen --quant_attn --wbits 4 --groupsize 128 --model_type llama
```
#### Benchmark
GPU usage max: 22%, VRAM idle: 5437, VRAM after 512 tokens: 6849
```
Output generated in 25.71 seconds (19.87 tokens/s, 511 tokens, context 16, seed 1472734050)
Output generated in 26.33 seconds (19.40 tokens/s, 511 tokens, context 16, seed 1285036592)
Output generated in 26.20 seconds (19.50 tokens/s, 511 tokens, context 16, seed 938935319)
Output generated in 26.32 seconds (19.41 tokens/s, 511 tokens, context 16, seed 2142008394)
```
### AutoGPTQ CUDA no streaming (`--no-stream`). no-act-order model. no fused_attn
#### Command
```
python server.py --model wiz-no-act --autogptq --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max: 24%, VRAM idle: 6711, VRAM after 512 tokens: 6711
```
Output generated in 22.63 seconds (22.63 tokens/s, 512 tokens, context 16, seed 1551481428)
Output generated in 22.63 seconds (22.63 tokens/s, 512 tokens, context 16, seed 1993869704)
Output generated in 22.48 seconds (22.78 tokens/s, 512 tokens, context 16, seed 596462747)
Output generated in 22.64 seconds (22.61 tokens/s, 512 tokens, context 16, seed 619504695)
```
### AutoGPTQ CUDA with streaming. no-act-order model. no fused_attn
#### Command
```
python server.py --model wiz-no-act --autogptq --listen --wbits 4 --groupsize 128 --model_type llama
```
#### Benchmark
GPU usage max: 20%, VRAM idle: 5277, VRAM after 512 tokens: 6711
```
Output generated in 30.16 seconds (16.94 tokens/s, 511 tokens, context 16, seed 709588940)
Output generated in 30.34 seconds (16.84 tokens/s, 511 tokens, context 16, seed 574596607)
Output generated in 30.30 seconds (16.87 tokens/s, 511 tokens, context 16, seed 16071815)
Output generated in 30.48 seconds (16.76 tokens/s, 511 tokens, context 16, seed 1202346043)
```
### AutoGPTQ CUDA no streaming (`--no-stream`). act-order / desc_act model. fused_attn=yes
#### Command
```
python server.py --model wiz-act --autogptq --listen --quant_attn --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max: 80%, VRAM idle: 6077, VRAM after 512 tokens: 6079
```
Output generated in 19.60 seconds (26.12 tokens/s, 512 tokens, context 16, seed 1857860293)
Output generated in 19.58 seconds (26.15 tokens/s, 512 tokens, context 16, seed 616647949)
Output generated in 19.56 seconds (26.18 tokens/s, 512 tokens, context 16, seed 1384039801)
Output generated in 19.99 seconds (25.61 tokens/s, 512 tokens, context 16, seed 411623614)
```
### AutoGPTQ CUDA no streaming (`--no-stream`). act-order / desc_act model. fused_attn=no
#### Command
```
python server.py --model wiz-act --autogptq --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max: 64%, VRAM idle: 6059, VRAM after 512 tokens: 6059
```
Output generated in 25.12 seconds (20.38 tokens/s, 512 tokens, context 16, seed 1777836493)
Output generated in 25.07 seconds (20.42 tokens/s, 512 tokens, context 16, seed 349075793)
Output generated in 25.21 seconds (20.31 tokens/s, 512 tokens, context 16, seed 188931785)
Output generated in 25.22 seconds (20.30 tokens/s, 512 tokens, context 16, seed 485419750)
```
### AutoGPTQ Triton no streaming (`--no-stream`). no-act-order model. fused_attn=yes. fused_mlp=yes
#### Command
```
python server.py --model wiz-no-act --autogptq --autogptq-triton --fused_mlp --quant_attn --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max: 34%, VRAM idle: 6649, VRAM after 512 tokens: 6649
```
Output generated in 23.07 seconds (22.19 tokens/s, 512 tokens, context 16, seed 1396024982)
Output generated in 22.59 seconds (22.67 tokens/s, 512 tokens, context 16, seed 1322798716)
Output generated in 22.56 seconds (22.70 tokens/s, 512 tokens, context 16, seed 935785726)
Output generated in 22.50 seconds (22.75 tokens/s, 512 tokens, context 16, seed 2135223819)
```
### AutoGPTQ Triton with streaming. no-act-order model. fused_attn=yes. fused_mlp=yes
#### Command
```
python server.py --model wiz-no-act --autogptq --autogptq-triton --fused_mlp --quant_attn --listen --wbits 4 --groupsize 128 --model_type llama
```
#### Benchmark
GPU usage max: 27%, VRAM idle: 6299, VRAM after 512 tokens: 6429
```
Output generated in 29.92 seconds (17.08 tokens/s, 511 tokens, context 16, seed 1687853126)
Output generated in 29.40 seconds (17.38 tokens/s, 511 tokens, context 16, seed 1796675019)
Output generated in 29.79 seconds (17.15 tokens/s, 511 tokens, context 16, seed 1342449921)
Output generated in 29.81 seconds (17.14 tokens/s, 511 tokens, context 16, seed 1283884954)
```
### AutoGPTQ Triton no streaming (`--no-stream`). no-act-order model. fused_attn=no. fused_mlp=no
#### Command
```
python server.py --model wiz-no-act --autogptq --autogptq-triton --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max: 27%, VRAM idle: 6055, VRAM after 512 tokens: 6055
```
Output generated in 33.57 seconds (15.25 tokens/s, 512 tokens, context 16, seed 1071469137)
Output generated in 33.58 seconds (15.25 tokens/s, 512 tokens, context 16, seed 1554707022)
Output generated in 33.48 seconds (15.29 tokens/s, 512 tokens, context 16, seed 588803760)
Output generated in 33.69 seconds (15.20 tokens/s, 512 tokens, context 16, seed 719688473)
```
### AutoGPTQ Triton no streaming (`--no-stream`). no-act-order model. fused_attn=no. fused_mlp=yes
#### Command
```
python server.py --model wiz-no-act --autogptq --autogptq-triton --fused_mlp --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max: 30%, VRAM idle: 6691, VRAM after 512 tokens: 6691
```
Output generated in 29.23 seconds (17.52 tokens/s, 512 tokens, context 16, seed 1413673599)
Output generated in 29.24 seconds (17.51 tokens/s, 512 tokens, context 16, seed 2120666307)
Output generated in 29.38 seconds (17.43 tokens/s, 512 tokens, context 16, seed 2057265550)
Output generated in 29.32 seconds (17.47 tokens/s, 512 tokens, context 16, seed 1082953773)
```
### AutoGPTQ Triton no streaming (`--no-stream`). no-act-order model. fused_attn=yes. fused_mlp=no
#### Command
```
python server.py --model wiz-no-act --autogptq --autogptq-triton --quant_attn --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max: 30%, VRAM idle: 6013, VRAM after 512 tokens: 6013
```
Output generated in 26.43 seconds (19.37 tokens/s, 512 tokens, context 16, seed 1512231234)
Output generated in 26.36 seconds (19.42 tokens/s, 512 tokens, context 16, seed 2018026458)
Output generated in 26.54 seconds (19.29 tokens/s, 512 tokens, context 16, seed 1882161798)
Output generated in 26.61 seconds (19.24 tokens/s, 512 tokens, context 16, seed 1512440780)
```
### AutoGPTQ Triton no streaming (`--no-stream`). act-order/desc_act model. fused_attn=yes. fused_mlp=yes
#### Command
```
python server.py --model wiz-act --autogptq --autogptq-triton --fused_mlp --quant_attn --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max: 38%, VRAM idle: 6649, VRAM after 512 tokens: 6649
```
Output generated in 22.40 seconds (22.86 tokens/s, 512 tokens, context 16, seed 1359206825)
Output generated in 22.25 seconds (23.01 tokens/s, 512 tokens, context 16, seed 609149608)
Output generated in 22.28 seconds (22.98 tokens/s, 512 tokens, context 16, seed 226374340)
Output generated in 22.68 seconds (22.57 tokens/s, 512 tokens, context 16, seed 1070157383)
```
### AutoGPTQ Triton with streaming. act-order/desc_act model. fused_attn=yes. fused_mlp=yes
#### Command
```
python server.py --model wiz-act --autogptq --autogptq-triton --fused_mlp --quant_attn --listen --wbits 4 --groupsize 128 --model_type llama
```
#### Benchmark
GPU usage max: 33%, VRAM idle: 6299, VRAM after 512 tokens: 6649
```
Output generated in 30.00 seconds (17.03 tokens/s, 511 tokens, context 16, seed 456349974)
Output generated in 30.00 seconds (17.03 tokens/s, 511 tokens, context 16, seed 767092960)
Output generated in 29.61 seconds (17.26 tokens/s, 511 tokens, context 16, seed 381684718)
Output generated in 29.33 seconds (17.42 tokens/s, 511 tokens, context 16, seed 283294303)
```
### AutoGPTQ Triton no streaming (`--no-stream`). act-order/desc_act model. fused_attn=no. fused_mlp=yes
#### Command
```
python server.py --model wiz-act --autogptq --autogptq-triton --fused_mlp --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max:33%, VRAM idle: 6691, VRAM after 512 tokens: 6691
```
Output generated in 29.19 seconds (17.54 tokens/s, 512 tokens, context 16, seed 1575265983)
Output generated in 29.21 seconds (17.53 tokens/s, 512 tokens, context 16, seed 1616043283)
Output generated in 29.47 seconds (17.38 tokens/s, 512 tokens, context 16, seed 1647334679)
Output generated in 29.29 seconds (17.48 tokens/s, 512 tokens, context 16, seed 256676128)
```
### AutoGPTQ Triton no streaming (`--no-stream`). act-order/desc_act model. fused_attn=yes. fused_mlp=no
#### Command
```
python server.py --model wiz-act --autogptq --autogptq-triton --quant_attn --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max:33%, VRAM idle: 6013, VRAM after 512 tokens: 6013
```
Output generated in 26.18 seconds (19.56 tokens/s, 512 tokens, context 16, seed 289490511)
Output generated in 26.13 seconds (19.59 tokens/s, 512 tokens, context 16, seed 2123553925)
Output generated in 26.24 seconds (19.51 tokens/s, 512 tokens, context 16, seed 563248868)
Output generated in 26.19 seconds (19.55 tokens/s, 512 tokens, context 16, seed 1773520422)
```
### AutoGPTQ Triton no streaming (`--no-stream`). act-order/desc_act model. fused_attn=no. fused_mlp=no
#### Command
```
python server.py --model wiz-act --autogptq --autogptq-triton --listen --wbits 4 --groupsize 128 --model_type llama --no-stream
```
#### Benchmark
GPU usage max:30%, VRAM idle: 6057, VRAM after 512 tokens: 6057
```
Output generated in 33.09 seconds (15.47 tokens/s, 512 tokens, context 16, seed 1881763981)
Output generated in 33.47 seconds (15.30 tokens/s, 512 tokens, context 16, seed 83555537)
Output generated in 33.36 seconds (15.35 tokens/s, 512 tokens, context 16, seed 332008224)
Output generated in 33.20 seconds (15.42 tokens/s, 512 tokens, context 16, seed 657280485)
```
### AutoGPTQ Triton with streaming. act-order/desc_act model. fused_attn=no. fused_mlp=no
#### Command
```
python server.py --model wiz-act --autogptq --autogptq-triton --listen --wbits 4 --groupsize 128 --model_type llama
```
#### Benchmark
GPU usage max: 25%, VRAM idle: 5503, VRAM after 512 tokens: 6055
```
Output generated in 41.23 seconds (12.39 tokens/s, 511 tokens, context 16, seed 1164743843)
Output generated in 41.02 seconds (12.46 tokens/s, 511 tokens, context 16, seed 509370735)
Output generated in 41.21 seconds (12.40 tokens/s, 511 tokens, context 16, seed 246113358)
Output generated in 40.99 seconds (12.47 tokens/s, 511 tokens, context 16, seed 667851869)
```
</details> | 0easy
|
Title: m.fit("Differential Evolution") should require bounded=True
Body: Currently, `m.fit("Differential Evolution")` raises an error regardless: of the bounds set in the parameters, because it checks `m._bounds_as_tuple()`, which is only set if `bounded=True`.
Solution for users is just to set `bounded=True` as an argument to `m.fit()` (and ensure that the bounds are set), but this is not clear in the error message:
```python
ValueError: Finite upper and lower bounds must be specified for every free parameter when `optimizer='Differential Evolution'`
``` | 0easy
|
Title: PositionalEmbedding
Body: The position embedding in BERT is not the same as in the Transformer. Why not use the form from BERT? | 0easy
|
Title: [BUG] lazy loading regression
Body: after 0.29.0 (big pyg[ai] merge), looks like import is slow again, likely some top-level imports getting triggered:

for reference, pandas is just 600ms on same box -- so we can be targeting subsecond, but aren't
step #1 is to profile again: https://www.graphistry.com/blog/import-pygraphistry-from-10s-to-1s-by-python-import-lazy-loading-and-tuna
| 0easy
|
Title: Change Request Closure Ratio metric API
Body: The canonical definition is here: https://chaoss.community/?p=4834 | 0easy
|
Title: If a single card-code line is too long, buying several hundred in bulk means the page cannot display them all and only shows part of them
Body: 
The card codes contain commas, so they wrap onto new lines; the card code length is 187. | 0easy
|
Title: docs: toggle Gurubase widget theme with original documentation one
Body: 
We should switch the Gurubase widget theme according to the documentation's preferred theme. Gurubase has already written an example of how this can be implemented with MkDocs:
https://github.com/Gurubase/gurubase-widget/issues/17
https://github.com/Gurubase/gurubase-widget/blob/master/examples/mkdocs/docs/js/theme-switch.js | 0easy
|
Title: Refactoring: split `tools.py` into libs
Body: `tools.py` has 2877 lines. We need to review the functions and:
* group functions by topics in `xonsh.lib`
* move functions to `xonsh.platforms` e.g. `hardcode_colors_for_win10`, `WIN10_COLOR_MAP`, `_win_bold_color_map`, etc
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Error in Renaming the Node
Body: <!-- In the following, please describe your issue in detail! -->
<!-- If some of the sections do not apply, just remove them. -->
### Short description
Error while Renaming node created using FlowChart class
### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below.
Ideally, this should be a full example someone else could run without additional setup. -->
```python
from PyQt5 import QtWidgets as Qtw
from pyqtgraph.flowchart import Flowchart


class MainWindow(Qtw.QWidget):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        layout = Qtw.QGridLayout()

        # Create flowchart, define input/output terminals
        fc = Flowchart(terminals={
            'Input': {'io': 'in'},
            'Output': {'io': 'out'},
        })

        layout.addWidget(fc.widget())
        self.setLayout(layout)
        self.show()


if __name__ == '__main__':
    import sys

    app = Qtw.QApplication(sys.argv)
    w = MainWindow(windowTitle="create FlowChart")
    sys.exit(app.exec_())
```
### Expected behavior
No Traceback
### Real behavior
<!-- What happens? -->
```
Traceback (most recent call last):
File "T:\flowchart_test\env\lib\site-packages\pyqtgraph\flowchart\Flowchart.py", line 219, in nodeRenamed
self.sigChartChanged.emit(self, 'rename', node)
File "T:\flowchart_test\env\lib\site-packages\pyqtgraph\flowchart\Flowchart.py", line 701, in nodeRenamed
KeyError: <Node lol @25bd816a7a0>
```
### Tested environment(s)
* PyQtGraph version: 0.12.3
* Qt Python binding: PyQt5
* Python version: 3.10
* Operating system: Windows 10
* Installation method: pip
| 0easy
|
Title: `create_result` returns x with lowest observed f(x), not lowest predicted f(x)
Body: This seems wrong to me. The optimizer seems to be aware of its uncertainty, re-evaluating the "fluke point" and discovering that it's actually not usually that great. But then it gives me the fluke point at the end! This is especially problematic when variance is non-uniform and lower near the true minimum.
Ideally, the convergence plot would be updated as well, showing the expected minimum as well as the variance around that estimate. This would lead to the convergence plot being non-monotonic, but it would converge on the true minimum rather than some other point. | 0easy
|
Title: python-gitlab uses implicit reexports
Body: python-gitlab does not define `__all__` which causes problems in client applications that are inspected with mypy.
I think the solution here is to just define `__all__` in `__init__.py` and maybe some other libraries that re-export.
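A minimal sketch of the idea (the names listed are illustrative; the real list would need to enumerate everything the package intends to re-export):
```python
# gitlab/__init__.py
from gitlab.client import Gitlab
from gitlab.exceptions import GitlabError

__all__ = ["Gitlab", "GitlabError"]
```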
| 0easy
|
Title: A better DHT benchmark
Body: __Problem:__ to maintain & enhance DHT performance, we need a way to measure it. Our current benchmark covers an edge case that does not represent our typical usage.
* [x] __Preliminaries__
* Install hivemind using [this quickstart guide](https://learning-at-home.readthedocs.io/en/latest/user/quickstart.html)
* Learn to use DHT using [this tutorial](https://learning-at-home.readthedocs.io/en/latest/user/dht.html)
* btw any feedback on tutorials is very welcome
* Run current benchmark: [``benchmark_dht.py``](https://github.com/learning-at-home/hivemind/blob/master/benchmarks/benchmark_dht.py)
* `cd benchmarks && python benchmark_dht.py --num_peers 256 --num_experts 4096 --expert_batch_size 16 --expiration 9999`
* [x] __Main quest__
* Instead of using just one peer for all get/store actions, we should spawn `--num_threads 16` parallel tasks that collectively run `--total_num_rounds` consecutive iterations (a rough sketch of one round follows this list). On each iteration:
* generate a unique random key, then pick `--num_store_peers` (from 1 to 10) random peers (default = 1)
* each peer should .store a unique subkey to the same key in parallel (using return_future=True from tutorial)
* after all peers finished storing, choose `--num_get_peers` that may or may not be the same as the peers that stored the value
* each of the new peers should .get the key with latest=True or False (option, default=True)
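A rough sketch of one such round, assuming `peers` is a list of already-started `hivemind.DHT` instances and the counts/expiration come from the CLI options above (the exact `store`/`get` signatures should be checked against the DHT tutorial):
```python
import random

import hivemind

key = f"key_{random.getrandbits(32)}"
store_peers = random.sample(peers, k=num_store_peers)

# Every chosen peer stores its own subkey under the same key, in parallel.
store_futures = [
    peer.store(
        key,
        subkey=f"subkey_{i}",
        value=i,
        expiration_time=hivemind.get_dht_time() + expiration,
        return_future=True,
    )
    for i, peer in enumerate(store_peers)
]
assert all(future.result() for future in store_futures)

# A possibly different set of peers reads the key back.
get_peers = random.sample(peers, k=num_get_peers)
get_futures = [peer.get(key, latest=True, return_future=True) for peer in get_peers]
values = [future.result() for future in get_futures]
```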
__Side quests (optional):__
- [x] measure the global frequency at which .get requests returned the correct output
- [x] measure the mean, stdev and max wall time for each .store and each .get request (use ddof=1 for stdev)
- [x] add an option to sleep for `--delay 0` seconds after store and before get
- [x] implement an option to randomly shut down `--failure_rate 0.1` peers halfway through operation, between store and get
- [ ] accelerate DHT swarm creation (@borzunov has some ideas)
- [ ] update [benchmarking docs](https://learning-at-home.readthedocs.io/en/latest/user/benchmarks.html#dht-performance)
- [ ] see if we can trick DHT into emulating network latency through [`tc qdisc netem`](https://netbeez.net/blog/how-to-use-the-linux-traffic-control/) and using public IP address instead of localhost
__Note:__ my naming convention is likely not very intuitive, feel free to propose a better one.
__Additional references:__
* API reference for DHT - https://learning-at-home.readthedocs.io/en/latest/modules/dht.html
* Original paper on Kademlia DHT - https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf
| 0easy
|
Title: Organizational Diversity metric API
Body: The canonical definition is here: https://chaoss.community/?p=3464 | 0easy
|
Title: [Feature request] Add apply_to_images to SaltAndPepper
Body: | 0easy
|
Title: Add captcha in user register form,
Body: user register form needs optinoal captcha
| 0easy
|
Title: Fix warning: Directly instantiating an LLMMathChain with an llm is deprecated
Body: It's reproducible by running `pytest`
```shell
======================================================== warnings summary ========================================================
../../pyenvs/langchain_api/lib/python3.9/site-packages/langchain/chains/llm_math/base.py:50
/Users/alexander/pyenvs/langchain_api/lib/python3.9/site-packages/langchain/chains/llm_math/base.py:50: UserWarning: Directly instantiating an LLMMathChain with an llm is deprecated. Please instantiate with llm_chain argument or using the from_llm class method.
warnings.warn(
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
```
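Based on the warning text, the fix is presumably just switching the instantiation style, e.g. (given some LLM object `llm`):
```python
from langchain.chains import LLMMathChain

# Deprecated pattern that triggers the warning:
# chain = LLMMathChain(llm=llm)

# Replacement suggested by the warning message:
chain = LLMMathChain.from_llm(llm=llm)
```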
| 0easy
|
Title: "Add page" on PageListingViewSet should skip choosing parent page if only one choice exists
Body: ### Is your proposal related to a problem?
According to [this Stack Overflow question](https://stackoverflow.com/questions/78513499/wagtail-6-1-pagelistingviewset-how-to-automatically-select-a-parent-page-when-c), the "Add new page" button on a `PageListingViewSet` view takes users to the "choose parent page" view and prompts them to select a parent page for the new page, even when only one valid parent exists.
### Describe the solution you'd like
If only one valid parent page exists, the view should skip the "choose parent page" prompt, and redirect directly to the main "create page" form to create a page under that parent. This matches the old behaviour of the ModelAdmin module (in Wagtail until 5.x and now available as the wagtail-modeladmin package).
### Working on this
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| 0easy
|
Title: Replace the `dict` response in `load_existing_datasets` with a Pydantic class
Body: We should avoid passing loosely typed dictionaries around if we can avoid it. This should be replaced by a Pydantic class.
Note: This is in `BufferedLogger`. | 0easy
|
Title: Feature request: Change ReCaptcha to another service
Body:
## Use case
The public server for wger uses reCAPTCHA for its captcha. reCAPTCHA by Google is not very privacy friendly, and it must be used to register a new account. Switching to another captcha provider that respects privacy more would fit the project better.
## Proposal
I would like wger to swap the use of recaptcha for registering an account to another captcha service, something like [hCaptcha](https://www.hcaptcha.com/).
| 0easy
|
Title: Remove docs for other projects from the Scrapy docs
Body: We have some auto* statements for things from other projects, like itemadapter in https://github.com/scrapy/scrapy/blob/92f86fab062d179fd68590f1be6a24ca13ec33b6/docs/topics/items.rst?plain=1#L404 and parsel in https://github.com/scrapy/scrapy/blob/92f86fab062d179fd68590f1be6a24ca13ec33b6/docs/topics/selectors.rst#L1037.
We should find these and remove them, instead linking to the other projects docs and maybe adding some explanations locally. | 0easy
|
Title: `[testenv]` `install_command` is also used for provision environment
Body: From #2427. If I specify a high `minversion` or `requires`, a provision environment is created. Apparently, it is using `install_command` from `[testenv]` section. Is this right? I thought that `testenv` means test environment and will not be used for anything else.
If this is correct behavior, perhaps document it? Also document the way to make provision environment not use test environment. Apparently, this works:
```ini
[testenv:.tox]
install_command = python -m pip install {opts} {packages}
[testenv]
install_command = your_custom_command
```
<details>
<summary>Moved to discussions</summary>
P.S. Why is it necessary to specify the full env name in a section? As in, why require this: `[testenv:my-cool-env]` but disallow this: `[testenv:cool]` or `[testenv:!.tox]`?
</details>
<details>
<summary>Moved to another issue</summary>
Also, documentation for `install_command` says,
> Note: You can also provide arbitrary commands to the install_command.
I don't think this is correct? Using this configuration, and running with `-vvvr`, I only see the first install command being executed:
```
[tox]
skip_install = true
skipsdist = true
envlist = foo

[testenv]
install_command =
    echo '!!!!!!!!!!!!!!!!!!!!!!! install 1' {packages}
    echo '!!!!!!!!!!!!!!!!!!!!!!! install 2' {packages}
commands =
    echo '!!!!!!!!!!!!!!!!!!!!!!! test' {envname}
deps =
    zoo
```
If `install_command` only accepts one command, perhaps fail if there are many? And the documentation shouldn't say it accepts command**s**?
</details>
<details>
<summary>This should probably be elsewhere</summary>
Finally, I want to note that this is not an error:
```
install_command =
echo '\
' {packages}
```
And this is also not an error:
```
install_command =
foo: echo '\
' {packages}
```
But this fails:
```
install_command =
foo3: echo '\
' {packages}
```
> File "/usr/lib/python3.8/shlex.py", line 191, in read_token
> raise ValueError("No closing quotation")
> ValueError: No closing quotation
</details> | 0easy
|
Title: Remove deprecated JS packages and get rid of vulnerability messages for dash-uploader 0.7.0
Body: A long time has gone from the early days of dash-uploader (there might something in the `package.json` even from the early 2018 from [dash-resumable-upload](https://github.com/rmarren1/dash-resumable-upload)).
The JS word changes fast, and it would be time to:
- [ ] Remove any dependencies in package.json that are not needed
- [ ] Update / change any dependencies of deprecated packages
- [ ] Update any package with known security issues, if fix available.
I think this should be handled before the release of the dash-uploader 0.7.0, which aims to [fix many other things, too](https://github.com/np-8/dash-uploader/discussions/67).
I'm not a front-end expert myself, so it probably will take me some time to clean the package.json. If anyone else here is more familiar with JS, and would like to take a look, please read [this](https://github.com/np-8/dash-uploader/blob/dev/docs/CONTRIBUTING.md) to get started! | 0easy
|
Title: Feature: add ExceptionMiddleware to handle processing exceptions
Body: We should provide users with a suitable mechanism to handle and process application exceptions like [**FastAPI** does](https://fastapi.tiangolo.com/reference/fastapi/#fastapi.FastAPI.exception_handler)
I think the best way to make such a mechanism both suitable and flexible at the same time is to implement it via a special built-in middleware, as in the following example:
Register handlers via the constructor:
```python
from faststream import ExceptionMiddleware
from faststream.nats import NatsBroker
async def handle_value_exception(exc: ValueError): ...
async def handle_type_exception(exc: TypeError): ...
exception_handle_middleware = ExceptionMiddleware(
{
ValueError: handle_value_exception,
TypeError: handle_type_exception,
}
)
broker = NatsBroker(middlewares=(exception_handle_middleware,))
```
And register handlers by decorator:
```python
from faststream import ExceptionMiddleware
from faststream.nats import NatsBroker
exception_handle_middleware = ExceptionMiddleware()
@exception_handle_middleware.add_handler(ValueError)
async def handle_value_exception(exc: ValueError): ...
@exception_handle_middleware.add_handler(TypeError)
async def handle_type_exception(exc: TypeError): ...
broker = NatsBroker(middlewares=(exception_handle_middleware,))
``` | 0easy
|
Title: Verify contributed recipes
Body: Contributed recipes are not checked in CI, were mostly added a long time ago and so they might be out-of-date.
It would be nice if someone checked them, updated packages versions used there and verified that they still work. | 0easy
|
Title: Update README.md with ask-and-tell example
Body: Hi,
I think it would be good to add a quick ask-and-tell example to the README.md. This is something quite unique about scikit-optimize that we should highlight more.
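Something along these lines could work (a minimal sketch; the objective and search space are made up for illustration):
```python
from skopt import Optimizer

def objective(x):
    return (x[0] - 0.3) ** 2  # toy objective just for the example

opt = Optimizer(dimensions=[(-2.0, 2.0)])

for _ in range(20):
    suggested = opt.ask()        # ask for the next point to evaluate
    y = objective(suggested)     # evaluate it however and wherever you like
    opt.tell(suggested, y)       # report the observation back

print(min(opt.yi))
```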
Also, not sure why the README now includes the CircleCI badge twice. | 0easy
|
Title: [NEW TRANSFORMER] arcsin transformation for decimal data
Body: For more info on the transformation:
https://stats.stackexchange.com/questions/251449/how-do-i-find-a-variance-stabilizing-transformation
https://www.stata.com/users/njc/topichlp/transint.hlp
Creating the new class is fairly straightforward.
Use the ReciprocalTransformer() as a template. Copy it into a new Python script called arcsin.py and edit the fit, transform and inverse_transform methods to apply the arcsin with np.arcsin and its inverse (np.sin should undo it on arcsin's output range, but we need to verify that).
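A rough sketch of the idea, skipping the Feature-engine base-class wiring and input checks (class name and method bodies are illustrative assumptions):
```python
import numpy as np
import pandas as pd

class ArcsinTransformer:
    """Illustrative sketch only: applies arcsin to proportions in [0, 1]."""

    def fit(self, X: pd.DataFrame, y=None):
        # nothing to learn; the real class should validate that 0 <= X <= 1
        return self

    def transform(self, X: pd.DataFrame) -> pd.DataFrame:
        return X.apply(np.arcsin)

    def inverse_transform(self, X: pd.DataFrame) -> pd.DataFrame:
        # sin undoes arcsin on its output range [-pi/2, pi/2]
        return X.apply(np.sin)
```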
Then, we need to adjust the docstrings with some info about the arcsin transformation, which can be found in the links above. | 0easy
|
Title: Inline environment variable: `ValueError: identifier field can't represent`
Body: ```xsh
$QWE=False echo 1
```
```xsh
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/xonsh/ptk_shell/shell.py", line 389, in _push
code = self.execer.compile(
^^^^^^^^^^^^^^^^^^^^
ValueError: identifier field can't represent 'False' constant
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Loudly deprecate singular section headers
Body: As discussed in #4431, we have made a decision to remove the support for singular section headers like `Test Case` and require using the plural form like `Test Cases`. Singular headers were deprecated already in RF 6.0, but at that point only the documentation was updated and no deprecation warnings were emitted. This issue covers emitting actual deprecation warnings when singular headers are used.
| 0easy
|
Title: Fix call of tifffile write to remove warnings
Body: ## 🧰 Task
<!-- A clear and concise description of the task -->
In the test we got multiple warnings:
```
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tifffile/tifffile.py:1428: DeprecationWarning: <tifffile.TiffWriter 'Labels.tif'> passing multiple values to the 'compression' parameter is deprecated since 2022.7.28. Use 'compressionargs' to pass extra arguments to the compression codec.
```
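A hedged sketch of the updated call the warning points to (the codec and level here are placeholders; the actual values in our code may differ):
```python
import numpy as np
import tifffile

labels = np.zeros((64, 64), dtype=np.uint16)

# old, deprecated style: compression=("zlib", 6)
# new style: the codec name and extra codec arguments are passed separately
tifffile.imwrite("Labels.tif", labels, compression="zlib", compressionargs={"level": 6})
```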
Our current minimum requirement for tifffile is 2022.4.8, so I think we could easily bump it and fix the calls to resolve the problem before this option is removed. | 0easy
|
Title: Allow the "value" argument to "@filter" to be optional and missing
Body: Some filter operators, such as `is_null` and `is_not_null`, are unary and do not require passing additional values. Currently, because the `@filter` directive specifies the `value` argument as non-null (`[String!]!`), that means that the syntax for those filters is the following:
```graphql
@filter(op_name: "is_null", value: [])
```
If we changed the definition to make the list nullable and keep only the contents non-nullable (`[String!]`), we could instead have the possibly more ergonomic syntax:
```graphql
@filter(op_name: "is_null")
```
Do you think this is an improvement in ergonomics and worth considering? | 0easy
|
Title: Change the log level of `Set Log Level` message from INFO to DEBUG
Body: When using `Set Log Level`, it logs a message like
> Log level changed from DEBUG to INFO.
using the INFO level. This message can be useful when debugging issues related to logging, but typically it doesn't add much value. As discussed in #4919, it gets especially annoying if we want to disable logging while flattening.
For example, a keyword like
```robotframework
Example
[Tags] robot:flatten
Log Message that we want to preserve.
${old} = Set Log Level NONE
Log Message that we want to remove.
Set Log Level ${old}
Log Another message that we want to preserve.
```
will yield following messages:
```
15:31:16.892 INFO Message that we want to preserve.
15:31:16.893 INFO Log level changed from NONE to INFO.
15:31:16.893 INFO Another message that we want to preserve.
```
The message about changing the level from NONE to INFO is totally useless and distracting. See #4919 for a real world example. | 0easy
|
Title: `BuiltIn.set_global/suite/test/local_variable` should not log if used by listener and no keyword is started
Body: `BuiltIn.set_global/suite/test/local_variable` methods are often used by libraries and listeners for setting variables. When these methods are used, they log the name and value of the variable that's set. That's typically fine, but not when they are used by listeners when keywords aren't started.
If they are used in `start/end_keyword`, messages are ignored, because it generally isn't possible to log in those methods. That used to be the case also in `start/end_test`, but that was changed by #5266 in RF 7.2. As a result, a listener like this causes a message to be logged:
```python
from robot.libraries.BuiltIn import BuiltIn
def start_test(data, result):
BuiltIn().set_test_variable('${EXAMPLE}', 'value')
```
The logged message is shown in the log file and it is also in `TestCase.body` in the result model. The message will look like `${EXAMPLE} = value`, so it's not very informative without additional context. In the result model it may cause problems with tools that don't expect messages to be present in the test body. Because these messages mostly add noise and they can cause severe problems, it's better not to log them in the first place. If a listener wants variable values to be shown, it can log such information explicitly with enough context. | 0easy
|
Title: Process: `Split/Join Command Line` do not work properly with `pathlib.Path` objects
Body: Several related problems:
- `Split Command Line` fails if given `pathlib.Path`.
- `Join Command Line` fails as well with Python < 3.8.
- `Join Command Line` could also handle other non-string values.
| 0easy
|
Title: Garmin CN GPX heart rate parsing issue
Body: On my self-deployed page I didn't see any bpm data. After looking at the GPX content and changing the code below, the heart rate showed up. Could the author please check whether this is the right way to fix it?
Content in the GPX file:
```xml
<extensions>
  <ns3:TrackPointExtension>
    <ns3:atemp>34.0</ns3:atemp>
    <ns3:hr>168</ns3:hr>
    <ns3:cad>113</ns3:cad>
  </ns3:TrackPointExtension>
</extensions>
```
Change in track.py.
Before:
```python
heart_rate_list.extend(
    [
        int(p.extensions[0].getchildren()[0].text)
        for p in s.points
        if p.extensions
    ]
)
```
After:
```python
heart_rate_list.extend(
    [
        int(p.extensions[0].getchildren()[1].text)
        for p in s.points
        if p.extensions
    ]
)
``` | 0easy
|
Title: Jaccard dissimilarity measure
Body: Are we adding a Jaccard dissimilarity measure? | 0easy
|
Title: Turn near-duplicate score test into a property-based test
Body: After updating the near-duplicate scores, a test was added to ensure that near-duplicate examples have worse scores than non-near-duplicates.
Right now, the test only works on a small, toy dataset. Turning it into a property-based test would be ideal.
So that the test doesn't take too long, it may be a good idea to make a different embedding strategy that:
1. Generates a random array
2. Selects a handful of random examples to have near-duplicates. Append a new example that takes the original example with variable noise.
```python
# append to existing feature array
x[sample_ids] + small_random_noise
```
3. The embedding strategy should always generate "proper" near-duplicates that will get flagged. We don't want to test cases where there are no near-duplicates.
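A possible shape for such a strategy, built on `hypothesis` (sizes, bounds and names are assumptions):
```python
import numpy as np
from hypothesis import strategies as st
from hypothesis.extra.numpy import arrays

@st.composite
def embeddings_with_near_duplicates(draw):
    # 1. a random base feature array
    base = draw(arrays(np.float64, shape=(20, 5),
                       elements=st.floats(-1, 1, allow_nan=False)))
    # 2. a handful of rows get a slightly perturbed copy appended
    ids = draw(st.lists(st.integers(0, 19), min_size=2, max_size=5, unique=True))
    scale = draw(st.floats(1e-6, 1e-4))
    noise = np.random.default_rng(0).normal(0.0, scale, size=(len(ids), 5))
    # 3. the result always contains "proper" near-duplicates to be flagged
    return np.vstack([base, base[ids] + noise]), ids
```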
Below is the existing test case for comparison. We want to make sure that the issue scores created by the issue manager don't violate this property.
https://github.com/cleanlab/cleanlab/blob/23af72a2bd260be3b39cf35f66a8c187ebc4d1d2/tests/datalab/issue_manager/test_duplicate.py#L89-L98
_Originally posted by @elisno in https://github.com/cleanlab/cleanlab/pull/943#discussion_r1442401494_ | 0easy
|
Title: return in finally can swallow exception
Body: ### Description
There are two places in `scrapy/contracts/__init__.py` where a `finally:` body contains a `return` statement, which would swallow any in-flight exception.
This means that if a `BaseException` (such as `KeyboardInterrupt`) is raised from the body, or any exception is raised from one of the `except:` clauses, it will not propagate as expected.
The pylint warning about this was suppressed in [this commit](https://github.com/scrapy/scrapy/commit/991121fa91aee4d428ae09e75427d4e91970a41b) but it doesn't seem like there was justification for that.
These are the two locations:
https://github.com/scrapy/scrapy/blame/b4bad97eae6bcce790a626d244c63589f4134408/scrapy/contracts/__init__.py#L56
https://github.com/scrapy/scrapy/blame/b4bad97eae6bcce790a626d244c63589f4134408/scrapy/contracts/__init__.py#L86
See also https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions. | 0easy
|
Title: Need to raise a warning in radardisplaycartopy when passed axes don't have a projection
Body: See https://github.com/ARM-DOE/pyart/blob/master/pyart/graph/radarmapdisplay_cartopy.py#L235
Twice now I have wondered why radardisplaycartopy ignored the axes I passed in. At the moment, if the passed axes don't have a projection, we override them, which can make users pull their hair out. We need to raise a warning here. | 0easy
|
Title: Allure throws UnicodeDecodeError when recieve \\u in test parameter value
Body: [//]: # (
. Note: for support questions, please use Stackoverflow or Gitter**.
. This repository's issues are reserved for feature requests and bug reports.
.
. In case of any problems with Allure Jenkins plugin** please use the following repository
. to create an issue: https://github.com/jenkinsci/allure-plugin/issues
.
. Make sure you have a clear name for your issue. The name should start with a capital
. letter and no dot is required in the end of the sentence. An example of good issue names:
.
. - The report is broken in IE11
. - Add an ability to disable default plugins
. - Support emoji in test descriptions
)
#### I'm submitting a ...
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
Allure can't escape the test name when it meets `\\u` in a test parameter value, and throws a UnicodeDecodeError exception.
#### Reproduction
Run code below with argument `--alluredir=somedir`
```python
import pytest
@pytest.mark.parametrize('param', ['\\u'])
def test_decode_error( param):
pass
```
Trace:
```
pytest test_allure.py --alluredir=allure-results -v
============================= test session starts =============================
platform win32 -- Python 2.7.15, pytest-3.7.1, py-1.5.4, pluggy-0.7.1 -- c:\git\allure_error\venv\scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\GIT\allure_error, inifile:
plugins: allure-pytest-2.5.0
collected 1 item
test_allure.py::test_decode_error[C:\source\unigine.vcxproj-/p:Platform=x64] ERROR [100%]
=================================== ERRORS ====================================
ERROR at setup of test_decode_error[C:\source\unigine.vcxproj-/p:Platform=x64]
self = <allure_pytest.listener.AllureListener object at 0x06697590>
item = <Function 'test_decode_error[C:\\source\\unigine.vcxproj-/p:Platform=x64]'>
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_setup(self, item):
uuid = self._cache.set(item.nodeid)
test_result = TestResult(name=item.name, uuid=uuid)
self.allure_logger.schedule_test(uuid, test_result)
yield
uuid = self._cache.get(item.nodeid)
test_result = self.allure_logger.get_test(uuid)
for fixturedef in _test_fixtures(item):
group_uuid = self._cache.get(fixturedef)
if not group_uuid:
group_uuid = self._cache.set(fixturedef)
group = TestResultContainer(uuid=group_uuid)
self.allure_logger.start_group(group_uuid, group)
self.allure_logger.update_group(group_uuid, children=uuid)
params = item.callspec.params if hasattr(item, 'callspec') else {}
> test_result.name = allure_name(item, params)
venv\lib\site-packages\allure_pytest\listener.py:78:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
venv\lib\site-packages\allure_pytest\utils.py:88: in allure_name
name = escape_name(item.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'test_decode_error[C:\\source\\unigine.vcxproj-/p:Platform=x64]'
def escape_name(name):
if six.PY2:
try:
name.decode('string_escape').encode('unicode_escape')
except UnicodeDecodeError:
return name.decode('string_escape').decode('utf-8')
> return name.encode('ascii', 'backslashreplace').decode('unicode_escape')
E UnicodeDecodeError: 'unicodeescape' codec can't decode bytes in position 27-28: truncated \uXXXX escape
venv\lib\site-packages\allure_pytest\utils.py:108: UnicodeDecodeError
```
#### Environment:
- Allure version: 2.7.0
- Test framework: [email protected]
- Allure adaptor: [email protected]
- Python version: 2.7.15
#### Other information
Without the `--alluredir` argument, the exception does not appear.
| 0easy
|
Title: Unable to chain find_by_css().find_by_text() with 0.9.0
Body: https://github.com/cobrateam/splinter/issues/426 seems to have resurfaced in 0.9.0.
With 0.8.0, `browser.find_by_css(".onboarding-modal").find_by_text("Slack")` finds one element on my site, but 0.9.0 returns none.
I think https://github.com/cobrateam/splinter/commit/1cfd05086a5ccb1c3297072e2bc0e713e71b2156 is the commit that changed this behavior.
I can try to find a reproducible example on a public site if necessary. | 0easy
|
Title: [New transforms] Add parameter tweaks in spaces other than HSV.
Body: | 0easy
|
Title: Could you please add support for Xunhu Pay (讯虎支付)?
Body: https://pay.xunhuweb.com/ | 0easy
|
Title: Good First Issue >> Strictly Increasing / Decreasing
Body: Thanks for Pandas-Ta
Can you add a "strictly" boolean parameter to the Increasing / Decreasing function?
Now you are checking the difference between the current value with a length value ie. increasing = close.diff(length) > 0
But I want to check a continuously increasing / decreasing condition, like:
```python
test_list = [1, 4, 5, 7, 8, 10]
res = all(i < j for i, j in zip(test_list, test_list[1:]))
# res is True
test_list = [1, 8, 5, 4, 8, 10]
# the same check now gives False
``` | 0easy
|
Title: Libdoc performance degradation starting from RF 6.0
Body: In the project I'm working on we use dynamic python libraries with 10k+ keywords. After migrating to RF 6.0 we noticed that the libdoc execution time doubled.
Here is a comparison of the time taken by libdoc in each version given the same parameters:

Digging into the code a bit, I noticed that there is a class called `Languages` that is instantiated once for each keyword, but could be instantiated once and reused n times. Here is a pyinstrument screenshot:

## Proposed Fixes
* Suggestion 1: A pre-created instance of `Languages` could be provided when calling `TypeConverter.converter_for`:
https://github.com/robotframework/robotframework/blob/c9c9d414d8aa66959d3df7ef1895516c1d37d6c2/src/robot/libdocpkg/datatypes.py#L55
* Suggestion 2: Convert the class `Languages` into a singleton.
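For example, a cached default instance could cover both suggestions (a sketch only; the real fix must respect how `Languages` is configured, and the import path is assumed):
```python
from functools import lru_cache

from robot.conf import Languages  # import path assumed


@lru_cache(maxsize=None)
def default_languages() -> Languages:
    # build the (expensive) Languages instance once and reuse it everywhere
    return Languages()

# datatypes.py / TypeConverter.converter_for would then call default_languages()
# instead of constructing a new Languages object per keyword.
```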
| 0easy
|
Title: Marketplace - creator page - add 24px to the padding at the bottom of the page
Body:
### Describe your issue.
Please add 24px to the padding at the bottom of the page, so that it totals 90px.
<img width="1318" alt="Screenshot 2024-12-16 at 21 28 21" src="https://github.com/user-attachments/assets/6773fd48-e7e2-482b-a0c5-774230b99ff6" />
| 0easy
|
Title: Display name spill-over
Body: In isolated cases users misuse their freedom to use unicode display names to set bad names which spill over to other parts of the UI. Specifically affected are the leaderboard and trollboard pages which list users.
Use css styles to clip display names at the border of the surrounding cell's box.
| 0easy
|
Title: [DOC] `get_features_targets` should be removed from general_functions.rst
Body: # Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
While reading the docs I found that the [`get_features_targets` page](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.get_features_targets.html) was blank.
After #556 was merged, `get_features_targets` should also have been removed from general_functions.rst.
But #556 didn't remove it, so that page became blank.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [`get_features_targets` page is blank](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.get_features_targets.html)
- [general_functions.rst should update](https://github.com/pyjanitor-devs/pyjanitor/blob/dev/docs/reference/general_functions.rst)
| 0easy
|
Title: Messages logged by `start_test` and `end_test` listener methods are ignored
Body: If `start_test` or `end_test` listener method logs something using `robot.api.logger`, the messages don't end up to output.xml and log.html. Although they go to the syslog, the syslog isn't enabled by default and thus they are in practice ignored.
I think this was a technical limitation earlier when tests could only contain keywords. Nowadays they have a generic `body` that can contain keywords, control structures and also messages. We just explicitly forward messages logged before any keyword or control structure has started to syslog.
I'm not sure should this issue be considered a bug or an enhancement, but decided to go with the former. Either way, this is easy to fix. | 0easy
|
Title: BBands documentation incomplete
Body: The [documentation](https://alpha-vantage.readthedocs.io/en/latest/source/alpha_vantage.html#alpha_vantage.techindicators.TechIndicators.get_bbands) for `get_bbands()` seems to be missing the `time_period` argument (as seen in the [Technical Indicators](https://github.com/RomelTorres/alpha_vantage#technical-indicators) example.
That, or the example is wrong and needs updating... | 0easy
|
Title: ScriptRunner: Cannot import modules from other directories
Body: I am trying to run the pipeline via the ScriptRunner and am not able to import modules from other directories.
My current project structure looks like this:
```
└── project/
├── common/
│ └── lib
├── file1.py --> want to import common.lib here
├── file2.py
└── pipeline.yaml
```
Adding the project to sys.path fixes this issue.
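A minimal sketch of the workaround (assuming `common/` is a package; the exact path to add depends on the project layout):
```python
# at the top of file1.py
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent))  # the project directory

from common import lib  # assumes common/ has an __init__.py
```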
Note: This behaviour is not observed with the default NotebookRunner. | 0easy
|
Title: returns None
Body: Please tell me why it returns None
```python
from pybit import usdt_perpetual
import time
import pandas as pd
import pandas_ta as ta
import numpy as np
print(ta.version)
0.3.14b0
session_unauth = usdt_perpetual.HTTP(endpoint="https://api.bybit.com")
def klines():
session=session_unauth.query_mark_price_kline(symbol="BTCUSDT", interval=60, limit=500, from_time=int(time.time()-60*60*120))
session=session['result']
df=pd.DataFrame(session)
df=df.iloc[:,[6]]
return df
print(type(klines()))
close = klines()
m = ta.sma(close, 14)
print(m)
```
```sh
<class 'pandas.core.frame.DataFrame'>
None
``` | 0easy
|
Title: `TimeSeries.plot` Title as parameter
Body: **Is your feature request related to a current problem? Please describe.**
The `TimeSeries.plot` method sets the plot title to the name of the underlying xarray's name.
However, it is unclear to me (and to my knowledge not documented) how to set this name without touching the private `._xa`.
In my experience, the name attribute is not set if the series is not constructed directly from a named xarray, which is rarely the case.
**Describe proposed solution**
I propose a new parameter for the plot function, that can overwrite this behaviour.
- title (str): Set a title for the figure. Default is None, which plots the name of the underlying xarray (if any)
https://github.com/unit8co/darts/blob/71a1902b1c28b579328984cd3a37349cd9a9aba9/darts/timeseries.py#L4257
would change then to:
`ax.set_title(title if title else self._xa.name)`
**Describe potential alternatives**
- Extend the TimeSeries API so a name can be set. E.g. `TimeSeries.set_name(self, name: str)`
- Provide a name parameter for all TimeSeries construction methodes, such as e.g. `TimeSeries.from_array(..., name: str)`
I would prefer the proposed solution, which is more flexible for the user.
**PR**
I can do a PR for that.
| 0easy
|
Title: Allure report showing NaN% while using the --device-matrix capability in JSON format
Body: Hi Team,
I am facing an issue generating the Allure report with allure-pytest-bdd version 2.9.45. We are passing browser capabilities using a JSON file:
```json
[
    {
        "local": "true",
        "browser": "Chrome",
        "driver_path": "./binaries/webdriver/chromedriver",
        "resolution": "1900x1100",
        "headless": "true"
    }
]
```
```sh
python -m pytest -v --gherkin-terminal-reporter --tags "$TAGS" -n "$THREADS" --device-matrix "$CONFIG_JSON_FILE" --alluredir=output/reports/allure-results
```
It is creating an extra dictionary in the Allure result JSON, due to which the report shows NaN%.

JSON result (note the extra `custom_browser_config` object under `parameters`):
```json
{
"name":"Navigate to championspfizercom with proxy enabled [chrome]",
"status":"passed",
"steps":[
{
"name":"Given The browser resolution is '1367' per '768'",
"status":"passed",
"start":1648644612533,
"stop":1648644615274
},
{
"name":"Given I am on the url 'https://championspfizercom-preview.dev.pfizerstatic.io/'",
"status":"passed",
"start":1648644615276,
"stop":1648644617743
},
{
"name":"Then I expect that the title is not 'Sign in ・ Cloudflare Access'",
"status":"passed",
"start":1648644617749,
"stop":1648644627957
},
{
"name":"And I expect that the title is 'Alliance Anticoagulation Foundation | Homepage'",
"status":"passed",
"start":1648644627974,
"stop":1648644627981
}
],
"parameters":[
{
"name":"custom_browser_config",
"value":{
"local":"true",
"browser":"Chrome",
"driver_path":"./binaries/webdriver/chromedriver",
"headless":"true"
}
}
],
"start":1648644612528,
"stop":1648644627982,
"uuid":"1fae8e62-a32d-b230-6436-801757483f67",
"historyId":"1fae8e62a32db2306436801757483f67",
"fullName":"features\\proxy.feature:Navigate to championspfizercom with proxy enabled",
"labels":[
{
"name":"host",
"value":"DESKTOP-7MNDH5C"
},
{
"name":"thread",
"value":"13280-MainThread"
},
{
"name":"framework",
"value":"pytest-bdd"
},
{
"name":"language",
"value":"cpython3"
},
{
"name":"feature",
"value":"Tests to verify proxy config"
}
]
}
```
#### I'm submitting a ...
- [ ] bug report
#### Please tell us about your environment:
- Allure version: 2.1.0
- Test framework: [email protected]
- Allure adaptor: [email protected]
Kindly advise.
| 0easy
|
Title: Types of Contributions metric API
Body: The canonical definition is here: https://chaoss.community/?p=3432 | 0easy
|
Title: missing documentation: RandomAdder
Body: Please add basic information for the documentation. | 0easy
|
Title: feat: Switch component
Body: Would be nice to wrap https://v2.vuetifyjs.com/en/components/switches/ using ipyvuetify, similar to checkbox https://github.com/widgetti/solara/blob/9cac7202072ab3fb3b316292501a1b880cc828de/solara/components/checkbox.py#L10
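A rough sketch of what the wrapper might look like, mirroring the Checkbox pattern (the reacton/ipyvuetify calls and parameter names here are assumptions and would need to be checked against the Checkbox implementation):
```python
import reacton.ipyvuetify as rv
import solara


@solara.component
def Switch(label: str = "", value: bool = False, on_value=None, disabled: bool = False):
    """Hypothetical Switch wrapper, analogous in shape to solara's Checkbox."""
    return rv.Switch(
        label=label,
        v_model=value,
        on_v_model=on_value,
        disabled=disabled,
    )
```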
See https://github.com/widgetti/solara/commit/8480176b66f43c31b3c384d48ec9f35287f31ffc for the original commit that added the checkbox. | 0easy
|
Title: Tuple index out of range in Pyppeteer Exception _rewriteError
Body: Hey team,
I am getting a bit of an odd error that may be a bug, to be truthful I am unsure.
I am using Python 3.8.2.
For reference, I am automatically scanning a large number of URLs that may or may not exist/be online - basically think of them as the worst possible types of URLs you could throw at it - timeouts, URL doesn't actually exist, you get the idea. Each URL is checked to see if it loads with http:// or https:// protocol (as some cases only load with the specific protocol but not the other). As it goes, I am trying to catch any redirections and add them to a list. By the time this code runs, the browser has been launched and a page has been created. Keep in mind this isn't the exact code (for example Exception catching is much more thorough), but roughly the logic is as follows:
```
    async def cCheck(self, page, url, pCheck):  # url added as a parameter here; the original snippet referenced it without defining it
try:
with async_timeout.timeout(120):
httpsURL = url.replace('http://','https://', 1)
httpURL = url.replace('https://','http://', 1)
if pCheck is True: #All this does is flip between HTTP vs HTTPS - the flag is determined in the prior function
if url.startswith('http://'):
url = httpsURL
elif url.startswith('https://'):
url = httpURL
else:
url = url
redirectionHistory = [url]
htmlSource = 'Source Code Not Found'
pageTitle = 'Page Title Not Found'
browserResponse = None
page.setDefaultNavigationTimeout(60000)
page.on(pyppeteer.frame_manager.FrameManager.Events.FrameNavigated,
lambda event: redirectionHistory.append(page.url))
browserResponse = await page.goto(url,waitUntil=['load','domcontentloaded','networkidle0'])
htmlSource = await page.content()
pageTitle = await page.title()
except Exception as e:
log.error('Could not parse results of '+url+' due to the following error: '+str(e)+ ' traceback: ' + traceback.format_exc())
return
if browserResponse is not None:
#Do Stuff
else:
#Do Stuff
```
I am getting a transient error (as in, most URLs I am checking this does not happen) with the below traceback.
When this occurs, it occurs as an uncaught exception and everything grinds to a halt.
```
2020-05-27 08:00:00,000 - model.check - ERROR - check : 424 - Could not parse results of http://notarealdomain.com due to the following error: tuple index out of range traceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/pyppeteer/execution_context.py", line 105, in evaluateHandle
'userGesture': True,
concurrent.futures._base.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/afcc/watcher/model/check.py", line 405, in cCheck
htmlSource = await page.content()
File "/usr/local/lib/python3.6/dist-packages/pyppeteer/page.py", line 805, in content
return await frame.content()
File "/usr/local/lib/python3.6/dist-packages/pyppeteer/frame_manager.py", line 393, in content
'''.strip())
File "/usr/local/lib/python3.6/dist-packages/pyppeteer/frame_manager.py", line 309, in evaluate
pageFunction, *args, force_expr=force_expr)
File "/usr/local/lib/python3.6/dist-packages/pyppeteer/execution_context.py", line 54, in evaluate
pageFunction, *args, force_expr=force_expr)
File "/usr/local/lib/python3.6/dist-packages/pyppeteer/execution_context.py", line 108, in evaluateHandle
_rewriteError(e)
File "/usr/local/lib/python3.6/dist-packages/pyppeteer/execution_context.py", line 235, in _rewriteError
if error.args[0].endswith('Cannot find context with specified id'):
IndexError: tuple index out of range
```
What seems to be happening is it is trying to fire the _rewriteError in execution_context.py, where it hits the line:
```
if error.args[0].endswith('Cannot find context with specified id'):
```
in
```
def _rewriteError(error: Exception) -> None:
if error.args[0].endswith('Cannot find context with specified id'):
msg = 'Execution context was destroyed, most likely because of a navigation.' # noqa: E501
raise type(error)(msg)
raise error
```
And it then generates an uncaught IndexError, at which point everything breaks down.
If I understand the above error correctly, it is caused by trying to process a page that has since navigated to a different URL/rendered JS. However, I have goto's waitUntil set so it should be waiting for it to finish, yes?
What seems to be happening is it is hitting:
```
htmlSource = await page.content()
```
And then tries to raise the 'Execution context was destroyed, most likely because of a navigation.' error, but then fails because the line:
```
if error.args[0].endswith('Cannot find context with specified id'):
```
Causes an IndexError.
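As a possible defensive fix on the pyppeteer side (just a sketch of the idea, not a tested patch), `_rewriteError` could check the shape of `error.args` before indexing:
```python
def _rewriteError(error: Exception) -> None:
    # guard against exceptions whose args are empty or not strings
    msg = error.args[0] if error.args and isinstance(error.args[0], str) else ''
    if msg.endswith('Cannot find context with specified id'):
        raise type(error)(
            'Execution context was destroyed, most likely because of a navigation.')
    raise error
```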
I've been trying to diagnose this on my own for days, so I'm finally turning to here to see if anyone has any suggestions, and to run it by you as a possible bug.
Please tell me if you need any further information/code.
Thanks in advance to anyone who is able to help out. | 0easy
|
Title: Add references and links to conference talks explaining Testbook
Body: Add a section to the docs that links out to talk videos and slidedecks. | 0easy
|
Title: [GOOD FIRST ISSUE]: Encapsulate sleep timers
Body: ### Issue summary
Encapsulate sleep timers
### Detailed description
Slightly better code would not have so many sleep literals floating around. I propose encapsulating the sleep timers in short/medium sleep helper functions in the utils module. This will make editing them in the future simpler.
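A minimal sketch of what that could look like (function and constant names are just suggestions):
```python
# utils.py
import time

SHORT_SLEEP_SECONDS = 1
MEDIUM_SLEEP_SECONDS = 5

def short_sleep() -> None:
    time.sleep(SHORT_SLEEP_SECONDS)

def medium_sleep() -> None:
    time.sleep(MEDIUM_SLEEP_SECONDS)

# call sites then use short_sleep() / medium_sleep() instead of scattered time.sleep(...) literals
```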
### Steps to reproduce (if applicable)
_No response_
### Expected outcome
encapsulated sleep timers
### Additional context
_No response_ | 0easy
|
Title: Possible Bug in the Center of Gravity Indicator
Body: When I ran `Center of Gravity: cg` over 3 months of Bitcoin prices ("20200801" to "20201101"), I got
| | Close | cg |
| -- | -- | -- |
| count | 132481.000000 | 132472.000000 |
| mean | 11378.306788 | -5.499988 |
| std | 844.355621 | 0.001991 |
| min | 9881.820000 | -5.616297 |
| 25% | 10710.500000 | -5.500833 |
| 50% | 11368.680000 | -5.499987 |
| 75% | 11742.540000 | -5.499146 |
| 100% | 14068.490000 | -5.458667 |
After reading http://www.mesasoftware.com/papers/TheCGOscillator.pdf, I believe there should be both positive and negative values, as well as a fair bit of variance. Based on the equations and pseudo code given in the pdf, I think a `.sum()` at the end of the following line, https://github.com/twopirllc/pandas-ta/blob/development/pandas_ta/momentum/cg.py#L42, is needed.
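For reference, a standalone sketch of the weighted sum I would expect (this follows the Ehlers formula directly rather than pandas-ta's internals, so treat it as illustrative):
```python
import numpy as np
import pandas as pd

def cg_window(window: np.ndarray) -> float:
    # Ehlers: CG = -sum((i + 1) * price_i) / sum(price_i), weight 1 on the newest bar
    weights = np.arange(1, len(window) + 1)
    return -(weights * window[::-1]).sum() / window.sum()

def cg(close: pd.Series, length: int = 10) -> pd.Series:
    return close.rolling(length).apply(cg_window, raw=True)
```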
Thanks again for the amazing library! | 0easy
|
Title: Better warning message for Vis and VisList
Body: We should display a better warning message when the user specifies an intent that indicates more than one item, but puts it into Vis instead of VisList.
For example, if I do:
```
Vis(["AverageCost","SATAverage", "Geography=?"],df)
```

The error that I get is not very interpretable. We should warn users that the intent should be used with a VisList as shown in the screenshot below. If possible, we might even map the intent down to a VisList so that we display something to the user.
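A hypothetical sketch of such a check (the function names are made up, not Lux's actual internals):
```python
import warnings

def intent_describes_multiple_vis(intent) -> bool:
    # hypothetical heuristic: a wildcard "?" means the intent expands to several charts
    return any(isinstance(clause, str) and "?" in clause for clause in intent)

def warn_if_vis_misused(intent) -> None:
    if intent_describes_multiple_vis(intent):
        warnings.warn(
            "This intent corresponds to more than one visualization; "
            "use VisList instead of Vis (or map the intent down to a VisList)."
        )
```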

| 0easy
|
Title: [Jobs] Refactor jobs logs from backend to jobs lib
Body: <!-- Describe the bug report / feature request here -->
Like #4046, to reduce the complexity of our backend, we should refactor the jobs (spot) logs into the jobs lib as well.
| 0easy
|