text | labels
---|---|
Title: [BUG] feat and umap cache incomplete runs
Body: **Describe the bug**
When a run is canceled (or an exception is raised) during feat/umap, the partial result gets cached and used by subsequent runs. We expect such runs not to be cached, i.e., to be recomputed on future runs.
**Expected behavior**
The caching flow should catch the interrupt/exn, ensure no caching, and rethrow.
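For illustration, a minimal sketch of the desired flow (names are illustrative, not PyGraphistry internals):
```python
def compute_with_cache(cache: dict, key, compute):
    """Cache only completed runs; interrupts and exceptions are rethrown without caching."""
    if key in cache:
        return cache[key]
    try:
        result = compute()
    except BaseException:        # includes KeyboardInterrupt / cancellation
        cache.pop(key, None)     # ensure no partial entry survives
        raise                    # rethrow so the caller still sees the failure
    cache[key] = result
    return result
```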
| 0easy
|
Title: [Feature request] Add apply_to_images to CoarseDropout
Body: | 0easy
|
Title: Incorrect indent for multiline string within list
Body: **Describe the bug**
**To Reproduce**
Input:
```python
def test_snapshot(snapshot):
    data = {
        "key": [
            "line 1\nline 2"
        ]
    }
    assert data == snapshot
```
```
pytest --snapshot-update
```
Output:
```ambr
# name: test_snapshot
  <class 'dict'> {
    'key': <class 'list'> [
      '
line 1
line 2
',
    ],
  }
---
```
**Expected behavior**
The lines of the multiline string should be indented:
```ambr
# name: test_snapshot
  <class 'dict'> {
    'key': <class 'list'> [
      '
      line 1
      line 2
      ',
    ],
  }
---
```
syrupy==0.6.1
| 0easy
|
Title: Feat: add warning for NATS subscriber factory if user sets useless options
Body: **Describe the bug**
The extra_options parameter is not utilized when using pull_subscribe in nats.
Below are the function signatures from nats-py:
``` python
async def subscribe(
    self,
    subject: str,
    queue: Optional[str] = None,
    cb: Optional[Callback] = None,
    durable: Optional[str] = None,
    stream: Optional[str] = None,
    config: Optional[api.ConsumerConfig] = None,
    manual_ack: bool = False,
    ordered_consumer: bool = False,
    idle_heartbeat: Optional[float] = None,
    flow_control: bool = False,
    pending_msgs_limit: int = DEFAULT_JS_SUB_PENDING_MSGS_LIMIT,
    pending_bytes_limit: int = DEFAULT_JS_SUB_PENDING_BYTES_LIMIT,
    deliver_policy: Optional[api.DeliverPolicy] = None,
    headers_only: Optional[bool] = None,
    inactive_threshold: Optional[float] = None,
) -> PushSubscription:

async def pull_subscribe(
    self,
    subject: str,
    durable: Optional[str] = None,
    stream: Optional[str] = None,
    config: Optional[api.ConsumerConfig] = None,
    pending_msgs_limit: int = DEFAULT_JS_SUB_PENDING_MSGS_LIMIT,
    pending_bytes_limit: int = DEFAULT_JS_SUB_PENDING_BYTES_LIMIT,
    inbox_prefix: bytes = api.INBOX_PREFIX,
) -> JetStreamContext.PullSubscription:
```
**How to reproduce**
```python
import asyncio

from faststream import FastStream
from faststream.nats import PullSub, NatsBroker
from nats.js.api import DeliverPolicy

broker = NatsBroker()
app = FastStream(broker)

@broker.subscriber(subject="test", deliver_policy=DeliverPolicy.LAST, stream="test", pull_sub=PullSub())
async def handle_msg(msg: str): ...

if __name__ == "__main__":
    asyncio.run(app.run())
```
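For illustration, a sketch of the kind of warning the subscriber factory could emit when `pull_sub` is set together with options that only apply to push subscriptions (the option list and helper name are illustrative, not FastStream internals):
```python
import warnings

# Options accepted by subscribe() but ignored by pull_subscribe() (illustrative subset).
PUSH_ONLY_OPTIONS = {"deliver_policy", "idle_heartbeat", "flow_control", "headers_only", "ordered_consumer"}

def warn_about_useless_options(pull_sub: bool, extra_options: dict) -> None:
    """Warn if pull consumption is requested together with push-only options."""
    if not pull_sub or not extra_options:
        return
    useless = sorted(PUSH_ONLY_OPTIONS & set(extra_options))
    if useless:
        warnings.warn(
            f"Options {useless} have no effect with pull_sub and will be ignored",
            RuntimeWarning,
            stacklevel=3,
        )
```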
| 0easy
|
Title: WHILE `on_limit` missing from listener v2 attributes
Body: WHILE loops got an `on_limit` option for controlling what to do if the loop limit is reached in RF 6.1 (#4562). It seems we forgot to add that to the attributes passed to `start/end_keyword` methods of the listener v2 API. The User Guide claims it would be there which makes the situation worse. | 0easy
|
Title: Improve GitHub templates (issues, PRs and discussions)
Body: People should first create discussions, and the discussion should provide an MRE (minimal reproducible example) if it's supposed to be a bug report. | 0easy
|
Title: GUI Setup Instructions for dedyn.io Domains Misleading
Body: When creating a new dedyn domain, the GUI shows this:

This is misleading because the user **does not** need to set up any DS records in the case of dedyn domains.
Proposed fix: the GUI should check whether the name falls under the public suffix domains and make the instructions conditional. For dedyn / local public suffix domains, it should not display the information on DS records (or should instead show a hint that the DS records are deployed automatically).
The list of local public suffixes is [available in `DomainSetup.vue` as `LOCAL_PUBLIC_SUFFIXES`](https://github.com/desec-io/desec-stack/blob/ef688c410e3918ff1aeef4b0585aa78b5e4dfc84/www/webapp/src/views/DomainSetup.vue#L165) but is currently not used.
@Rotzbua Maybe you are interested in taking a look at this? :rocket: | 0easy
|
Title: [ENH] Series toset() functionality
Body: # Brief Description
I would like to propose toset() functionality similar to [tolist()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.tolist.html).
Basically it will be a call to tolist() followed by conversion to a set.
Note: if the series contains an unhashable member, this will raise an exception.
# Example API
```python
# convert series to a set
df['col1'].toset()
```
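A sketch of what the implementation could amount to, per the description above (just `tolist()` followed by `set()`); this is illustrative, not an existing pyjanitor API:
```python
import pandas as pd

def toset(series: pd.Series) -> set:
    """Return the Series values as a set; raises TypeError if any value is unhashable."""
    return set(series.tolist())

# e.g. toset(pd.Series([1, 2, 2, 3])) -> {1, 2, 3}
```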
| 0easy
|
Title: [Feature] enable SGLang custom all reduce by default
Body: ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
We need community users to help test these cases. After confirming that there are no issues, we will default to using the custom all reduce implemented in SGLang. You can reply with your test results below this issue. Thanks!
**GPU Hardware Options**:
- H100/H200/H20/H800/A100
**Model Configurations with Tensor Parallelism (TP) Settings**:
- Llama 8B with TP 1/2/4/8
- Llama 70B with TP 4/8
- Qwen 7B with TP 1/2/4/8
- Qwen 32B with TP 4/8
- DeepSeek V3 with TP 8/16
**Environment Variables**:
```
export USE_VLLM_CUSTOM_ALLREDUCE=0
export USE_VLLM_CUSTOM_ALLREDUCE=1
```
**Benchmarking Commands**:
```bash
python3 -m sglang.bench_one_batch --model-path model --batch-size --input 128 --output 8
python3 -m sglang.bench_serving --backend sglang
```
### Related resources
_No response_ | 0easy
|
Title: Add return codes to mouse.wait_for_click and keyboard.wait_for_keypress
Body: ### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Which Linux distribution did you use?
N/A
### Which AutoKey GUI did you use?
_No response_
### Which AutoKey version did you use?
N/A
### How did you install AutoKey?
N/A
### Can you briefly describe the issue?
I want to know how a wait for click or keypress completed so I can use that information for flow control in scripts.
E.g. a loop continues indefinitely until the mouse is clicked. This has to distinguish between a timeout and a click.
The same thing with a loop terminated by a keypress.
### Can the issue be reproduced?
N/A
### What are the steps to reproduce the issue?
N/A
### What should have happened?
These API calls should return 0 for success and one or more defined non-zero values to cover any alternatives. So far, 1 for timeout/failure is all that comes to mind.
### What actually happened?
AFAIK, they do not return any status code - which is equivalent to returning 0 no matter what happened.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | 0easy
|
Title: Users in admin scoreboard show user position instead of team position
Body: In teams mode on the admin panel, users are shown with their user position on the scoreboard instead of their team's position. We should be showing both. | 0easy
|
Title: [FEATURE] Rename the limit method to take. Add it in Fugue SQL
Body: **Context**
The limit method pull request is close to being [merged](https://github.com/fugue-project/fugue/pull/131). This method should be renamed to `head` because there is already an ANSI SQL keyword named `LIMIT`. After renaming this to `head`, implement the `HEAD` keyword for Fugue SQL with the same parameters as the execution engine definition.
| 0easy
|
Title: chandelier_exit
Body: Here is some code I have written for chandelier_exit; if possible, please add it to the repo.
```python
import pandas_ta as ta  # used for ta.Strategy and the df.ta accessor


def chandelier_exit(df1, atr_length=14, roll_length=22, mult=2, use_close=False):
    df = df1.copy()
    df.columns = df.columns.str.lower()
    my_atr = ta.Strategy(
        name="atr",
        ta=[{"kind": "atr", "length": atr_length, "col_names": ("ATR",)}]
    )
    # Run it
    df.ta.strategy(my_atr, append=True)
    if use_close:
        df['chandelier_long'] = df.rolling(roll_length)["close"].max() + df.iloc[-1]["ATR"] * mult
        df['chandelier_short'] = df.rolling(roll_length)["close"].min() - df.iloc[-1]["ATR"] * mult
    else:
        df['chandelier_long'] = df.rolling(roll_length)["high"].max() - df.iloc[-1]["ATR"] * mult
        df['chandelier_short'] = df.rolling(roll_length)["low"].min() + df.iloc[-1]["ATR"] * mult
    df.loc[df['close'] > df['chandelier_long'].shift(1), 'chd_dir'] = 1
    df.loc[df['close'] < df['chandelier_short'].shift(1), 'chd_dir'] = -1
    # chd = df[['chandelier_long', 'chandelier_short', 'chd_dir']]
    return df
``` | 0easy
|
Title: /index query results are cut off for larger prompts
Body: If the prompt is large or multiple sentences, the results for /index query run the risk of being cut off:

| 0easy
|
Title: Support math.nextafter in nopython mode
Body:
## Feature request
`math.nextafter` was added in Python 3.9. However, numba doesn't seem to support it in nopython mode. Can support please be added? | 0easy
|
Title: Add method to compute Out-of-bag Data Shapley value
Body: Add this efficient method for Data Shapley and Data Valuation with arbitrary ML models:
[Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value](https://arxiv.org/abs/2304.07718)
Make sure the code runs quickly, but is numerically reproducible/stable too | 0easy
|
Title: Coinbene publishes repeated trades
Body: Coinbene uses REST for market data. To get trades, its API allows you to request the last N trades, up to 2,000. (See https://github.com/Coinbene/API-Documents/wiki/1.1.2-Get-Trades-%5BMarket%5D ). Thus, a way to get trades is to repeatedly query the last trades and publish only non-repeated trades. The current code does this, but there's a bug in the logic, in this function:
https://github.com/bmoscon/cryptofeed/blob/5016b0cdba5ae69d7f86e403f9ada59f6446a79a/cryptofeed/coinbene/coinbene.py#L28
Let's say you get trades with ids: a,b,c. You'll publish these and store a,b,c in the last_trade_update dict. If a,b,c repeats again next poll, you won't publish, but the last_trade_update will get cleared out since the update dict is never updated. If a,b,c comes up again, it will be republished.
You may have to check on timestamps instead, or keep a list of all trade IDs. Several APIs in other exchanges allow you to get all trades after a given timestamp, so perhaps timestamp checking is the way to go?
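For illustration, a sketch of the timestamp-based dedup idea (names are illustrative, not the cryptofeed code):
```python
# Keep the newest timestamp seen per pair and only publish strictly newer trades.
last_trade_ts: dict = {}

def filter_new_trades(pair: str, trades: list) -> list:
    cutoff = last_trade_ts.get(pair, 0.0)
    new = [t for t in trades if t["timestamp"] > cutoff]
    if new:
        last_trade_ts[pair] = max(t["timestamp"] for t in new)
    return new
```
| 0easy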
|
Title: invalid colour conversion raises error rather than warning
Body: This seems like a bug in colour conversion.
```python
from skimage.color import lab2rgb

lab2rgb([0, 0, 28.0])
```
raises `TypeError: 'numpy.float64' object does not support item assignment`
while:
```python
lab2rgb([0, 0, 27.0])
```
works fine.
This difference is because the former results in a negative z-value in _lab2xyz conversion. The intended behaviour seems to be to assign this to 0 and return the n_invalid count back to the caller for warnings to be raised.
However, the attempt to reassign is performed by indexing on the z value (a numpy.float64 object) which raises the type error in the last line here.
https://github.com/scikit-image/scikit-image/blob/2336565966acdcf2bfd6e6a7322c6f1f477fca8e/skimage/color/colorconv.py#L1173C1-L1182C23
A simple fix is for this last line to read `z = 0` rather than `z[invalid] = 0`.
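For illustration, the scalar-vs-array distinction behind the error (a standalone numpy sketch, not the scikit-image code itself; `np.where` is shown only as one scalar-safe alternative):
```python
import numpy as np

z = np.float64(-0.01)          # a single Lab triple makes z a numpy scalar, not an array
invalid = z < 0
try:
    z[invalid] = 0             # item assignment on a numpy scalar...
except TypeError as e:
    print(e)                   # ...'numpy.float64' object does not support item assignment
z = np.where(invalid, 0.0, z)  # works for both scalars and arrays
```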
Should I submit a pull request for this? | 0easy
|
Title: Typo in docs
Body: **Migrated issue, originally created by Tim Mitchell**
https://alembic.readthedocs.org/en/latest/api/autogenerate.html#fine-grained-autogenerate-generation-with-rewriters
The code sample has
```
from alembic import ops
```
instead of
```
from alembic.operations import ops
```
Also, the second argument to AlterColumnOp should be `op.column.name`, not `op.column_name`.
| 0easy
|
Title: Deterministic evaluation of probabilistic models (e.g. nbeats)
Body: When predicting a time series, one can turn some models probabilistic by defining a likelihood parameter.
E.g.:
    NBEATSModel(
        ...
        likelihood=GaussianLikelihood(),
        random_state=42
    )
If done this way, the prediction is not deterministic any more. Two subsequent calls to
`predict(series=.., n=1, num_samples=1)`
yield different results.
For developing purposes, it would be nice to be able to fix the random state of the likelihood as well so that, independent of the num_samples defined, the result of the prediction always stays the same. | 0easy
|
Title: ActionsBlock elements are not parsed
Body: ### Description
In ActionsBlock `__init__`, elements are simply copied.
[Exact bug place](https://github.com/slackapi/python-slack-sdk/blob/4e4524230ed41ef7cd9d637c52f4e86b1ffedad9/slack/web/classes/blocks.py#L241)
For all other Blocks, similar elements are parsed into BlockElement
[How it parsed for SectionBlock](https://github.com/slackapi/python-slack-sdk/blob/4e4524230ed41ef7cd9d637c52f4e86b1ffedad9/slack/web/classes/blocks.py#L143)
[How it is parsed for ContextBlock](https://github.com/slackapi/python-slack-sdk/blob/4e4524230ed41ef7cd9d637c52f4e86b1ffedad9/slack/web/classes/blocks.py#L269)
To fix this we can handle it the same way as in ContextBlock: replace [this line of code](https://github.com/slackapi/python-slack-sdk/blob/4e4524230ed41ef7cd9d637c52f4e86b1ffedad9/slack/web/classes/blocks.py#L241) with
`self.elements = BlockElement.parse_all(elements)`
The same issue is also present in the current SDK version (3.1.0).
[Link for bug place in 3.1.0](https://github.com/slackapi/python-slack-sdk/blob/5340ee337a2364e84c38d696c107f19c341dd6eb/slack_sdk/models/blocks/blocks.py#L246)
### Reproducible in:
#### The Slack SDK version
slackclient==2.9.3
#### Python runtime version
Python 3.7.0
#### OS info
Ubuntu 20.04.1 LTS
#### Steps to reproduce:
1.Copy example json from slack API docs for ActionsBlock (https://api.slack.com/reference/block-kit/blocks#actions_examples)
2.Create an ActionsBlock from parsed (to dict) json
3. Check the type of elements attribute from created ActionsBlock object
### Expected result:
Elements should be instances of BlockElement, the same as it is for SectionBlock.accessory
### Actual result:
Elements of ActionsBlock is not parsed
| 0easy
|
Title: aiogram\utils\formatting.py (as_section)
Body: ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
Windows 10
### Python version
3.12
### aiogram version
3.4.1
### Expected behavior
aiogram\utils\formatting.py (as_section)
...
`return Text(title, "\n", as_list(*body))`
### Current behavior
aiogram\utils\formatting.py (as_section)
```
def as_section(title: NodeType, *body: NodeType) -> Text:
    """
    Wrap elements as simple section, section has title and body

    :param title:
    :param body:
    :return: Text
    """
    return Text(title, "\n", *body)
```
### Steps to reproduce
Not required
### Code example
_No response_
### Logs
_No response_
### Additional information
It is necessary to use `as_list(*body)` instead of `*body`, because "\n" characters are not added to the end of each body element. | 0easy
|
Title: add documentation for update and delete in the supabase docs
Body:
There is an insert and fetch example in the docs https://supabase.com/docs/reference/python/insert but there is no update or delete example. I think they should be added. | 0easy
|
Title: Add link to the Jupyter in Education map in third-party resources
Body: Here is the link: https://elc.github.io/jupyter-map/ | 0easy
|
Title: feature: concurrent Redis consuming
Body: **Describe the bug**
It seems tasks don't run in parallel
**How to reproduce**
Include source code:
```python
import asyncio

from faststream import FastStream
from faststream.redis import RedisBroker
from pydantic import BaseModel

redis_dsn = 'xxxx'
rb = RedisBroker(redis_dsn)


class User(BaseModel):
    name: str
    age: int = 0


@rb.subscriber(list="users")
async def my_listener(user: User):
    await asyncio.sleep(3)
    print(user, 'from faststream')


async def producer():
    for i in range(10):
        await rb.publish(User(name="Bob", age=i), list="users")


async def main():
    await rb.connect()
    asyncio.create_task(producer())
    app = FastStream(rb)
    await app.run()


if __name__ == '__main__':
    asyncio.run(main())
```
And/Or steps to reproduce the behavior:
run the script above
**Expected behavior**
Tasks should run in parallel.
**Observed behavior**
Tasks run one after another.
| 0easy
|
Title: CHORE: too many warnings when running TPCH
Body:
```python
/Users/codingl2k1/Work/xorbits/python/xorbits/_mars/dataframe/tseries/to_datetime.py:195: UserWarning: The argument 'infer_datetime_format' is deprecated and will be removed in a future version. A strict version of it is now the default, see https://pandas.pydata.org/pdeps/0004-consistent-to-datetime-parsing.html. You can safely remove this argument.
```
```python
/Users/codingl2k1/Work/xorbits/python/xorbits/_mars/dataframe/groupby/aggregation.py:1288: FutureWarning: use_inf_as_na option is deprecated and will be removed in a future version. Convert inf values to NaN before operating instead.
pd.set_option("mode.use_inf_as_na", op.use_inf_as_na)
```
```python
/Users/codingl2k1/Work/xorbits/python/xorbits/_mars/dataframe/groupby/aggregation.py:1298: FutureWarning: use_inf_as_na option is deprecated and will be removed in a future version. Convert inf values to NaN before operating instead.
pd.reset_option("mode.use_inf_as_na")
``` | 0easy
|
Title: [doc] Adding proper documentation for proximal operators
Body: #### Issue
We have several cool proximal operators inside the submodule `tenalg/proximal.py`, which are not documented.
#### Fix
It would be simple and useful, in my opinion, to have a small text in the documentation about it.
I should be able to tackle this in the near future.
| 0easy
|
Title: Explore incorporating softcite/softcite_kb data into Augur for Academic Metrics
Body: **Is your feature request related to a problem? If so, please describe the problem:**
Working in the context of the @chaoss project, we are developing metrics for Academic open source contexts. These include alt metrics related to software, as well as more conventional metrics related to academic publications.
https://github.com/softcite/softcite_kb is a project that could help support this effort.
| 0easy
|
Title: Negative exponent with value -1 (minus one) raises error when loading Doc2Vec model
Body:
#### Problem description
I try to vary the value of the negative exponent parameter. When I use a value of -1, training works fine, saving the model too, but when I try to load the model afterwards with Doc2Vec.load() it raises the error "ValueError: Integers to negative integer powers are not allowed."
This is due to the following line: https://github.com/RaRe-Technologies/gensim/blob/266a01455ade51a93a08dba5950e87b4d98e0724/gensim/models/word2vec.py#L836
Here, numpy refuses to raise an integer to the power of a negative integer.
I guess this could be solved by converting the exponent to a float in this case?
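For illustration, the underlying numpy behavior (not gensim-specific):
```python
import numpy as np

try:
    np.int64(5) ** np.int64(-1)          # integer base, negative integer exponent
except ValueError as e:
    print(e)                             # Integers to negative integer powers are not allowed.

print(np.float64(5) ** np.int64(-1))     # 0.2 -- casting either operand to float avoids the error
```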
| 0easy
|
Title: Implement viewing videos
Body: **Describe the enhancement you'd like**
As a user, I want to be able to watch my videos.
**Describe why this will benefit the LibrePhotos**
Use react-native-vlc-media-player as this should be very stable and reliable.
| 0easy
|
Title: [new] `json_items(json_string)`
Body: Takes a json_string as input, which has flat (non-nested) key-value pairs, and returns an `array<struct<key string, value string>>` | 0easy
|
Title: Discord Webhook integration
Body: Hello,
thanks for healthchecks!
Would it be possible to get [Discord Webhook integration](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks)?
They are simpler to set up than the Discord App integration.
| 0easy
|
Title: AssertionError: OpenAI requires `tool_call_id`
Body: # Description
Hi. I had run into an issue when switching models.
Basically, I have implemented an API endpoint where I can change models.
Here is what happened:
- I started with `gemini-1.5-flash`, asking it what time is now, which would call my `now()` tool.
- It runs without any problem returning the current datetime
- Then I switched to `gpt-4o-mini` and asked the same question again, passing the message history I got after using Gemini
- This causes the following exception: `AssertionError: OpenAI requires `tool_call_id` to be set: ToolCallPart(tool_name='now', args={}, tool_call_id=None, part_kind='tool-call')`
## [Edit] Minimal working example
```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.models.gemini import GeminiModel
from datetime import datetime

open_ai_api_key = ...
gemini_api_key = ...

openai_model = OpenAIModel(
    model_name='gpt-4o-mini',
    api_key=open_ai_api_key,
)
gemini_model = GeminiModel(
    model_name='gemini-2.0-flash-exp',  # could be gemini-1.5-flash also
    api_key=gemini_api_key,
)

agent = Agent(gemini_model)

@agent.tool_plain
def now():
    return datetime.now().isoformat()

r1 = agent.run_sync('what is the current date time?')
print(r1.all_messages_json())

r2 = agent.run_sync(  # this will fail
    'what time is now?',
    model=openai_model,
    message_history=r1.all_messages(),
)
print(r2.all_messages_json())
```
## Message history (stored until call gpt-4o-mini)
```python
[ModelRequest(parts=[SystemPromptPart(content='\nYou are a test agent.\n\nYou must do what the user asks.\n', dynamic_ref=None, part_kind='system-prompt'), UserPromptPart(content='call now', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 5, 628330, tzinfo=TzInfo(UTC)), part_kind='user-prompt')], kind='request'),
ModelResponse(parts=[TextPart(content='I am sorry, I cannot fulfill this request. The available tools do not provide the functionality to make calls.\n', part_kind='text')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 6, 59052, tzinfo=TzInfo(UTC)), kind='response'),
ModelRequest(parts=[UserPromptPart(content='call the tool now', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 14, 394461, tzinfo=TzInfo(UTC)), part_kind='user-prompt')], kind='request'),
ModelResponse(parts=[TextPart(content='I cannot call a tool. The available tools are functions that I can execute, not entities that I can call in a telephone sense. Is there something specific you would like me to do with one of the available tools?\n', part_kind='text')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 15, 449295, tzinfo=TzInfo(UTC)), kind='response'),
ModelRequest(parts=[UserPromptPart(content='what time is now?', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 23, 502937, tzinfo=TzInfo(UTC)), part_kind='user-prompt')], kind='request'),
ModelResponse(parts=[ToolCallPart(tool_name='now', args={}, tool_call_id=None, part_kind='tool-call')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 24, 151395, tzinfo=TzInfo(UTC)), kind='response'),
ModelRequest(parts=[ToolReturnPart(tool_name='now', content='2025-02-11T12:55:24.153651-03:00', tool_call_id=None, timestamp=datetime.datetime(2025, 2, 11, 15, 55, 24, 153796, tzinfo=TzInfo(UTC)), part_kind='tool-return')], kind='request'),
ModelResponse(parts=[TextPart(content='The current time is 2025-02-11 12:55:24 -03:00.\n', part_kind='text')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 24, 560881, tzinfo=TzInfo(UTC)), kind='response')]
```
## Traceback
```
Traceback (most recent call last):
File "/app/agents/_agents/_wrapper.py", line 125, in run_stream
async with self._agent.run_stream(
~~~~~~~~~~~~~~~~~~~~~~^
user_prompt=user_prompt,
^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
deps=self.deps,
^^^^^^^^^^^^^^^
) as result:
^
File "/usr/local/lib/python3.13/contextlib.py", line 214, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/agent.py", line 595, in run_stream
async with node.run_to_result(GraphRunContext(graph_state, graph_deps)) as r:
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/contextlib.py", line 214, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/_agent_graph.py", line 415, in run_to_result
async with ctx.deps.model.request_stream(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
ctx.state.message_history, model_settings, model_request_parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
) as streamed_response:
^
File "/usr/local/lib/python3.13/contextlib.py", line 214, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 160, in request_stream
response = await self._completions_create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
messages, True, cast(OpenAIModelSettings, model_settings or {}), model_request_parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 203, in _completions_create
openai_messages = list(chain(*(self._map_message(m) for m in messages)))
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 267, in _map_message
tool_calls.append(self._map_tool_call(item))
~~~~~~~~~~~~~~~~~~~^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 284, in _map_tool_call
id=_guard_tool_call_id(t=t, model_source='OpenAI'),
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/_utils.py", line 200, in guard_tool_call_id
assert t.tool_call_id is not None, f'{model_source} requires `tool_call_id` to be set: {t}'
^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: OpenAI requires `tool_call_id` to be set: ToolCallPart(tool_name='now', args={}, tool_call_id=None, part_kind='tool-call')
``` | 0easy
|
Title: Prompt the user to specify recursive type depth if they haven't already
Body: ## Problem
Currently, I imagine that most people will just use pseudo-recursive types as that is the default and they don't know that there is a better option. We should try to make this option more prominent.
## Suggested solution
We should output a message indicating that the recursive type depth option should be set in the schema if it is not already, e.g.
Message should be shown when:
```prisma
generator client {
  provider = "prisma-client-py"
}
```
But not when the option is set:
```prisma
generator client {
  provider = "prisma-client-py"
  recursive_type_depth = 5
}
```
Message should look something like:
```
Some types are disabled by default due to being incompatible with Mypy, it is highly recommended
to use Pyright instead and configure Prisma Python to use recursive types to re-enable certain types:
generator client {
  provider = "prisma-client-py"
  recursive_type_depth = -1
}
If you need to use Mypy, you can also disable this message by explicitly setting the default value:
generator client {
  provider = "prisma-client-py"
  recursive_type_depth = 5
}
For more information see: https://prisma-client-py.readthedocs.io/en/stable/reference/limitations/#default-type-limitations
``` | 0easy
|
Title: Change Request Acceptance Ratio metric API
Body: The canonical definition is here: https://chaoss.community/?p=3598 | 0easy
|
Title: module 'asyncio' has no attribute 'run' (In python3.6)
Body: In setup.py, the supported Python version can be 3.6/3.7, but it raised an exception when I ran examples/demo_tcp.py with Python 3.6.5.
```
File "demo_tcp.py", line 54, in <module>
asyncio.run(main())
AttributeError: module 'asyncio' has no attribute 'run'
```
Actually, asyncio.run is a Python 3.7 addition | 0easy
|
Title: [ENH] Expose sparse library functions
Body: XArray supports the [Sparse](https://github.com/pydata/sparse) package but doesn't expose the functions to convert to/from sparse objects. These functions could be nicely packaged in pyjanitor to do so:
```python
import sparse
import xarray as xr  # needed for the xr.DataArray annotations and xr.apply_ufunc


@register_xarray_dataarray_method
def to_scipy_sparse(
    da: xr.DataArray,
) -> xr.DataArray:
    if isinstance(da.data, sparse.COO):
        return xr.apply_ufunc(sparse.COO.to_scipy_sparse, da)
    return da


@register_xarray_dataarray_method
def todense(
    da: xr.DataArray,
) -> xr.DataArray:
    if isinstance(da.data, sparse.COO):
        return xr.apply_ufunc(sparse.COO.todense, da)
    return da


@register_xarray_dataarray_method
def tocsc(
    da: xr.DataArray,
) -> xr.DataArray:
    if isinstance(da.data, sparse.COO):
        return xr.apply_ufunc(sparse.COO.tocsc, da)
    return da


@register_xarray_dataarray_method
def tocsr(
    da: xr.DataArray,
) -> xr.DataArray:
    if isinstance(da.data, sparse.COO):
        return xr.apply_ufunc(sparse.COO.tocsr, da)
    return da


@register_xarray_dataarray_method
def tosparse(
    da: xr.DataArray,
) -> xr.DataArray:
    if isinstance(da.data, sparse.COO):
        return da
    return xr.apply_ufunc(sparse.COO, da)
``` | 0easy
|
Title: [ENH] Warning cleanup
Body: * Good news: The new GHA will reject on flake8 errors, and a bunch are on by default via the cleanup in https://github.com/graphistry/pygraphistry/pull/206 !
* Less good news: the skip list still contains some codes with bigger warning counts:
- [ ] E121
- [ ] E123
- [ ] E128
- [ ] E144
- [ ] E201
- [ ] E202
- [ ] E203
- [ ] E231
- [ ] E251
- [ ] E265
- [ ] E301
- [ ] E302
- [ ] E303
- [ ] E401
- [ ] E501
- [ ] E722
- [ ] F401
- [ ] W291
- [ ] W293 | 0easy
|
Title: tsignals generating wrong signals with more than 2 indicators in strategy
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```
pandas-ta=0.2.75b
```
**Describe the bug**
The tsignals indicator is giving a few wrong trade entries/exits when using multiple indicators. I've tried MACD with two SMAs, and the results vary from what the chart shows.
**To Reproduce**
```python
# load the attached csv file (it has a close column)
dump_df  # dataframe with the strategy applied
cnd = (dump_df['MACD_13_21_8'] >= dump_df['MACDs_13_21_8']) & (dump_df['close'] >= dump_df['SMA_13']) & (dump_df['close'] >= dump_df['SMA_21'])
dump_df.ta.tsignals(trend=cnd, append=True)
```
**Expected behavior**
The column generated through np.where in the attached sheet has the correct trades. tsignals should produce the same values.
```
e.g.: since it's an AND condition, the final signal (s) should only be valid if all the indicator signals agree
s = (s_1 & s_2 & s_3)
```
**Additional context**
Note: the problem occurs when more than 2 indicators are used in the strategy. I've generated the actual signals through np.where with the condition below, producing column s, which is derived from s_0, s_1, s_2. Columns s_0, s_1, s_2 are the signals for each indicator respectively, and this gives the expected result.
```python
dump_df['signal'] = np.where((dump_df['s_1'].astype(int) == dump_df['s_0'].astype(int)) & (dump_df['s_2'].astype(int) == dump_df['s_1'].astype(int)), dump_df['s_2'], 0)
```
```
S* = 0 (No Trade)
S* = 1 (Buy Trade) #in tsignal terminology entry
S* = -1 (Short Trade) #in tsignal terminology exit
```
Thanks in advance !!
[test-signal.xlsx](https://github.com/twopirllc/pandas-ta/files/6526970/test-signal.xlsx)
| 0easy
|
Title: Add interactivity patterns in SocketModeClient document
Body: Currently, https://slack.dev/python-slack-sdk/socket-mode/index.html has only an Events API example. We should add interactivity patterns (e.g., shortcuts, modal submissions, button clicks, etc.) to the page.
---
@tjstum Thanks for your prompt reply here!
>I might suggest adding something to the docs of SocketModeRequest (and the example usages) mentioning that the WebhookClient/AsyncWebhookClient can be used to work with the response_url provided in the payload.
This can be helpful for other developers too! Perhaps, rather than docstrings, updating [this documentation page](https://slack.dev/python-slack-sdk/socket-mode/index.html) to have an interactivity payload pattern with `response_url` usage would be a good way to improve the visibility of functionalities.
>Maybe also exposing the response_url as a property (again, to help promote visibility).
Thanks for the suggestion but we are not planning to have the property. The class represents the whole message structure as-is. No modification and addition are intended. I would recommend transforming `SocketModeRequest` data to your own class with utility methods (like we do in Bolt, which is equivalent to your app framework).
I'm thinking to create a new issue for the document improvement and then close this issue. Is that fine with you? Thanks again for your feedback and inputs here.
_Originally posted by @seratch in https://github.com/slackapi/python-slack-sdk/issues/1075#issuecomment-894284214_ | 0easy
|
Title: Investigate off-by-1 in `scrapy.cmdline._pop_command_name()`
Body: It looks like `del argv[i]` removes the wrong item in `scrapy.cmdline._pop_command_name()` but as we don't seem to see any problems because of this it's worth investigating what exactly happens here and either fixing or refactoring the code. | 0easy
|
Title: LiteLLM <> Logfire critical error
Body: ### Description
When using logfire with LiteLLM I get a weird error that crashes the logfire span.
```
import logfire
from litellm import completion

logfire.configure()

with logfire.span("litellm-test") as span:
    response = completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Recommend a fantasy book"}],
    )
    span.set_attribute("response_data", response)

print(response.choices[0].message.content)
```
and it gives :
```
Internal error in Logfire
Traceback (most recent call last):
File "/home/user/Sources/callisto/backend/nkai/playground/test.py", line 21, in <module>
span.set_attribute("response_data", response)
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/main.py", line 1691, in set_attribute
self._json_schema_properties[key] = create_json_schema(value, set())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 135, in create_json_schema
return schema(obj, seen) if callable(schema) else schema
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 283, in _pydantic_model_schema
return _custom_object_schema(obj, 'PydanticModel', [*fields, *extra], seen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 356, in _custom_object_schema
**_properties({key: getattr(obj, key) for key in keys}, seen),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 342, in _properties
if (value_schema := create_json_schema(value, seen)) not in PLAIN_SCHEMAS:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 117, in create_json_schema
return _array_schema(obj, seen)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 246, in _array_schema
item_schema = create_json_schema(item, seen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 135, in create_json_schema
return schema(obj, seen) if callable(schema) else schema
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 283, in _pydantic_model_schema
return _custom_object_schema(obj, 'PydanticModel', [*fields, *extra], seen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 356, in _custom_object_schema
**_properties({key: getattr(obj, key) for key in keys}, seen),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 342, in _properties
if (value_schema := create_json_schema(value, seen)) not in PLAIN_SCHEMAS:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 135, in create_json_schema
return schema(obj, seen) if callable(schema) else schema
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 283, in _pydantic_model_schema
return _custom_object_schema(obj, 'PydanticModel', [*fields, *extra], seen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 356, in _custom_object_schema
**_properties({key: getattr(obj, key) for key in keys}, seen),
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/pydantic/main.py", line 856, in __getattr__
raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
AttributeError: 'Message' object has no attribute 'audio'
```
I tested this using LiteLLM 1.51, 1.52, 1.50 and logfire 0.53, 1.x and 2.x
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="1.3.2"
platform="Linux-6.8.0-45-generic-x86_64-with-glibc2.39"
python="3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:50:58) [GCC 12.3.0]"
[related_packages]
requests="2.32.3"
pydantic="2.9.2"
fastapi="0.111.1"
openai="1.54.1"
protobuf="4.25.5"
rich="13.9.4"
executing="2.1.0"
opentelemetry-api="1.27.0"
opentelemetry-exporter-otlp-proto-common="1.27.0"
opentelemetry-exporter-otlp-proto-http="1.27.0"
opentelemetry-instrumentation="0.48b0"
opentelemetry-instrumentation-asgi="0.48b0"
opentelemetry-instrumentation-celery="0.48b0"
opentelemetry-instrumentation-fastapi="0.48b0"
opentelemetry-proto="1.27.0"
opentelemetry-sdk="1.27.0"
opentelemetry-semantic-conventions="0.48b0"
opentelemetry-util-http="0.48b0"
```
| 0easy
|
Title: ENH: Improve error message for attempting to write multiple geometry columns using `to_file`
Body: xref #2565
```python
In [16]: gdf = gpd.GeoDataFrame({"a":[1,2]}, geometry=[Point(1,1), Point(1,2)])
In [17]: gdf['geom2'] = gdf.geometry
In [18]: gdf.to_file("tmp.gpkg")
-----------------------------------
File ...\venv\lib\site-packages\geopandas\io\file.py:596, in infer_schema.<locals>.convert_type(column, in_type)
594 out_type = types[str(in_type)]
595 else:
--> 596 out_type = type(np.zeros(1, in_type).item()).__name__
597 if out_type == "long":
598 out_type = "int"
TypeError: Cannot interpret '<geopandas.array.GeometryDtype object at 0x0000025FA88908E0>' as a data type
```
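For illustration, the kind of up-front check that could produce a clearer message (a sketch, not GeoPandas' actual implementation; the wording is illustrative):
```python
import geopandas as gpd

def check_single_geometry_column(gdf: gpd.GeoDataFrame) -> None:
    geom_cols = list(gdf.columns[gdf.dtypes == "geometry"])
    if len(geom_cols) > 1:
        raise ValueError(
            f"GeoDataFrame contains multiple geometry columns {geom_cols}, but "
            "to_file supports only one. Drop or convert the extra columns "
            "(e.g. to WKT) before writing."
        )
```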
| 0easy
|
Title: Add .ruff_cache to .gitignore and .dockerignore
Body: Now that we've switched to ruff we should make sure to ignore the ruff cache files and such in our builds (e.g. `.gitignore` and `.dockerignore`) | 0easy
|
Title: Add tests for checking color of spinner in stdout
Body:
## Description
Currently tests run on cleaned `stdout` with ANSI escapes stripped. The actual color in the output is not tested. This issue aims to track and fix that.
See https://github.com/manrajgrover/halo/pull/78#pullrequestreview-155713163 for more information.
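For illustration, the kind of assertion such a test could make on the raw (uncleaned) output; the helper name and color code are illustrative, not halo test utilities:
```python
import re

GREEN = "\x1b[32m"   # ANSI SGR code for a green foreground

def assert_colored(raw_stdout: str, color_code: str = GREEN) -> None:
    """Check that the raw stdout still contains the expected ANSI color escape."""
    assert re.search(r"\x1b\[\d+m", raw_stdout), "output contains no ANSI escapes at all"
    assert color_code in raw_stdout, "expected color code missing from output"
```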
## People to notify
@manrajgrover
| 0easy
|
Title: https://github.com/citruspi/Flask-Analytics
Body: https://github.com/citruspi/Flask-Analytics
| 0easy
|
Title: Add deployment instructions to output
Body: Let me preface this by saying I am not a developer, but I have a decent level of technical understanding and understand how code works.
In my instructions for the code I was generating, I said that I wanted step by step instructions for deploying the application on a server. In my case, this is a Telegram bot. It added that request to the specification, but did not provide the instructions in the output.
It might be good to have an instruction available to the user that does not generate code, but allows them to request instructions for installation or some other component of the app delivery. | 0easy
|
Title: Write tests for __main__.py
Body: As we can see on CodeClimate (https://codeclimate.com/github/python-security/pyt/coverage/5935971dbf92ed000102998b), there is pretty low test coverage of main. I understand why this is, but adding some tests for it would increase our test coverage percentage, and 75% isn't satisfying.
If you have any trouble with this I can help. I am going to label this issue as Easy so newcomers see it. | 0easy
|
Title: When mutation takes a list as an argument - passing in a list of the type does not work
Body: I have an input type that takes a list as an argument. I haven't seen an example of this use case so I just guessed that I could pass in a python list of the right type. But I get the following error:
`AttributeError: 'MyInput' object has no attribute 'items'`
And, in fact, it is expecting a dict. Here's an edited version of the code I use to run the mutation:
    list_arg = [
        gql.MyInput({"key1": "val1"}),
        gql.MyInput({"key1": "val2"}),
    ]
    op.my_mutation(my_list=list_arg)
I'm assuming that passing a simple list into the argument is not the right way to go about it, but I'm not sure how to construct the list otherwise.
Thoughts? | 0easy
|
Title: Document return codes in `--help`
Body: I regularly use the `robot` and `rebot` scripts, notably in CI. When reading their `--help`, they give no clear indication of what their `exit_code`/`return_status` will be.
They both have a `--nostatusrc` :
> Sets the return code to zero regardless are there failures. Error codes are returned normally.
It's not clear to me what it means that "Error codes are returned normally" when "Sets the return code to zero". I think I don't know the nuance between "error code" and "return code".
And if I am not setting this option, what should be the normal error codes ?
I have found [3.1.3 Test results / Return codes](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#return-codes) in the UserGuide, which states the expected results codes for `robot` :
* 0 = all pass
* 1-249 = corresponding number of failing tests
* 250 = more than 250 tests have failed
* 251 = help
* 252 = bad command
* 253 = ctrl+C
* 254 = unused?
* 255 = unexpected internal error
Could a summary of that be included in the Robot help message ? [robot/run.py#L46](https://github.com/robotframework/robotframework/blob/master/src/robot/run.py#L46)
I can open a PR if you are interested.
Also, the same applies to `rebot`. There is *note* in [3.1.3 Test results / Return codes](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#return-codes) that indicates that it is the same for `rebot`, so could it be added to its help message too ? [robot/rebot.py#L46](https://github.com/robotframework/robotframework/blob/master/src/robot/rebot.py#L46) | 0easy
|
Title: [k8s] Unexpected error when relaunching an INIT cluster on k8s which failed due to capacity error
Body: To reproduce:
1. Launch a managed job with the controller on k8s with the following `~/.sky/config.yaml`
```yaml
jobs:
  controller:
    resources:
      cpus: 2
      cloud: kubernetes
```
```console
$ sky jobs launch test.yaml --cloud aws --cpus 2 -n test-mount-bucket
Task from YAML spec: test.yaml
Managed job 'test-mount-bucket' will be launched on (estimated):
Considered resources (1 node):
----------------------------------------------------------------------------------------
CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
----------------------------------------------------------------------------------------
AWS m6i.large 2 8 - us-east-1 0.10 ✔
----------------------------------------------------------------------------------------
Launching a managed job 'test-mount-bucket'. Proceed? [Y/n]:
⚙︎ Translating workdir and file_mounts with local source paths to SkyPilot Storage...
Workdir: 'examples' -> storage: 'skypilot-filemounts-vscode-904d206c'.
Folder : 'examples' -> storage: 'skypilot-filemounts-vscode-904d206c'.
Created S3 bucket 'skypilot-filemounts-vscode-904d206c' in us-east-1
Excluded files to sync to cluster based on .gitignore.
✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-904d206c/ View logs at: ~/sky_logs/sky-2025-01-30-23-19-02-003572/storage_sync.log
Excluded files to sync to cluster based on .gitignore.
✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-904d206c/ View logs at: ~/sky_logs/sky-2025-01-30-23-19-09-895566/storage_sync.log
✓ Uploaded local files/folders.
Launching managed job 'test-mount-bucket' from jobs controller...
Warning: Credentials used for [GCP, AWS] may expire. Clusters may be leaked if the credentials expire while jobs are running. It is recommended to use credentials that never expire or a service account.
⚙︎ Launching managed jobs controller on Kubernetes.
W 01-30 23:19:33 instance.py:863] run_instances: Error occurred when creating pods: sky.provision.kubernetes.config.KubernetesError: Insufficient memory capacity on the cluster. Required resources (cpu=4, memory=34359738368) were not found in a single node. Other SkyPilot tasks or pods may be using resources. Check resource usage by running `kubectl describe nodes`. Full error: 0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
sky.provision.kubernetes.config.KubernetesError: Insufficient memory capacity on the cluster. Required resources (cpu=4, memory=34359738368) were not found in a single node. Other SkyPilot tasks or pods may be using resources. Check resource usage by running `kubectl describe nodes`.
Full error: 0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
During handling of the above exception, another exception occurred:
NotImplementedError
The above exception was the direct cause of the following exception:
sky.provision.common.StopFailoverError: During provisioner's failover, stopping 'sky-jobs-controller-11d9a692' failed. We cannot stop the resources launched, as it is not supported by Kubernetes. Please try launching the cluster again, or terminate it with: sky down sky-jobs-controller-11d9a692
```
2. Launch again:
```console
$ sky jobs launch test.yaml --cloud aws --cpus 2 -n test-mount-bucket
Task from YAML spec: test.yaml
Managed job 'test-mount-bucket' will be launched on (estimated):
Considered resources (1 node):
----------------------------------------------------------------------------------------
CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
----------------------------------------------------------------------------------------
AWS m6i.large 2 8 - us-east-1 0.10 ✔
----------------------------------------------------------------------------------------
Launching a managed job 'test-mount-bucket'. Proceed? [Y/n]:
⚙︎ Translating workdir and file_mounts with local source paths to SkyPilot Storage...
Workdir: 'examples' -> storage: 'skypilot-filemounts-vscode-b7ba6a41'.
Folder : 'examples' -> storage: 'skypilot-filemounts-vscode-b7ba6a41'.
Created S3 bucket 'skypilot-filemounts-vscode-b7ba6a41' in us-east-1
Excluded files to sync to cluster based on .gitignore.
✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-b7ba6a41/ View logs at: ~/sky_logs/sky-2025-01-30-23-20-51-067815/storage_sync.log
Excluded files to sync to cluster based on .gitignore.
✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-b7ba6a41/ View logs at: ~/sky_logs/sky-2025-01-30-23-20-58-164407/storage_sync.log
✓ Uploaded local files/folders.
Launching managed job 'test-mount-bucket' from jobs controller...
Warning: Credentials used for [AWS, GCP] may expire. Clusters may be leaked if the credentials expire while jobs are running. It is recommended to use credentials that never expire or a service account.
Cluster 'sky-jobs-controller-11d9a692' (status: INIT) was previously in Kubernetes (gke_sky-dev-465_us-central1-c_skypilotalpha). Restarting.
⚙︎ Launching managed jobs controller on Kubernetes.
⨯ Failed to set up SkyPilot runtime on cluster. View logs at: ~/sky_logs/sky-2025-01-30-23-21-05-243052/provision.log
AssertionError: cpu_request should not be None
``` | 0easy
|
Title: Unpin PyQT version for CI builds
Body: Undo #229 once ContinuumIO/anaconda-issues#1068 has been fixed.
Once the bug in conda has been fixed we should stop pinning the version of pyqt that we use in the CI builds.
| 0easy
|
Title: allow legend to be computed from group kwarg
Body: Currently, legends are set explicitly by passing a list to the `plot` function. However, they could also be set automatically by passing a `group` list to the `legend` kwarg
e.g.
`hyp.plot(data, group=labels, legend=labels)` | 0easy
|
Title: Stop using master/slave terminology
Body: Can xdist please stop using the master/slave terminology? Would you accept a PR changing "slave" to "worker"? The terminology is upsetting to some people (and I've already had complaints where I work) and there is no reason to use it when other words exist that do not offend people. Words do matter. | 0easy
|
Title: Contributors metric API
Body: The canonical definition is here: https://chaoss.community/?p=3464 | 0easy
|
Title: How to correctly get the homepage link of my own account?
Body: Started fetching favorites data
Failed to fetch the account summary
self: failed to fetch account information
Program run took 0 minutes 3 seconds
Exited batch download of favorited works (Douyin) mode
Please select a collection function: | 0easy
|
Title: Can't copy paste content from the doc itself
Body: ## Bug Report
**Problematic behavior**
I can't copy paste content from the doc I'm working on.
When I copy paste from outside the docs it works
**Steps to Reproduce**
1. Select content on the doc
2. Ctrl + C
3. Ctrl + V
4. Nothing happens
**Environment**
docs.numerique.gouv.fr on firefox
- Impress version:
- Platform:
| 0easy
|
Title: Update installation instructions when sklearn 0.18 is released
Body: | 0easy
|
Title: Add a healthcheck endpoint
Body: Add a simple healthcheck endpoint, likely something like `/healthcheck`. It should do a simple `SELECT 1` on the database and a simple `get_config()` call to validate that everything is working, and then return a 200 with "OK". On any failure it should return 500.
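For illustration, a framework-agnostic sketch of the described behavior (the callables stand in for the real DB session and `get_config()`; the config key is illustrative):
```python
from typing import Callable

def healthcheck(db_execute: Callable[[str], object], get_config: Callable[[str], object]):
    """Return ("OK", 200) when the DB and config are reachable, ("ERROR", 500) otherwise."""
    try:
        db_execute("SELECT 1")
        get_config("setup")   # any cheap config read works; the key name is illustrative
    except Exception:
        return "ERROR", 500
    return "OK", 200
```
| 0easy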
|
Title: Update Katib Experiment Workflow Guide
Body: ### What you would like to be added?
We should move this sequence diagram to [the Kubeflow Katib reference](https://www.kubeflow.org/docs/components/katib/reference/) guides: https://github.com/kubeflow/katib/blob/master/docs/workflow-design.md#what-happens-after-an-experiment-cr-is-created.
After that, we should remove this doc from the Katib repository.
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | 0easy
|
Title: Elder Force Index
Body: Hi,
For the EFI, according to your formula here, you take the difference (current close price - prior close price) * volume and then take the exponentially weighted average of it. I may have the wrong understanding of it, but according to Investopedia and thinkorswim, the formula is current close price - prior close price * VFI(13) [a.k.a. the 13-period EMA of the force index]
Investopedia

Thinkorswim

| 0easy
|
Title: Enhancement: Add support for "mimesis" as an alternative to faker
Body: Built-in support for [`mimesis`](https://github.com/lk-geimfari/mimesis/stargazers) would be nice.
The main motivation is that many of the faker methods, such as `get_faker().latitude()`, aren't typed and result in `Any` types :( | 0easy
|
Title: link from videos section to our youtube channel
Body: link to our youtube channel (add link at the top)
https://docs.ploomber.io/en/latest/videos.html
```
more videos available in our youtube channel
```
also, if there's a way to add a link that subscribes people, that's great | 0easy
|
Title: BDD prefixes with same beginning are not handled properly
Body: **Description**:
When defining custom language prefixes for BDD steps in the Robot Framework (for example, in a French language module), I observed that adding multiple prefixes with overlapping substrings (e.g., "Sachant que" and "Sachant") leads to intermittent failures in matching the intended BDD step.
**Steps to Reproduce:**
Create a custom language (e.g., a French language class) inheriting from Language.
Define given_prefixes with overlapping entries such as:
`given_prefixes = ['Étant donné', 'Soit', 'Sachant que', 'Sachant']`
Run tests that rely on detecting BDD step prefixes.
Occasionally, tests fail with errors indicating that the keyword "Sachant que" cannot be found, even though it is defined.
**Expected Behavior:**
The regex built from the BDD prefixes should consistently match the longer, more specific prefix ("Sachant que") instead of prematurely matching the shorter prefix ("Sachant").
**Observed Behavior:**
Due to the unordered nature of Python sets, the bdd_prefixes property (which is constructed using a set) produces a regex with the alternatives in an arbitrary order. This sometimes results in the shorter prefix ("Sachant") matching before the longer one ("Sachant que"), causing intermittent failures.
**Potential Impact:**
Unpredictable test failures when multiple BDD prefixes with overlapping substrings are used.
Difficulty in debugging issues due to the non-deterministic nature of the problem.
**Suggested Fix:**
Sort the BDD prefixes by descending length before constructing the regular expression. For example:
```
@property
def bdd_prefix_regexp(self):
    if not self._bdd_prefix_regexp:
        # Sort prefixes by descending length so that the longest ones are matched first
        prefixes = sorted(self.bdd_prefixes, key=len, reverse=True)
        pattern = '|'.join(prefix.replace(' ', r'\s') for prefix in prefixes).lower()
        self._bdd_prefix_regexp = re.compile(rf'({pattern})\s', re.IGNORECASE)
    return self._bdd_prefix_regexp
```
This change would ensure that longer, more specific prefixes are matched before their shorter substrings, eliminating the intermittent failure.
**Environment:**
Robot Framework version: 7.1.1
Python version: 3.11.5
**Additional Notes:**
This bug appears to be caused by the unordered nature of sets in Python, which is used in the bdd_prefixes property to combine all prefixes. The issue might not manifest when there are only three prefixes or when there is no overlapping substring scenario.
I hope this detailed report helps in reproducing and resolving the issue. | 0easy
|
Title: Switch from pytorch_lightning to lightning
Body: We should change all the imports from `pytorch_lightning` to `lightning`
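For a single module, the naive substitution would look roughly like this (assuming the `lightning` 2.x namespace):
```python
# Before
import pytorch_lightning as pl

# After -- lightning >= 2.0 exposes the same Trainer/LightningModule API under lightning.pytorch
import lightning.pytorch as pl
```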
However, naively substituting all the occurrences does not work -- unknown reasons atm. | 0easy
|
Title: Manually wrap output lines in HOWTO files
Body: cf https://github.com/nltk/nltk/pull/2856#issuecomment-945193387 | 0easy
|
Title: Maintaining control identifiers in centralized location
Body: When a desktop application is very large and has 10k identifiers, how should one maintain and access them from a centralized location (file) in Python?
Please give suggestions.
Earlier we were using QTP for automation, which has its own object repository utility. For pywinauto, how can we create a similar utility, or how can we organize objects in one file and access them efficiently in a script? | 0easy
|
Title: [Proposal] Inclusion of ActionRepeat wrapper
Body: ### Proposal
I would like to propose an `ActionRepeat` wrapper that would allow the wrapped environment to repeat `step()` for a specified number of times.
### Motivation
I am working on implementing models like PlaNet and Dreamer, and I'm working with MuJoCo environments mostly. In these implementations, there is almost always a term like `action_repeat`. I think the proposed wrapper would simplify this line of implementation.
### Pitch
Assuming that the overridden `step()` is called at time-step `t`, it would return the following
> `observation[t + n_repeat], sum(reward[t: t + n_repeat + 1]), terminated, truncated, info[t + n_repeat]`
That means, the overridden `step()` would call the parent `step()` at least once and it would assert that `n_repeat` is positive (>=0).
If `terminal` or `truncation` is reached within the action repetition, the loop would be exited, and the reward would be summed up to that point, while the observation and info from that time step would be returned.
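A minimal sketch of what I have in mind (names and defaults are placeholders, not a final design):
```python
import gymnasium as gym


class ActionRepeat(gym.Wrapper):
    """Repeat the given action for `n_repeat` consecutive environment steps."""

    def __init__(self, env: gym.Env, n_repeat: int):
        super().__init__(env)
        assert n_repeat >= 1, "n_repeat must be a positive integer"
        self.n_repeat = n_repeat

    def step(self, action):
        total_reward = 0.0
        for _ in range(self.n_repeat):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:
                # Exit early; rewards are summed only up to this point.
                break
        return obs, total_reward, terminated, truncated, info
```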
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| 0easy
|
Title: [QUESTION] Croston and add_encoders
Body: Hi,
I can't get the add_encoders functionality (for Croston) to work, see below

temporal_cv_lf basically just calls:
```python
m = Croston(**params)
m.historical_forecasts(series=series,
                       forecast_horizon=forecast_horizon,
                       last_points_only=False,
                       overlap_end=False,
                       stride=1,
                       retrain=True)
```
Any help would be appreciated
| 0easy
|
Title: Migrate from Poetry to uv package manager
Body: **Description**
We should consider migrating our dependency management from Poetry to uv, which is a new extremely fast Python package installer and resolver written in Rust.
**Benefits of migration:**
- Significantly faster package installation (up to 10-100x faster than pip)
- Built-in compile cache for faster repeated installations
- Reliable dependency resolution
- Native support for all standard Python package formats
- Compatible with existing `pyproject.toml` configurations
- Lower memory usage compared to Poetry
**Required Changes:**
1. Remove Poetry-specific configurations while keeping the essential `pyproject.toml` metadata
2. Update CI/CD pipelines to use uv instead of Poetry
3. Update development setup instructions in documentation
4. Ensure all development scripts/tools are compatible with uv
5. Update the project's virtual environment handling
**Notes for Implementation:**
- uv can read dependencies directly from `pyproject.toml`
- The migration should be backward compatible until fully tested
- We should provide clear migration instructions for contributors
- Consider adding both Poetry and uv support during a transition period
**Resources:**
- uv Documentation: https://docs.astral.sh/uv/
- Migration Guide: https://docs.astral.sh/uv/migration/
**Testing Requirements:**
- Verify all dependencies are correctly resolved
- Ensure development workflow remains smooth
- Test CI/CD pipeline with new setup
- Verify package installation in clean environments | 0easy
|
Title: Apple Silicon support
Body: ```
docker run -p 8880:8880 ghcr.io/remsky/kokoro-fastapi-cpu:v0.1.0post1 # CPU
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
<jemalloc>: (This is the expected behaviour if you are running under QEMU)
``` | 0easy
|
Title: HTTP 500 on get permissions (ValueError)
Body: **Steps to reproduce**
_docker run -p 8888:8888 kinto/kinto-server_
Running kinto 14.0.1.dev0.
Request
```
GET /v1/permissions?_since=6148&_token= HTTP/1.1
Host: 127.0.0.1:8888
```
Response
```
{
"code": 500,
"errno": 999,
"error": "Internal Server Error",
"message": "A programmatic error occured, developers have been informed.",
"info": "https://github.com/Kinto/kinto/issues/"
}
```
Log:
```
"GET /v1/permissions?_since=6148&_token=" ? (? ms) not enough values to unpack (expected 3, got 2) errno=999
File "/app/kinto/core/events.py", line 157, in tween
File "/usr/local/lib/python3.7/site-packages/pyramid/router.py", line 148, in handle_request
registry, request, context, context_iface, view_name
File "/usr/local/lib/python3.7/site-packages/pyramid/view.py", line 683, in _call_view
response = view_callable(context, request)
File "/usr/local/lib/python3.7/site-packages/pyramid/config/views.py", line 169, in __call__
return view(context, request)
File "/usr/local/lib/python3.7/site-packages/pyramid/config/views.py", line 188, in attr_view
File "/usr/local/lib/python3.7/site-packages/pyramid/config/views.py", line 214, in predicate_wrapper
File "/usr/local/lib/python3.7/site-packages/pyramid/viewderivers.py", line 325, in secured_view
File "/usr/local/lib/python3.7/site-packages/pyramid/viewderivers.py", line 436, in rendered_view
result = view(context, request)
File "/usr/local/lib/python3.7/site-packages/pyramid/viewderivers.py", line 144, in _requestonly_view
response = view(request)
File "/usr/local/lib/python3.7/site-packages/cornice/service.py", line 590, in wrapper
response = view_()
File "/app/kinto/core/resource/__init__.py", line 350, in plural_get
return self._plural_get(False)
File "/app/kinto/core/resource/__init__.py", line 393, in _plural_get
include_deleted=include_deleted,
File "/app/kinto/views/permissions.py", line 77, in get_objects
parent_id=parent_id,
File "/app/kinto/views/permissions.py", line 109, in _get_objects
perms_by_object_uri = backend.get_accessible_objects(principals)
File "/app/kinto/core/decorators.py", line 45, in decorated
result = method(self, *args, **kwargs)
File "/app/kinto/core/permission/memory.py", line 101, in get_accessible_objects
_, object_id, permission = key.split(":", 2)
ValueError: not enough values to unpack (expected 3, got 2)
"GET /v1/permissions?_since=6148&_token=" 500 (4 ms) agent=python-requests/2.24.0 authn_type=account errno=999 time=2020-12-21T11:49:07.494000 uid=admin
``` | 0easy
|
Title: Do not split digest emails, summarize instead
Body: ### Describe the problem
This morning I woke up to 79 emails from Weblate in my inbox. All sent within the span of two minutes, all notifying about a single project (Organic Maps).
I checked my preferences. They correctly say that I want to receive digests, not one email per message.
However, it seems that even though I have set my preferences like this, if there are more than a certain number of notifications of the same type (a hundred?), the notification system will split them into multiple digests and send them as multiple emails. This seems rather suboptimal.
### Describe the solution you would like
Instead of sending dozens of emails, send just one, but add a note mentioning that "there are more changes than we have included here". Getting dozens of emails about new strings/suggestions/etc. isn't any more helpful than getting just one email. It's counter-productive even.
### Describe alternatives you have considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_ | 0easy
|
Title: Listeners do not get source information for keywords executed with `Run Keyword`
Body: This was initially reported as a regression (#4599), but it seems source information has never been properly sent to listeners for keywords executed with `Run Keyword` or its variants like `Run Keyword If`. | 0easy
|
Title: ENH: Add keyword add_lines in method plot_grid in gridmapdisplay
Body: On occasion you would like to have the grid lines but not the coast maps in the plot. Separating the two would be useful (see https://github.com/MeteoSwiss/pyart/blob/dev/pyart/graph/gridmapdisplay.py) | 0easy
|
Title: `default.faiss` not checked for existence before using
Body: jupyter-ai seems not to check if `default.faiss` exists before using it through `faiss`
```
[E 2024-08-03 17:49:43.118 AiExtension] Could not load vector index from disk. Full exception details printed below.
[E 2024-08-03 17:49:43.118 AiExtension] Error in faiss::FileIOReader::FileIOReader(const char*) at /project/faiss/faiss/impl/io.cpp:67: Error: 'f' failed: could not open /p/home/user1/cluster1/.local/share/jupyter/jupyter_ai/indices/default.faiss for reading: No such file or directory
```
and it also does not regenerate it if the file is missing.
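A guard along these lines would avoid the hard failure and rebuild the index instead (sketch only; the path and the placeholder embedding dimension are made up, and jupyter-ai's actual loading code goes through its own wrappers):
```python
from pathlib import Path

import faiss

index_path = Path.home() / ".local/share/jupyter/jupyter_ai/indices/default.faiss"

if index_path.exists():
    index = faiss.read_index(str(index_path))
else:
    # Fall back to building a fresh index instead of erroring out.
    index_path.parent.mkdir(parents=True, exist_ok=True)
    index = faiss.IndexFlatL2(768)  # 768 is a placeholder dimension
    faiss.write_index(index, str(index_path))
```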
(I am using jupyter-ai 2.19.1 in combination with JupyterLab 4.2.1)
| 0easy
|
Title: Process: Make warning about processes hanging if output buffers get full more visible
Body: Related to #3661,
Before finding the related issue above, my binary hung when using the `Run Process` keyword. It consumed a lot of my time investigating whether the Robot test had found a bug or not. I found out that it is a limitation, or weird behavior, of RF based on the related issue. It could really help other developers to note this limitation, to avoid misunderstanding the test, and to avoid unnecessary investigation. Thank you in advance.
RobotFramework Version: 6.1.1
Actions to resolve the issue:
- [ ] Add this limitation or behavior in the Process Library Documentation
- [ ] Fix the bug ( if possible ) | 0easy
|
Title: Improve falcon-print-routes tool
Body: `falcon-print-routes` is a simple (albeit somewhat spartan) tool that may come in handy to list API routes.
We could polish it, and expose more widely:
* Advertise it in the documentation (currently it is only mentioned in the `1.1.0` changelog)
* Make it aware of route suffixes, a new feature in 2.0
* Expose interface to be able to list routes programmatically (suggested by @CaselIT )
* Make it clear that the tool only works with the standard `CompiledRouter` (suggested by @CaselIT )
* Anything else? (suggestions welcome!) | 0easy
|
Title: Hi, can we support openrouter? openrouter has many visual models that can be used to test specific models on tasks of varying difficulty. Please consider it
Body: | 0easy
|
Title: Parsing model: Move `type` and `tokens` from `_fields` to `_attributes`
Body: Our parsing model is based on Python's [ast](https://docs.python.org/3/library/ast.html). The `Statement` base class currently has `type` and `token` listed in its `_fields`. According to the [documentation](https://docs.python.org/3/library/ast.html#ast.AST._fields), `_fields` should contain names of the child nodes and neither `type` nor `token` is such a node. A better place to include them is `_attributes` (which doesn't seem to be documented) that we already use for things like `lineno` and `source`.
In addition to not using `_fields` for semantically wrong purposes, this change seems to increase the performance of visiting the model. The [generic_visit](https://docs.python.org/3/library/ast.html#ast.NodeVisitor.generic_visit) goes through child nodes listed in `_fields` and thus traverses all tokens unnecessarily. This most likely won't have a big impact in data parsing with Robot itself, but external tools using the parser model can benefit more. See the PR #4911 for more information and other related performance enhancements. This change was also initially part of that PR, but because there's a small behavior change a separate issue was created for it.
This change affects logging/debugging a bit. When using [ast.dump](https://docs.python.org/3/library/ast.html#ast.dump) with default arguments, it shows all fields but doesn't show attributes. Thus the type and the tokens are nowadays shown with statements, but they won't be included anymore in the future. It is, however, possible to use `include_attributes=True` to get attributes listed. Not showing tokens by default is actually mostly a good thing with bigger and more complicated models.
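For example, something like this (with `example.robot` standing in for any existing suite file) still shows the attributes when they are requested explicitly:
```python
import ast

from robot.api import get_model

model = get_model("example.robot")

for node in ast.walk(model):
    # lineno (and, after this change, type/tokens) are attributes,
    # so they are only shown with include_attributes=True.
    print(ast.dump(node, include_attributes=True))
```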
This change also affects anyone who inspects `_fields` or `_attributes`. I don't see why anyone would do that, but it's anyway a good idea to give this issue the `backwards incompatible` label and mention the change in the release notes. | 0easy
|
Title: Could an "Open Alipay" button be added for mobile payments?
Body: After enabling Alipay face-to-face payment (当面付), I only then realized that on mobile you just need to tap the payment QR code to wake up the app. Could an obvious button, such as "Open Alipay", be added to open the app for payment and prompt waking up the mobile client? Or could the mobile page automatically invoke Alipay payment? Also, when there is only one payment method, could that method be selected by default, so there is no need to click the single available option? | 0easy
|
Title: Add polynomial_kernel Function
Body: The polynomial kernel converts data via a polynomial into a new space. This is of easy difficulty, but requires significant benchmarking to find when the scikit-learn-intelex implementation provides better performance. This project will focus on the public API and include the benchmarking results for a seamless, high-performance user experience. It combines with the other kernel projects to a medium time commitment.
Scikit-learn definition can be found at:
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.polynomial_kernel.html
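For reference, basic usage of the stock scikit-learn function that the accelerated version needs to match (array sizes are illustrative only):
```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))
Y = rng.standard_normal((500, 50))

# K[i, j] = (gamma * <X[i], Y[j]> + coef0) ** degree
K = polynomial_kernel(X, Y, degree=3, gamma=None, coef0=1)
print(K.shape)  # (1000, 500)
```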
The onedal interface can be found at:
https://github.com/uxlfoundation/scikit-learn-intelex/blob/main/onedal/primitives/kernel_functions.py#L98 | 0easy
|
Title: DOC: supported python version
Body: https://doc.xorbits.io/en/latest/getting_started/installation.html
Since we have dropped py 3.7, the doc should be updated as well. | 0easy
|
Title: Move `Order` enum to `types`
Body: | 0easy
|
Title: (CI) don't test the version number when running CI tests on merges
Body: I think the reason [this commit](https://github.com/autokey/autokey/runs/3335287095) fails is because merge commits don't have the rest of the git history, so there is no recent git tag to check the version against.
We should add a `skipif` to the pytest and find a way to check if we are testing a merge commit | 0easy
|
Title: Bug: CLI can't resolve `psycopg` import
Body: **How to reproduce**
Include source code:
```python
from sqlalchemy.ext.asyncio import create_async_engine
from faststream import FastStream
from faststream.nats import NatsBroker
broker = NatsBroker()
app = FastStream(broker)
engine = create_async_engine("postgresql+psycopg://user:pass@localhost:5432")
```
And/Or steps to reproduce the behavior:
```cmd
faststream run serve:app
```
**Screenshots**
<img width="946" alt="Снимок экрана 2024-10-29 в 19 33 07" src="https://github.com/user-attachments/assets/15d64a37-f11c-42ee-98af-3ad36c61fe2e">
**Environment**
Include the output of the `faststream -v` command to display your current project and system environment.
```txt
faststream==0.5.28
psycopg==3.2.3
``` | 0easy
|
Title: Change HTML report to default
Body: Change the HTML report to be the default. Currently it is Markdown, but HTML is way more popular and our HTML report is [beautiful](https://github.com/scanapi/scanapi/pull/157) now
Here is where we need to change it: https://github.com/scanapi/scanapi/blob/master/scanapi/settings.py#L11 | 0easy
|
Title: Ability to set a maximum number of intervals?
Body: As suggested by a community member:
> When I put it this way, the cleanest solution to my problem would be if `dcc.Interval()` supported the idea of counting a certain number of ticks and then stopping by itself, i.e. had something like a `max_intervals` parameter. I’m not sure if you have any other use-cases for that but if so, maybe it’s a feature suggestion?
Full discussion: https://community.plot.ly/t/should-i-use-an-interval-for-a-one-off-delay/11375/6
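Until such a parameter exists, a rough workaround is to toggle the existing `disabled` prop from a callback once enough ticks have fired. A sketch (component ids and the tick limit are made up, and it assumes a Dash version where `dcc.Interval` supports `disabled`):
```python
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

MAX_TICKS = 10  # hypothetical limit

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Interval(id="ticker", interval=1000, n_intervals=0),
    html.Div(id="output"),
])

@app.callback(Output("ticker", "disabled"), [Input("ticker", "n_intervals")])
def stop_after_max(n_intervals):
    # Disable the interval once it has fired MAX_TICKS times.
    return bool(n_intervals) and n_intervals >= MAX_TICKS
```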
Seems like a pretty good idea, could be useful for general delays and fits nicely into the existing component API | 0easy
|
Title: Support integer conversion with strings representing whole number floats like `'1.0'` and `'2e10'`
Body: Type conversions are a very convenient feature in Robot framework. To make them that extra bit convenient I propose an enhancement.
Currently passing any string representation of a `float` number to a keyword accepting only `int` will fail. In most cases this is justified, but there are situations where floats are convertible to `int` just fine. Examples are `"1.0"`, `"2.00"` or `"1e100"`. Note that these conversions are already accepted when passed as type `float` (i.e. `${1.0}` or `${1e100}`). Conversion for numbers for which the decimal part is non-zero should still fail. We are talking about conversion here, not type casting. | 0easy
|
Title: Update deprecated regex sequences
Body: ## Classification:
Maintenance
## Version
AutoKey version: develop 5963c7ceae668736e1b91809eff426a47ed5a507
## Summary
The CI tests are warning about several deprecated regex sequences. We should find alternatives and replace them.
The regex is one of the better-tested parts of the code in terms of unit tests, so we should be safe with what we change.
```
lib/autokey/model/key.py:122
/home/runner/work/autokey/autokey/lib/autokey/model/key.py:122: DeprecationWarning: invalid escape sequence \+
KEY_SPLIT_RE = re.compile("(<[^<>]+>\+?)")
lib/autokey/model/helpers.py:21
/home/runner/work/autokey/autokey/lib/autokey/model/helpers.py:21: DeprecationWarning: invalid escape sequence \w
DEFAULT_WORDCHAR_REGEX = '[\w]'
lib/autokey/model/abstract_abbreviation.py:143
/home/runner/work/autokey/autokey/lib/autokey/model/abstract_abbreviation.py:143: DeprecationWarning: invalid escape sequence \s
if len(stringBefore) > 0 and not re.match('(^\s)', stringBefore[-1]) and not self.triggerInside:
```
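The most likely fix (a sketch, not a reviewed patch) is switching the affected literals to raw strings, which leaves the compiled patterns unchanged while silencing the warnings:
```python
import re

# Raw string literals avoid the "invalid escape sequence" DeprecationWarning.
KEY_SPLIT_RE = re.compile(r"(<[^<>]+>\+?)")
DEFAULT_WORDCHAR_REGEX = r'[\w]'
# ...and the inline pattern in abstract_abbreviation.py becomes re.match(r'(^\s)', ...)
```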
| 0easy
|
Title: Email column type
Body: We don't have an email column type at the moment. It's because Postgres doesn't have an email column type.
However, I think it would be useful to designate a column as containing an email.
## Option 1
We can define a new column type, which basically just inherits from `Varchar`:
```python
class Email(Varchar):
pass
```
## Option 2
Or, let the user annotate a column as containing an email.
One option is using [Annotated](https://docs.python.org/3/library/typing.html#typing.Annotated), but the downside is it was only added in Python 3.9:
```python
from typing import Annotated
class MyTable(Table):
email: Annotated[Varchar, 'email'] = Varchar()
```
Or something like this instead:
```python
class MyTable(Table):
email = Varchar(validators=['email'])
```
## Benefits
The main reason it would be useful to have an email column is when using [create_pydantic_model](https://piccolo-orm.readthedocs.io/en/latest/piccolo/serialization/index.html#create-pydantic-model) to auto generate a Pydantic model, we can set the type to `EmailStr` (see [docs](https://pydantic-docs.helpmanual.io/usage/types/)).
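Roughly, the generated model could then look like this (a hand-written approximation, not actual `create_pydantic_model` output):
```python
from pydantic import BaseModel, EmailStr

class MyTableModel(BaseModel):
    email: EmailStr
```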
## Input
Any input is welcome - once we decide on an approach, adding it will be pretty easy. | 0easy
|
Title: When you clone something, the original is still highlighted instead of the clone
Body: ## Classification:
UI/Usability
## Reproducibility:
Always
## Version
AutoKey version: 0.95.10
Used GUI (Gtk, Qt, or both): GTK
Installed via: Mint's Software Manager
Linux Distribution: Mint Cinnamon
## Summary
Summary of the problem.
## Steps to Reproduce (if applicable)
1. Right-click on a phrase or script in the left sidebar
2. Click "Clone Item"
## Expected Results
The clone should be instantly highlighted so you can start doing stuff to it
## Actual Results
The original remains highlighted
## Notes
Sorry I suck at programming or else I'd try to help! | 0easy
|
Title: [Tech debt] Improve interface for GridDropOut
Body: * `unit_size_min`, `unit_size_max` => `unit_size_range`
* `shift_x`, `shift_y` => `shift`
=>
We can update transform to use new signature, keep old as working, but mark as deprecated.
----
PR could be similar to https://github.com/albumentations-team/albumentations/pull/1704 | 0easy
|
Title: When following raw URL recipe based on RAW_URI, it breaks routing in TestClient
Body: I'm building REST API with many %-encoded slashes in URI (i.e., as data, not path separator). For example, request URI `/book/baking-book/document/sweet%2Fcheescake.md` will never reach the route declared as `/book/{book}/document/{path}`.
I understand that's not Falcon's fault, but the WSGI server, which "eats" %-encoding before providing it to Falcon in the `PATH_INFO` CGI variable. Thank you for the well-explained reason in FAQ and [decoding raw URL recipe](https://falcon.readthedocs.io/en/stable/user/recipes/raw-url-path.html) based on Gunicorn's `RAW_URI` non-standard variable.
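For context, the middleware I'm using follows the recipe roughly like this (paraphrased from memory, so details may differ from the documented version; it is registered via the app's middleware list):
```python
class RawPathComponent:
    def process_request(self, req, resp):
        # RAW_URI is set by Gunicorn, REQUEST_URI by uWSGI; neither is standard WSGI.
        raw_uri = req.env.get('RAW_URI') or req.env.get('REQUEST_URI')
        if raw_uri:
            # Re-derive the path without decoding %2F, so encoded slashes survive routing.
            req.path, _, _ = raw_uri.partition('?')
```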
However, if I follow this recipe, it breaks all tests because [TestClient hard-codes `RAW_URI` to `/`](https://github.com/falconry/falcon/blob/master/falcon/testing/helpers.py#L1197), so no URI will route properly anymore. | 0easy
|
Title: [DOC] Check that all functions have a `return` and `raises` section within docstrings
Body: # Brief Description of Fix
Currently, the docstrings for some functions are lacking a `return` description and (where applicable) a `raises` description.
I would like to propose a change, such that now the docstrings for all functions within pyjanitor have a valid `return` and (where applicable) valid `raises` statements.
**Requires a look at all functions** (not just the provided examples below).
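For reference, a docstring of the desired shape might look like this (a hypothetical helper, shown only to illustrate the `returns`/`raises` sections in the Sphinx style used elsewhere in the codebase):
```python
import pandas as pd

def drop_empty_columns(df: pd.DataFrame) -> pd.DataFrame:
    """
    Drop columns that contain only missing values.

    :param df: A pandas DataFrame.
    :returns: A DataFrame with the all-NaN columns removed.
    :raises TypeError: If `df` is not a pandas DataFrame.
    """
    if not isinstance(df, pd.DataFrame):
        raise TypeError("df must be a pandas DataFrame")
    return df.dropna(axis="columns", how="all")
```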
# Relevant Context
- [Good example of complete docstring - janitor.complete](https://ericmjl.github.io/pyjanitor/reference/janitor.functions/janitor.complete.html)
- Examples for missing `returns`:
- [Link to documentation page - finance.convert_currency](https://ericmjl.github.io/pyjanitor/reference/finance.html)
- [Link to exact file to be edited - finance.py](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/finance.py)
- [Link to documentation page - functions.join_apply](https://ericmjl.github.io/pyjanitor/reference/janitor.functions/janitor.join_apply.html)
- [Link to exact file to be edited - functions.py](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/functions.py)
| 0easy
|
Title: Rename _compression to something else.
Body: This naming breaks the debugger with py3.6 and the latest lz4 library as it apparently tries to look for some class inside _compression but defaults to using our _compression module as it's in the scope. I am using a renamed version in the scope and it seems to be fine. | 0easy
|
Title: Batch Automatic Translation on Component and Project
Body: ### Describe the problem
Currently, automatic translation can only be performed on the language page. If we have many languages, then whenever we add new text to the template language we need to go to each language page and click Automatic Translation. If we had a Batch Automatic Translation button in the component tools, this would become very easy.
### Describe the solution you would like
Add Batch Automatic Translation button on component tools
### Describe alternatives you have considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_ | 0easy
|
Title: Self-Reflection
Body: Give the bot functions like /gpt ask, etc., a self-reflection feature where GPT asks itself whether its generated answer is suitable | 0easy
|
Title: [New feature] Add apply_to_images to GaussianBlur
Body: | 0easy
|
Title: Simple question about documentation
Body: Why isn't the Yahoo Finance API listed in the Data Readers [list](https://pandas-datareader.readthedocs.io/en/latest/readers/index.html), even though it is available as get_data_yahoo()? Will the get_data_yahoo() function be deprecated in future versions? | 0easy
|